Compare commits

...

14 Commits

Author SHA1 Message Date
Kit Dallege
f9bbd547cf crypto: add missing kernel-doc for anonymous union members
Document the anonymous SKCIPHER_ALG_COMMON and COMP_ALG_COMMON struct
members in skcipher_alg, scomp_alg, and acomp_alg, following the
existing pattern used by HASH_ALG_COMMON in shash_alg.

This fixes the following kernel-doc warnings:

  include/crypto/skcipher.h:166: struct member 'SKCIPHER_ALG_COMMON' not described in 'skcipher_alg'
  include/crypto/internal/scompress.h:39: struct member 'COMP_ALG_COMMON' not described in 'scomp_alg'
  include/crypto/internal/acompress.h:55: struct member 'COMP_ALG_COMMON' not described in 'acomp_alg'

Signed-off-by: Kit Dallege <xaum.io@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2026-03-22 11:17:59 +09:00
Eric Biggers
7c622c4fa8 crypto: simd - Remove unused skcipher support
Remove the skcipher algorithm support from crypto/simd.c.  It is no
longer used, and it is unlikely to gain any new user in the future,
given the performance issues with this code.

Signed-off-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2026-03-22 11:17:59 +09:00
Thorsten Blum
bab1adf3b8 crypto: atmel-sha204a - Fix potential UAF and memory leak in remove path
Unregister the hwrng to prevent new ->read() calls and flush the Atmel
I2C workqueue before teardown to prevent a potential UAF if a queued
callback runs while the device is being removed.

Drop the early return to ensure sysfs entries are removed and
->hwrng.priv is freed, preventing a memory leak.

Fixes: da001fb651 ("crypto: atmel-i2c - add support for SHA204A random number generator")
Cc: stable@vger.kernel.org
Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2026-03-22 11:17:59 +09:00
Daniel Jordan
c8c4a2972f padata: Put CPU offline callback in ONLINE section to allow failure
syzbot reported the following warning:

    DEAD callback error for CPU1
    WARNING: kernel/cpu.c:1463 at _cpu_down+0x759/0x1020 kernel/cpu.c:1463, CPU#0: syz.0.1960/14614

at commit 4ae12d8bd9a8 ("Merge tag 'kbuild-fixes-7.0-2' of git://git.kernel.org/pub/scm/linux/kernel/git/kbuild/linux")
which tglx traced to padata_cpu_dead() given it's the only
sub-CPUHP_TEARDOWN_CPU callback that returns an error.

Failure isn't allowed in hotplug states before CPUHP_TEARDOWN_CPU,
so move the CPU offline callback to the ONLINE section, where failure
is possible.

Fixes: 894c9ef978 ("padata: validate cpumask without removed CPU during offline")
Reported-by: syzbot+123e1b70473ce213f3af@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/all/69af0a05.050a0220.310d8.002f.GAE@google.com/
Debugged-by: Thomas Gleixner <tglx@kernel.org>
Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2026-03-22 11:17:59 +09:00
Sun Chaobo
7fc31dd864 crypto: Fix several spelling mistakes in comments
Fix several typos in comments and messages.
No functional change.

Signed-off-by: Sun Chaobo <suncoding913@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2026-03-22 11:17:59 +09:00
Zongyu Wu
b44c7129f1 crypto: hisilicon - add device load query functionality to debugfs
The accelerator device supports usage statistics. This patch enables
querying the accelerator's usage through the "dev_usage" file.
The returned number is expressed as a percentage.

Signed-off-by: Zongyu Wu <wuzongyu1@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2026-03-22 11:17:59 +09:00
Randy Dunlap
3414c80977 hwrng: core - avoid kernel-doc warnings
Mark internal fields as "private:" so that kernel-doc comments
are not needed for them, eliminating kernel-doc warnings:

Warning: include/linux/hw_random.h:54 struct member 'list' not described
 in 'hwrng'
Warning: include/linux/hw_random.h:54 struct member 'ref' not described
 in 'hwrng'
Warning: include/linux/hw_random.h:54 struct member 'cleanup_work' not
 described in 'hwrng'
Warning: include/linux/hw_random.h:54 struct member 'cleanup_done' not
 described in 'hwrng'
Warning: include/linux/hw_random.h:54 struct member 'dying' not described
 in 'hwrng'

Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2026-03-22 11:17:59 +09:00
Thorsten Blum
344e6a4f7f crypto: nx - fix context leak in nx842_crypto_free_ctx
Since the scomp conversion, nx842_crypto_alloc_ctx() allocates the
context separately, but nx842_crypto_free_ctx() never releases it. Add
the missing kfree(ctx) to nx842_crypto_free_ctx(), and reuse
nx842_crypto_free_ctx() in the allocation error path.

Fixes: 980b5705f4 ("crypto: nx - Migrate to scomp API")
Cc: stable@vger.kernel.org
Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2026-03-22 11:17:59 +09:00
Thorsten Blum
adb3faf2db crypto: nx - fix bounce buffer leaks in nx842_crypto_{alloc,free}_ctx
The bounce buffers are allocated with __get_free_pages() using
BOUNCE_BUFFER_ORDER (order 2 = 4 pages), but both the allocation error
path and nx842_crypto_free_ctx() release the buffers with free_page().
Use free_pages() with the matching order instead.

Fixes: ed70b479c2 ("crypto: nx - add hardware 842 crypto comp alg")
Cc: stable@vger.kernel.org
Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2026-03-22 11:17:59 +09:00
Thorsten Blum
57a13941c0 crypto: atmel-aes - guard unregister on error in atmel_aes_register_algs
Check 'has_xts' and 'has_gcm' before unregistering the XTS and GCM
algorithms when XTS or authenc registration fails; unregistering an
algorithm that was never registered would trigger a WARN in
crypto_unregister_alg().
Currently, with the capabilities defined in atmel_aes_get_cap(), this
bug cannot happen because all devices that support XTS and authenc also
support GCM, but the error handling should still be correct regardless
of hardware capabilities.

Fixes: d52db5188a ("crypto: atmel-aes - add support to the XTS mode")
Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2026-03-22 11:17:58 +09:00
George Abraham P
464da0bf19 crypto: qat - add wireless mode support for QAT GEN6
Add wireless mode support for QAT GEN6 devices.

When the WCP_WAT fuse bit is clear, the device operates in wireless
cipher mode (wcy_mode). In this mode all accelerator engines load the
wireless firmware, and service configuration via the 'cfg_services'
sysfs attribute is restricted to 'sym' only.

The get_accel_cap() function is extended to report wireless-specific
capabilities (ZUC, ZUC-256, 5G, extended algorithm chaining) gated by
their respective slice-disable fuse bits. The set_ssm_wdtimer() function
is updated to configure WCP (wireless cipher) and WAT (wireless
authentication) watchdog timers. The adf_gen6_cfg_dev_init() function is
updated to use adf_6xxx_is_wcy() to enforce sym-only service selection
for WCY devices during initialization.

Co-developed-by: Aviraj Cj <aviraj.cj@intel.com>
Signed-off-by: Aviraj Cj <aviraj.cj@intel.com>
Signed-off-by: George Abraham P <george.abraham.p@intel.com>
Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2026-03-22 11:17:58 +09:00
Thorsten Blum
3fcfff4ed3 crypto: atmel-aes - Fix 3-page memory leak in atmel_aes_buff_cleanup
atmel_aes_buff_init() allocates 4 pages using __get_free_pages() with
ATMEL_AES_BUFFER_ORDER, but atmel_aes_buff_cleanup() frees only the
first page using free_page(), leaking the remaining 3 pages. Use
free_pages() with ATMEL_AES_BUFFER_ORDER to fix the memory leak.

Fixes: bbe628ed89 ("crypto: atmel-aes - improve performances of data transfer")
Cc: stable@vger.kernel.org
Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2026-03-22 11:17:58 +09:00
Herbert Xu
2aeec9af77 crypto: tegra - Disable softirqs before finalizing request
Softirqs must be disabled when calling the finalization function on
a request.

Reported-by: Guangwu Zhang <guazhang@redhat.com>
Fixes: 0880bb3b00 ("crypto: tegra - Add Tegra Security Engine driver")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2026-03-22 11:17:58 +09:00
Thorsten Blum
326118443e crypto: artpec6 - use memcpy_and_pad to simplify prepare_hash
Use memcpy_and_pad() instead of memcpy() followed by memset() to
simplify artpec6_crypto_prepare_hash().

Also fix a duplicate word in a comment and remove a now-redundant one.

Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2026-03-22 11:17:58 +09:00
35 changed files with 377 additions and 356 deletions


@@ -50,6 +50,13 @@ Description: Dump debug registers from the QM.
Available for PF and VF in host. VF in guest currently only
has one debug register.
What: /sys/kernel/debug/hisi_hpre/<bdf>/dev_usage
Date: Mar 2026
Contact: linux-crypto@vger.kernel.org
Description: Query the real-time bandwidth usage of the device.
Returns the bandwidth usage of each channel on the device.
The returned number is a percentage.
What: /sys/kernel/debug/hisi_hpre/<bdf>/qm/current_q
Date: Sep 2019
Contact: linux-crypto@vger.kernel.org


@@ -24,6 +24,13 @@ Description: The <bdf> is related the function for PF and VF.
1/1000~1000/1000 of total QoS. The driver reading alg_qos to
get related QoS in the host and VM, Such as "cat alg_qos".
What: /sys/kernel/debug/hisi_sec2/<bdf>/dev_usage
Date: Mar 2026
Contact: linux-crypto@vger.kernel.org
Description: Query the real-time bandwidth usage of the device.
Returns the bandwidth usage of each channel on the device.
The returned number is a percentage.
What: /sys/kernel/debug/hisi_sec2/<bdf>/qm/qm_regs
Date: Oct 2019
Contact: linux-crypto@vger.kernel.org


@@ -36,6 +36,13 @@ Description: The <bdf> is related the function for PF and VF.
1/1000~1000/1000 of total QoS. The driver reading alg_qos to
get related QoS in the host and VM, Such as "cat alg_qos".
What: /sys/kernel/debug/hisi_zip/<bdf>/dev_usage
Date: Mar 2026
Contact: linux-crypto@vger.kernel.org
Description: Query the real-time bandwidth usage of the device.
Returns the bandwidth usage of each channel on the device.
The returned number is a percentage.
What: /sys/kernel/debug/hisi_zip/<bdf>/qm/regs
Date: Nov 2018
Contact: linux-crypto@vger.kernel.org


@@ -1780,7 +1780,7 @@ static inline int __init drbg_healthcheck_sanity(void)
max_addtllen = drbg_max_addtl(drbg);
max_request_bytes = drbg_max_request_bytes(drbg);
drbg_string_fill(&addtl, buf, max_addtllen + 1);
/* overflow addtllen with additonal info string */
/* overflow addtllen with additional info string */
len = drbg_generate(drbg, buf, OUTBUFLEN, &addtl);
BUG_ON(0 < len);
/* overflow max_bits */


@@ -134,7 +134,7 @@ static int lrw_next_index(u32 *counter)
/*
* We compute the tweak masks twice (both before and after the ECB encryption or
* decryption) to avoid having to allocate a temporary buffer and/or make
* mutliple calls to the 'ecb(..)' instance, which usually would be slower than
* multiple calls to the 'ecb(..)' instance, which usually would be slower than
* just doing the lrw_next_index() calls again.
*/
static int lrw_xor_tweak(struct skcipher_request *req, bool second_pass)


@@ -13,11 +13,11 @@
/*
* Shared crypto SIMD helpers. These functions dynamically create and register
* an skcipher or AEAD algorithm that wraps another, internal algorithm. The
* wrapper ensures that the internal algorithm is only executed in a context
* where SIMD instructions are usable, i.e. where may_use_simd() returns true.
* If SIMD is already usable, the wrapper directly calls the internal algorithm.
* Otherwise it defers execution to a workqueue via cryptd.
* an AEAD algorithm that wraps another, internal algorithm. The wrapper
* ensures that the internal algorithm is only executed in a context where SIMD
* instructions are usable, i.e. where may_use_simd() returns true. If SIMD is
* already usable, the wrapper directly calls the internal algorithm. Otherwise
* it defers execution to a workqueue via cryptd.
*
* This is an alternative to the internal algorithm implementing a fallback for
* the !may_use_simd() case itself.
@@ -30,236 +30,11 @@
#include <crypto/cryptd.h>
#include <crypto/internal/aead.h>
#include <crypto/internal/simd.h>
#include <crypto/internal/skcipher.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/preempt.h>
#include <asm/simd.h>
/* skcipher support */
struct simd_skcipher_alg {
const char *ialg_name;
struct skcipher_alg alg;
};
struct simd_skcipher_ctx {
struct cryptd_skcipher *cryptd_tfm;
};
static int simd_skcipher_setkey(struct crypto_skcipher *tfm, const u8 *key,
unsigned int key_len)
{
struct simd_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm);
struct crypto_skcipher *child = &ctx->cryptd_tfm->base;
crypto_skcipher_clear_flags(child, CRYPTO_TFM_REQ_MASK);
crypto_skcipher_set_flags(child, crypto_skcipher_get_flags(tfm) &
CRYPTO_TFM_REQ_MASK);
return crypto_skcipher_setkey(child, key, key_len);
}
static int simd_skcipher_encrypt(struct skcipher_request *req)
{
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
struct simd_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm);
struct skcipher_request *subreq;
struct crypto_skcipher *child;
subreq = skcipher_request_ctx(req);
*subreq = *req;
if (!crypto_simd_usable() ||
(in_atomic() && cryptd_skcipher_queued(ctx->cryptd_tfm)))
child = &ctx->cryptd_tfm->base;
else
child = cryptd_skcipher_child(ctx->cryptd_tfm);
skcipher_request_set_tfm(subreq, child);
return crypto_skcipher_encrypt(subreq);
}
static int simd_skcipher_decrypt(struct skcipher_request *req)
{
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
struct simd_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm);
struct skcipher_request *subreq;
struct crypto_skcipher *child;
subreq = skcipher_request_ctx(req);
*subreq = *req;
if (!crypto_simd_usable() ||
(in_atomic() && cryptd_skcipher_queued(ctx->cryptd_tfm)))
child = &ctx->cryptd_tfm->base;
else
child = cryptd_skcipher_child(ctx->cryptd_tfm);
skcipher_request_set_tfm(subreq, child);
return crypto_skcipher_decrypt(subreq);
}
static void simd_skcipher_exit(struct crypto_skcipher *tfm)
{
struct simd_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm);
cryptd_free_skcipher(ctx->cryptd_tfm);
}
static int simd_skcipher_init(struct crypto_skcipher *tfm)
{
struct simd_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm);
struct cryptd_skcipher *cryptd_tfm;
struct simd_skcipher_alg *salg;
struct skcipher_alg *alg;
unsigned reqsize;
alg = crypto_skcipher_alg(tfm);
salg = container_of(alg, struct simd_skcipher_alg, alg);
cryptd_tfm = cryptd_alloc_skcipher(salg->ialg_name,
CRYPTO_ALG_INTERNAL,
CRYPTO_ALG_INTERNAL);
if (IS_ERR(cryptd_tfm))
return PTR_ERR(cryptd_tfm);
ctx->cryptd_tfm = cryptd_tfm;
reqsize = crypto_skcipher_reqsize(cryptd_skcipher_child(cryptd_tfm));
reqsize = max(reqsize, crypto_skcipher_reqsize(&cryptd_tfm->base));
reqsize += sizeof(struct skcipher_request);
crypto_skcipher_set_reqsize(tfm, reqsize);
return 0;
}
struct simd_skcipher_alg *simd_skcipher_create_compat(struct skcipher_alg *ialg,
const char *algname,
const char *drvname,
const char *basename)
{
struct simd_skcipher_alg *salg;
struct skcipher_alg *alg;
int err;
salg = kzalloc_obj(*salg);
if (!salg) {
salg = ERR_PTR(-ENOMEM);
goto out;
}
salg->ialg_name = basename;
alg = &salg->alg;
err = -ENAMETOOLONG;
if (snprintf(alg->base.cra_name, CRYPTO_MAX_ALG_NAME, "%s", algname) >=
CRYPTO_MAX_ALG_NAME)
goto out_free_salg;
if (snprintf(alg->base.cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s",
drvname) >= CRYPTO_MAX_ALG_NAME)
goto out_free_salg;
alg->base.cra_flags = CRYPTO_ALG_ASYNC |
(ialg->base.cra_flags & CRYPTO_ALG_INHERITED_FLAGS);
alg->base.cra_priority = ialg->base.cra_priority;
alg->base.cra_blocksize = ialg->base.cra_blocksize;
alg->base.cra_alignmask = ialg->base.cra_alignmask;
alg->base.cra_module = ialg->base.cra_module;
alg->base.cra_ctxsize = sizeof(struct simd_skcipher_ctx);
alg->ivsize = ialg->ivsize;
alg->chunksize = ialg->chunksize;
alg->min_keysize = ialg->min_keysize;
alg->max_keysize = ialg->max_keysize;
alg->init = simd_skcipher_init;
alg->exit = simd_skcipher_exit;
alg->setkey = simd_skcipher_setkey;
alg->encrypt = simd_skcipher_encrypt;
alg->decrypt = simd_skcipher_decrypt;
err = crypto_register_skcipher(alg);
if (err)
goto out_free_salg;
out:
return salg;
out_free_salg:
kfree(salg);
salg = ERR_PTR(err);
goto out;
}
EXPORT_SYMBOL_GPL(simd_skcipher_create_compat);
void simd_skcipher_free(struct simd_skcipher_alg *salg)
{
crypto_unregister_skcipher(&salg->alg);
kfree(salg);
}
EXPORT_SYMBOL_GPL(simd_skcipher_free);
int simd_register_skciphers_compat(struct skcipher_alg *algs, int count,
struct simd_skcipher_alg **simd_algs)
{
int err;
int i;
const char *algname;
const char *drvname;
const char *basename;
struct simd_skcipher_alg *simd;
for (i = 0; i < count; i++) {
if (WARN_ON(strncmp(algs[i].base.cra_name, "__", 2) ||
strncmp(algs[i].base.cra_driver_name, "__", 2)))
return -EINVAL;
}
err = crypto_register_skciphers(algs, count);
if (err)
return err;
for (i = 0; i < count; i++) {
algname = algs[i].base.cra_name + 2;
drvname = algs[i].base.cra_driver_name + 2;
basename = algs[i].base.cra_driver_name;
simd = simd_skcipher_create_compat(algs + i, algname, drvname, basename);
err = PTR_ERR(simd);
if (IS_ERR(simd))
goto err_unregister;
simd_algs[i] = simd;
}
return 0;
err_unregister:
simd_unregister_skciphers(algs, count, simd_algs);
return err;
}
EXPORT_SYMBOL_GPL(simd_register_skciphers_compat);
void simd_unregister_skciphers(struct skcipher_alg *algs, int count,
struct simd_skcipher_alg **simd_algs)
{
int i;
crypto_unregister_skciphers(algs, count);
for (i = 0; i < count; i++) {
if (simd_algs[i]) {
simd_skcipher_free(simd_algs[i]);
simd_algs[i] = NULL;
}
}
}
EXPORT_SYMBOL_GPL(simd_unregister_skciphers);
/* AEAD support */
struct simd_aead_alg {
const char *ialg_name;
struct aead_alg alg;


@@ -2828,7 +2828,7 @@ static int __init tcrypt_mod_init(void)
pr_debug("all tests passed\n");
}
/* We intentionaly return -EAGAIN to prevent keeping the module,
/* We intentionally return -EAGAIN to prevent keeping the module,
* unless we're running in fips mode. It does all its work from
* init() and doesn't offer any runtime functionality, but in
* the fips case, checking for a successful load is helpful.


@@ -2,7 +2,7 @@
/*
* Cryptographic API.
*
* TEA, XTEA, and XETA crypto alogrithms
* TEA, XTEA, and XETA crypto algorithms
*
* The TEA and Xtended TEA algorithms were developed by David Wheeler
* and Roger Needham at the Computer Laboratory of Cambridge University.


@@ -76,7 +76,7 @@ static int xts_setkey(struct crypto_skcipher *parent, const u8 *key,
/*
* We compute the tweak masks twice (both before and after the ECB encryption or
* decryption) to avoid having to allocate a temporary buffer and/or make
* mutliple calls to the 'ecb(..)' instance, which usually would be slower than
* multiple calls to the 'ecb(..)' instance, which usually would be slower than
* just doing the gf128mul_x_ble() calls again.
*/
static int xts_xor_tweak(struct skcipher_request *req, bool second_pass,


@@ -2131,7 +2131,7 @@ static int atmel_aes_buff_init(struct atmel_aes_dev *dd)
static void atmel_aes_buff_cleanup(struct atmel_aes_dev *dd)
{
free_page((unsigned long)dd->buf);
free_pages((unsigned long)dd->buf, ATMEL_AES_BUFFER_ORDER);
}
static int atmel_aes_dma_init(struct atmel_aes_dev *dd)
@@ -2270,10 +2270,12 @@ static int atmel_aes_register_algs(struct atmel_aes_dev *dd)
/* i = ARRAY_SIZE(aes_authenc_algs); */
err_aes_authenc_alg:
crypto_unregister_aeads(aes_authenc_algs, i);
crypto_unregister_skcipher(&aes_xts_alg);
if (dd->caps.has_xts)
crypto_unregister_skcipher(&aes_xts_alg);
#endif
err_aes_xts_alg:
crypto_unregister_aead(&aes_gcm_alg);
if (dd->caps.has_gcm)
crypto_unregister_aead(&aes_gcm_alg);
err_aes_gcm_alg:
i = ARRAY_SIZE(aes_algs);
err_aes_algs:


@@ -194,10 +194,8 @@ static void atmel_sha204a_remove(struct i2c_client *client)
{
struct atmel_i2c_client_priv *i2c_priv = i2c_get_clientdata(client);
if (atomic_read(&i2c_priv->tfm_count)) {
dev_emerg(&client->dev, "Device is busy, will remove it anyhow\n");
return;
}
devm_hwrng_unregister(&client->dev, &i2c_priv->hwrng);
atmel_i2c_flush_queue();
sysfs_remove_group(&client->dev.kobj, &atmel_sha204a_groups);


@@ -1323,7 +1323,7 @@ static int artpec6_crypto_prepare_hash(struct ahash_request *areq)
artpec6_crypto_init_dma_operation(common);
/* Upload HMAC key, must be first the first packet */
/* Upload HMAC key, it must be the first packet */
if (req_ctx->hash_flags & HASH_FLAG_HMAC) {
if (variant == ARTPEC6_CRYPTO) {
req_ctx->key_md = FIELD_PREP(A6_CRY_MD_OPER,
@@ -1333,11 +1333,8 @@ static int artpec6_crypto_prepare_hash(struct ahash_request *areq)
a7_regk_crypto_dlkey);
}
/* Copy and pad up the key */
memcpy(req_ctx->key_buffer, ctx->hmac_key,
ctx->hmac_key_length);
memset(req_ctx->key_buffer + ctx->hmac_key_length, 0,
blocksize - ctx->hmac_key_length);
memcpy_and_pad(req_ctx->key_buffer, blocksize, ctx->hmac_key,
ctx->hmac_key_length, 0);
error = artpec6_crypto_setup_out_descr(common,
(void *)&req_ctx->key_md,


@@ -1040,6 +1040,57 @@ void hisi_qm_show_last_dfx_regs(struct hisi_qm *qm)
}
}
static int qm_usage_percent(struct hisi_qm *qm, int chan_num)
{
u32 val, used_bw, total_bw;
val = readl(qm->io_base + QM_CHANNEL_USAGE_OFFSET +
chan_num * QM_CHANNEL_ADDR_INTRVL);
used_bw = lower_16_bits(val);
total_bw = upper_16_bits(val);
if (!total_bw)
return -EIO;
if (total_bw <= used_bw)
return QM_MAX_DEV_USAGE;
return (used_bw * QM_DEV_USAGE_RATE) / total_bw;
}
static int qm_usage_show(struct seq_file *s, void *unused)
{
struct hisi_qm *qm = s->private;
bool dev_is_active = true;
int i, ret;
/* If the device is suspended, usage is 0. */
ret = hisi_qm_get_dfx_access(qm);
if (ret == -EAGAIN) {
dev_is_active = false;
} else if (ret) {
dev_err(&qm->pdev->dev, "failed to get dfx access for usage_show!\n");
return ret;
}
ret = 0;
for (i = 0; i < qm->channel_data.channel_num; i++) {
if (dev_is_active) {
ret = qm_usage_percent(qm, i);
if (ret < 0) {
hisi_qm_put_dfx_access(qm);
return ret;
}
}
seq_printf(s, "%s: %d\n", qm->channel_data.channel_name[i], ret);
}
if (dev_is_active)
hisi_qm_put_dfx_access(qm);
return 0;
}
DEFINE_SHOW_ATTRIBUTE(qm_usage);
static int qm_diff_regs_show(struct seq_file *s, void *unused)
{
struct hisi_qm *qm = s->private;
@@ -1159,6 +1210,9 @@ void hisi_qm_debug_init(struct hisi_qm *qm)
debugfs_create_file("diff_regs", 0444, qm->debug.qm_d,
qm, &qm_diff_regs_fops);
if (qm->ver >= QM_HW_V5)
debugfs_create_file("dev_usage", 0444, qm->debug.debug_root, qm, &qm_usage_fops);
debugfs_create_file("regs", 0444, qm->debug.qm_d, qm, &qm_regs_fops);
debugfs_create_file("cmd", 0600, qm->debug.qm_d, qm, &qm_cmd_fops);


@@ -121,6 +121,8 @@
#define HPRE_DFX_COMMON2_LEN 0xE
#define HPRE_DFX_CORE_LEN 0x43
#define HPRE_MAX_CHANNEL_NUM 2
static const char hpre_name[] = "hisi_hpre";
static struct dentry *hpre_debugfs_root;
static const struct pci_device_id hpre_dev_ids[] = {
@@ -370,6 +372,11 @@ static struct dfx_diff_registers hpre_diff_regs[] = {
},
};
static const char *hpre_channel_name[HPRE_MAX_CHANNEL_NUM] = {
"RSA",
"ECC",
};
static const struct hisi_qm_err_ini hpre_err_ini;
bool hpre_check_alg_support(struct hisi_qm *qm, u32 alg)
@@ -1234,6 +1241,16 @@ static int hpre_pre_store_cap_reg(struct hisi_qm *qm)
return 0;
}
static void hpre_set_channels(struct hisi_qm *qm)
{
struct qm_channel *channel_data = &qm->channel_data;
int i;
channel_data->channel_num = HPRE_MAX_CHANNEL_NUM;
for (i = 0; i < HPRE_MAX_CHANNEL_NUM; i++)
channel_data->channel_name[i] = hpre_channel_name[i];
}
static int hpre_qm_init(struct hisi_qm *qm, struct pci_dev *pdev)
{
u64 alg_msk;
@@ -1267,6 +1284,7 @@ static int hpre_qm_init(struct hisi_qm *qm, struct pci_dev *pdev)
return ret;
}
hpre_set_channels(qm);
/* Fetch and save the value of capability registers */
ret = hpre_pre_store_cap_reg(qm);
if (ret) {


@@ -133,6 +133,8 @@
#define SEC_AEAD_BITMAP (GENMASK_ULL(7, 6) | GENMASK_ULL(18, 17) | \
GENMASK_ULL(45, 43))
#define SEC_MAX_CHANNEL_NUM 1
struct sec_hw_error {
u32 int_msk;
const char *msg;
@@ -1288,6 +1290,14 @@ static int sec_pre_store_cap_reg(struct hisi_qm *qm)
return 0;
}
static void sec_set_channels(struct hisi_qm *qm)
{
struct qm_channel *channel_data = &qm->channel_data;
channel_data->channel_num = SEC_MAX_CHANNEL_NUM;
channel_data->channel_name[0] = "SEC";
}
static int sec_qm_init(struct hisi_qm *qm, struct pci_dev *pdev)
{
u64 alg_msk;
@@ -1325,6 +1335,7 @@ static int sec_qm_init(struct hisi_qm *qm, struct pci_dev *pdev)
return ret;
}
sec_set_channels(qm);
/* Fetch and save the value of capability registers */
ret = sec_pre_store_cap_reg(qm);
if (ret) {


@@ -122,6 +122,8 @@
#define HZIP_LIT_LEN_EN_OFFSET 0x301204
#define HZIP_LIT_LEN_EN_EN BIT(4)
#define HZIP_MAX_CHANNEL_NUM 3
enum {
HZIP_HIGH_COMP_RATE,
HZIP_HIGH_COMP_PERF,
@@ -359,6 +361,12 @@ static struct dfx_diff_registers hzip_diff_regs[] = {
},
};
static const char *zip_channel_name[HZIP_MAX_CHANNEL_NUM] = {
"COMPRESS",
"DECOMPRESS",
"DAE"
};
static int hzip_diff_regs_show(struct seq_file *s, void *unused)
{
struct hisi_qm *qm = s->private;
@@ -1400,6 +1408,16 @@ static int zip_pre_store_cap_reg(struct hisi_qm *qm)
return 0;
}
static void zip_set_channels(struct hisi_qm *qm)
{
struct qm_channel *channel_data = &qm->channel_data;
int i;
channel_data->channel_num = HZIP_MAX_CHANNEL_NUM;
for (i = 0; i < HZIP_MAX_CHANNEL_NUM; i++)
channel_data->channel_name[i] = zip_channel_name[i];
}
static int hisi_zip_qm_init(struct hisi_qm *qm, struct pci_dev *pdev)
{
u64 alg_msk;
@@ -1438,6 +1456,7 @@ static int hisi_zip_qm_init(struct hisi_qm *qm, struct pci_dev *pdev)
return ret;
}
zip_set_channels(qm);
/* Fetch and save the value of capability registers */
ret = zip_pre_store_cap_reg(qm);
if (ret) {


@@ -82,10 +82,15 @@ static const unsigned long thrd_mask_dcpr[ADF_6XXX_MAX_ACCELENGINES] = {
0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x00
};
static const unsigned long thrd_mask_wcy[ADF_6XXX_MAX_ACCELENGINES] = {
0x7F, 0x7F, 0x7F, 0x7F, 0x7F, 0x7F, 0x7F, 0x7F, 0x00
};
static const char *const adf_6xxx_fw_objs[] = {
[ADF_FW_CY_OBJ] = ADF_6XXX_CY_OBJ,
[ADF_FW_DC_OBJ] = ADF_6XXX_DC_OBJ,
[ADF_FW_ADMIN_OBJ] = ADF_6XXX_ADMIN_OBJ,
[ADF_FW_WCY_OBJ] = ADF_6XXX_WCY_OBJ,
};
static const struct adf_fw_config adf_default_fw_config[] = {
@@ -94,6 +99,12 @@ static const struct adf_fw_config adf_default_fw_config[] = {
{ ADF_AE_GROUP_2, ADF_FW_ADMIN_OBJ },
};
static const struct adf_fw_config adf_wcy_fw_config[] = {
{ ADF_AE_GROUP_1, ADF_FW_WCY_OBJ },
{ ADF_AE_GROUP_0, ADF_FW_WCY_OBJ },
{ ADF_AE_GROUP_2, ADF_FW_ADMIN_OBJ },
};
static struct adf_hw_device_class adf_6xxx_class = {
.name = ADF_6XXX_DEVICE_NAME,
.type = DEV_6XXX,
@@ -118,6 +129,12 @@ static bool services_supported(unsigned long mask)
}
}
static bool wcy_services_supported(unsigned long mask)
{
/* The wireless SKU supports only the symmetric crypto service */
return mask == BIT(SVC_SYM);
}
static int get_service(unsigned long *mask)
{
if (test_and_clear_bit(SVC_ASYM, mask))
@@ -155,8 +172,12 @@ static enum adf_cfg_service_type get_ring_type(unsigned int service)
}
}
static const unsigned long *get_thrd_mask(unsigned int service)
static const unsigned long *get_thrd_mask(struct adf_accel_dev *accel_dev,
unsigned int service)
{
if (adf_6xxx_is_wcy(GET_HW_DATA(accel_dev)))
return (service == SVC_SYM) ? thrd_mask_wcy : NULL;
switch (service) {
case SVC_SYM:
return thrd_mask_sym;
@@ -194,7 +215,7 @@ static int get_rp_config(struct adf_accel_dev *accel_dev, struct adf_ring_config
return service;
rp_config[i].ring_type = get_ring_type(service);
rp_config[i].thrd_mask = get_thrd_mask(service);
rp_config[i].thrd_mask = get_thrd_mask(accel_dev, service);
/*
* If there is only one service enabled, use all ring pairs for
@@ -386,6 +407,8 @@ static void set_ssm_wdtimer(struct adf_accel_dev *accel_dev)
ADF_CSR_WR64_LO_HI(addr, ADF_SSMWDTCNVL_OFFSET, ADF_SSMWDTCNVH_OFFSET, val);
ADF_CSR_WR64_LO_HI(addr, ADF_SSMWDTUCSL_OFFSET, ADF_SSMWDTUCSH_OFFSET, val);
ADF_CSR_WR64_LO_HI(addr, ADF_SSMWDTDCPRL_OFFSET, ADF_SSMWDTDCPRH_OFFSET, val);
ADF_CSR_WR64_LO_HI(addr, ADF_SSMWDTWCPL_OFFSET, ADF_SSMWDTWCPH_OFFSET, val);
ADF_CSR_WR64_LO_HI(addr, ADF_SSMWDTWATL_OFFSET, ADF_SSMWDTWATH_OFFSET, val);
/* Enable watchdog timer for pke */
ADF_CSR_WR64_LO_HI(addr, ADF_SSMWDTPKEL_OFFSET, ADF_SSMWDTPKEH_OFFSET, val_pke);
@@ -631,6 +654,12 @@ static int adf_gen6_set_vc(struct adf_accel_dev *accel_dev)
return set_vc_config(accel_dev);
}
static const struct adf_fw_config *get_fw_config(struct adf_accel_dev *accel_dev)
{
return adf_6xxx_is_wcy(GET_HW_DATA(accel_dev)) ? adf_wcy_fw_config :
adf_default_fw_config;
}
static u32 get_ae_mask(struct adf_hw_device_data *self)
{
unsigned long fuses = self->fuses[ADF_FUSECTL4];
@@ -653,6 +682,38 @@ static u32 get_ae_mask(struct adf_hw_device_data *self)
return mask;
}
static u32 get_accel_cap_wcy(struct adf_accel_dev *accel_dev)
{
u32 capabilities_sym;
u32 fuse;
fuse = GET_HW_DATA(accel_dev)->fuses[ADF_FUSECTL1];
capabilities_sym = ICP_ACCEL_CAPABILITIES_CRYPTO_SYMMETRIC |
ICP_ACCEL_CAPABILITIES_CIPHER |
ICP_ACCEL_CAPABILITIES_AUTHENTICATION |
ICP_ACCEL_CAPABILITIES_WIRELESS_CRYPTO_EXT |
ICP_ACCEL_CAPABILITIES_5G |
ICP_ACCEL_CAPABILITIES_ZUC |
ICP_ACCEL_CAPABILITIES_ZUC_256 |
ICP_ACCEL_CAPABILITIES_EXT_ALGCHAIN;
if (fuse & ICP_ACCEL_GEN6_MASK_EIA3_SLICE) {
capabilities_sym &= ~ICP_ACCEL_CAPABILITIES_ZUC;
capabilities_sym &= ~ICP_ACCEL_CAPABILITIES_ZUC_256;
}
if (fuse & ICP_ACCEL_GEN6_MASK_ZUC_256_SLICE)
capabilities_sym &= ~ICP_ACCEL_CAPABILITIES_ZUC_256;
if (fuse & ICP_ACCEL_GEN6_MASK_5G_SLICE)
capabilities_sym &= ~ICP_ACCEL_CAPABILITIES_5G;
if (adf_get_service_enabled(accel_dev) == SVC_SYM)
return capabilities_sym;
return 0;
}
static u32 get_accel_cap(struct adf_accel_dev *accel_dev)
{
u32 capabilities_sym, capabilities_asym;
@@ -661,6 +722,9 @@ static u32 get_accel_cap(struct adf_accel_dev *accel_dev)
u32 caps = 0;
u32 fusectl1;
if (adf_6xxx_is_wcy(GET_HW_DATA(accel_dev)))
return get_accel_cap_wcy(accel_dev);
fusectl1 = GET_HW_DATA(accel_dev)->fuses[ADF_FUSECTL1];
/* Read accelerator capabilities mask */
@@ -733,15 +797,19 @@ static u32 get_accel_cap(struct adf_accel_dev *accel_dev)
static u32 uof_get_num_objs(struct adf_accel_dev *accel_dev)
{
return ARRAY_SIZE(adf_default_fw_config);
return adf_6xxx_is_wcy(GET_HW_DATA(accel_dev)) ?
ARRAY_SIZE(adf_wcy_fw_config) :
ARRAY_SIZE(adf_default_fw_config);
}
static const char *uof_get_name(struct adf_accel_dev *accel_dev, u32 obj_num)
{
int num_fw_objs = ARRAY_SIZE(adf_6xxx_fw_objs);
const struct adf_fw_config *fw_config;
int id;
id = adf_default_fw_config[obj_num].obj;
fw_config = get_fw_config(accel_dev);
id = fw_config[obj_num].obj;
if (id >= num_fw_objs)
return NULL;
@@ -755,15 +823,22 @@ static const char *uof_get_name_6xxx(struct adf_accel_dev *accel_dev, u32 obj_nu
static int uof_get_obj_type(struct adf_accel_dev *accel_dev, u32 obj_num)
{
const struct adf_fw_config *fw_config;
if (obj_num >= uof_get_num_objs(accel_dev))
return -EINVAL;
return adf_default_fw_config[obj_num].obj;
fw_config = get_fw_config(accel_dev);
return fw_config[obj_num].obj;
}
static u32 uof_get_ae_mask(struct adf_accel_dev *accel_dev, u32 obj_num)
{
return adf_default_fw_config[obj_num].ae_mask;
const struct adf_fw_config *fw_config;
fw_config = get_fw_config(accel_dev);
return fw_config[obj_num].ae_mask;
}
static const u32 *adf_get_arbiter_mapping(struct adf_accel_dev *accel_dev)
@@ -873,6 +948,14 @@ static void adf_gen6_init_rl_data(struct adf_rl_hw_data *rl_data)
init_num_svc_aes(rl_data);
}
static void adf_gen6_init_services_supported(struct adf_hw_device_data *hw_data)
{
if (adf_6xxx_is_wcy(hw_data))
hw_data->services_supported = wcy_services_supported;
else
hw_data->services_supported = services_supported;
}
void adf_init_hw_data_6xxx(struct adf_hw_device_data *hw_data)
{
hw_data->dev_class = &adf_6xxx_class;
@@ -929,11 +1012,11 @@ void adf_init_hw_data_6xxx(struct adf_hw_device_data *hw_data)
hw_data->stop_timer = adf_timer_stop;
hw_data->init_device = adf_init_device;
hw_data->enable_pm = enable_pm;
hw_data->services_supported = services_supported;
hw_data->num_rps = ADF_GEN6_ETR_MAX_BANKS;
hw_data->clock_frequency = ADF_6XXX_AE_FREQ;
hw_data->get_svc_slice_cnt = adf_gen6_get_svc_slice_cnt;
adf_gen6_init_services_supported(hw_data);
adf_gen6_init_hw_csr_ops(&hw_data->csr_ops);
adf_gen6_init_pf_pfvf_ops(&hw_data->pfvf_ops);
adf_gen6_init_dc_ops(&hw_data->dc_ops);


@@ -64,10 +64,14 @@
#define ADF_SSMWDTATHH_OFFSET 0x520C
#define ADF_SSMWDTCNVL_OFFSET 0x5408
#define ADF_SSMWDTCNVH_OFFSET 0x540C
#define ADF_SSMWDTWCPL_OFFSET 0x5608
#define ADF_SSMWDTWCPH_OFFSET 0x560C
#define ADF_SSMWDTUCSL_OFFSET 0x5808
#define ADF_SSMWDTUCSH_OFFSET 0x580C
#define ADF_SSMWDTDCPRL_OFFSET 0x5A08
#define ADF_SSMWDTDCPRH_OFFSET 0x5A0C
#define ADF_SSMWDTWATL_OFFSET 0x5C08
#define ADF_SSMWDTWATH_OFFSET 0x5C0C
#define ADF_SSMWDTPKEL_OFFSET 0x5E08
#define ADF_SSMWDTPKEH_OFFSET 0x5E0C
@@ -139,6 +143,7 @@
#define ADF_6XXX_CY_OBJ "qat_6xxx_cy.bin"
#define ADF_6XXX_DC_OBJ "qat_6xxx_dc.bin"
#define ADF_6XXX_ADMIN_OBJ "qat_6xxx_admin.bin"
#define ADF_6XXX_WCY_OBJ "qat_6xxx_wcy.bin"
/* RL constants */
#define ADF_6XXX_RL_PCIE_SCALE_FACTOR_DIV 100
@@ -159,9 +164,18 @@ enum icp_qat_gen6_slice_mask {
ICP_ACCEL_GEN6_MASK_PKE_SLICE = BIT(2),
ICP_ACCEL_GEN6_MASK_CPR_SLICE = BIT(3),
ICP_ACCEL_GEN6_MASK_DCPRZ_SLICE = BIT(4),
ICP_ACCEL_GEN6_MASK_EIA3_SLICE = BIT(5),
ICP_ACCEL_GEN6_MASK_WCP_WAT_SLICE = BIT(6),
ICP_ACCEL_GEN6_MASK_ZUC_256_SLICE = BIT(7),
ICP_ACCEL_GEN6_MASK_5G_SLICE = BIT(8),
};
/* Return true if the device is a wireless crypto (WCY) SKU */
static inline bool adf_6xxx_is_wcy(struct adf_hw_device_data *hw_data)
{
return !(hw_data->fuses[ADF_FUSECTL1] & ICP_ACCEL_GEN6_MASK_WCP_WAT_SLICE);
}
void adf_init_hw_data_6xxx(struct adf_hw_device_data *hw_data);
void adf_clean_hw_data_6xxx(struct adf_hw_device_data *hw_data);


@@ -16,6 +16,7 @@
#include "adf_gen6_shared.h"
#include "adf_6xxx_hw_data.h"
#include "adf_heartbeat.h"
static int bar_map[] = {
0, /* SRAM */
@@ -53,6 +54,35 @@ static void adf_devmgr_remove(void *accel_dev)
adf_devmgr_rm_dev(accel_dev, NULL);
}
static int adf_gen6_cfg_dev_init(struct adf_accel_dev *accel_dev)
{
const char *config;
int ret;
/*
* Wireless SKU - symmetric crypto service only
* Non-wireless SKU - crypto service for even devices and compression for odd devices
*/
if (adf_6xxx_is_wcy(GET_HW_DATA(accel_dev)))
config = ADF_CFG_SYM;
else
config = accel_dev->accel_id % 2 ? ADF_CFG_DC : ADF_CFG_CY;
ret = adf_cfg_section_add(accel_dev, ADF_GENERAL_SEC);
if (ret)
return ret;
ret = adf_cfg_add_key_value_param(accel_dev, ADF_GENERAL_SEC,
ADF_SERVICES_ENABLED, config,
ADF_STR);
if (ret)
return ret;
adf_heartbeat_save_cfg_param(accel_dev, ADF_CFG_HB_TIMER_MIN_MS);
return 0;
}
static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
{
struct adf_accel_pci *accel_pci_dev;
@@ -91,9 +121,6 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
pci_read_config_dword(pdev, ADF_GEN6_FUSECTL0_OFFSET, &hw_data->fuses[ADF_FUSECTL0]);
pci_read_config_dword(pdev, ADF_GEN6_FUSECTL1_OFFSET, &hw_data->fuses[ADF_FUSECTL1]);
if (!(hw_data->fuses[ADF_FUSECTL1] & ICP_ACCEL_GEN6_MASK_WCP_WAT_SLICE))
return dev_err_probe(dev, -EFAULT, "Wireless mode is not supported.\n");
/* Enable PCI device */
ret = pcim_enable_device(pdev);
if (ret)


@@ -9,6 +9,7 @@ enum adf_fw_objs {
ADF_FW_DC_OBJ,
ADF_FW_ADMIN_OBJ,
ADF_FW_CY_OBJ,
ADF_FW_WCY_OBJ,
};
struct adf_fw_config {


@@ -31,12 +31,6 @@ void adf_gen6_init_hw_csr_ops(struct adf_hw_csr_ops *csr_ops)
}
EXPORT_SYMBOL_GPL(adf_gen6_init_hw_csr_ops);
int adf_gen6_cfg_dev_init(struct adf_accel_dev *accel_dev)
{
return adf_gen4_cfg_dev_init(accel_dev);
}
EXPORT_SYMBOL_GPL(adf_gen6_cfg_dev_init);
int adf_gen6_comp_dev_config(struct adf_accel_dev *accel_dev)
{
return adf_comp_dev_config(accel_dev);


@@ -10,7 +10,6 @@ struct adf_pfvf_ops;
void adf_gen6_init_pf_pfvf_ops(struct adf_pfvf_ops *pfvf_ops);
void adf_gen6_init_hw_csr_ops(struct adf_hw_csr_ops *csr_ops);
int adf_gen6_cfg_dev_init(struct adf_accel_dev *accel_dev);
int adf_gen6_comp_dev_config(struct adf_accel_dev *accel_dev);
int adf_gen6_no_dev_config(struct adf_accel_dev *accel_dev);
void adf_gen6_init_vf_mig_ops(struct qat_migdev_ops *vfmig_ops);


@@ -94,7 +94,8 @@ enum icp_qat_capabilities_mask {
ICP_ACCEL_CAPABILITIES_AUTHENTICATION = BIT(3),
ICP_ACCEL_CAPABILITIES_RESERVED_1 = BIT(4),
ICP_ACCEL_CAPABILITIES_COMPRESSION = BIT(5),
/* Bits 6-7 are currently reserved */
/* Bit 6 is currently reserved */
ICP_ACCEL_CAPABILITIES_5G = BIT(7),
ICP_ACCEL_CAPABILITIES_ZUC = BIT(8),
ICP_ACCEL_CAPABILITIES_SHA3 = BIT(9),
/* Bits 10-11 are currently reserved */


@@ -115,10 +115,7 @@ void *nx842_crypto_alloc_ctx(struct nx842_driver *driver)
ctx->sbounce = (u8 *)__get_free_pages(GFP_KERNEL, BOUNCE_BUFFER_ORDER);
ctx->dbounce = (u8 *)__get_free_pages(GFP_KERNEL, BOUNCE_BUFFER_ORDER);
if (!ctx->wmem || !ctx->sbounce || !ctx->dbounce) {
kfree(ctx->wmem);
free_page((unsigned long)ctx->sbounce);
free_page((unsigned long)ctx->dbounce);
kfree(ctx);
nx842_crypto_free_ctx(ctx);
return ERR_PTR(-ENOMEM);
}
@@ -131,8 +128,9 @@ void nx842_crypto_free_ctx(void *p)
struct nx842_crypto_ctx *ctx = p;
kfree(ctx->wmem);
free_page((unsigned long)ctx->sbounce);
free_page((unsigned long)ctx->dbounce);
free_pages((unsigned long)ctx->sbounce, BOUNCE_BUFFER_ORDER);
free_pages((unsigned long)ctx->dbounce, BOUNCE_BUFFER_ORDER);
kfree(ctx);
}
EXPORT_SYMBOL_GPL(nx842_crypto_free_ctx);


@@ -4,6 +4,7 @@
* Crypto driver to handle block cipher algorithms using NVIDIA Security Engine.
*/
#include <linux/bottom_half.h>
#include <linux/clk.h>
#include <linux/dma-mapping.h>
#include <linux/module.h>
@@ -333,7 +334,9 @@ out:
tegra_key_invalidate_reserved(ctx->se, key2_id, ctx->alg);
out_finalize:
local_bh_disable();
crypto_finalize_skcipher_request(se->engine, req, ret);
local_bh_enable();
return 0;
}
@@ -1261,7 +1264,9 @@ out_free_inbuf:
tegra_key_invalidate_reserved(ctx->se, rctx->key_id, ctx->alg);
out_finalize:
local_bh_disable();
crypto_finalize_aead_request(ctx->se->engine, req, ret);
local_bh_enable();
return 0;
}
@@ -1347,7 +1352,9 @@ out_free_inbuf:
tegra_key_invalidate_reserved(ctx->se, rctx->key_id, ctx->alg);
out_finalize:
local_bh_disable();
crypto_finalize_aead_request(ctx->se->engine, req, ret);
local_bh_enable();
return 0;
}
@@ -1745,7 +1752,9 @@ out:
if (tegra_key_is_reserved(rctx->key_id))
tegra_key_invalidate_reserved(ctx->se, rctx->key_id, ctx->alg);
local_bh_disable();
crypto_finalize_hash_request(se->engine, req, ret);
local_bh_enable();
return 0;
}


@@ -4,6 +4,7 @@
* Crypto driver to handle HASH algorithms using NVIDIA Security Engine.
*/
#include <linux/bottom_half.h>
#include <linux/clk.h>
#include <linux/dma-mapping.h>
#include <linux/module.h>
@@ -546,7 +547,9 @@ static int tegra_sha_do_one_req(struct crypto_engine *engine, void *areq)
}
out:
local_bh_disable();
crypto_finalize_hash_request(se->engine, req, ret);
local_bh_enable();
return 0;
}


@@ -42,6 +42,7 @@
*
* @base: Common crypto API algorithm data structure
* @calg: Common algorithm data structure shared with scomp
* @COMP_ALG_COMMON: see struct comp_alg_common
*/
struct acomp_alg {
int (*compress)(struct acomp_req *req);


@@ -22,6 +22,7 @@ struct crypto_scomp {
* @decompress: Function performs a de-compress operation
* @streams: Per-cpu memory for algorithm
* @calg: Common algorithm data structure shared with acomp
* @COMP_ALG_COMMON: see struct comp_alg_common
*/
struct scomp_alg {
int (*compress)(struct crypto_scomp *tfm, const u8 *src,


@@ -10,25 +10,6 @@
#include <linux/percpu.h>
#include <linux/types.h>
/* skcipher support */
struct simd_skcipher_alg;
struct skcipher_alg;
struct simd_skcipher_alg *simd_skcipher_create_compat(struct skcipher_alg *ialg,
const char *algname,
const char *drvname,
const char *basename);
void simd_skcipher_free(struct simd_skcipher_alg *alg);
int simd_register_skciphers_compat(struct skcipher_alg *algs, int count,
struct simd_skcipher_alg **simd_algs);
void simd_unregister_skciphers(struct skcipher_alg *algs, int count,
struct simd_skcipher_alg **simd_algs);
/* AEAD support */
struct simd_aead_alg;
struct aead_alg;


@@ -145,6 +145,7 @@ struct skcipher_alg_common SKCIPHER_ALG_COMMON;
* considerably more efficient if it can operate on multiple chunks
* in parallel. Should be a multiple of chunksize.
* @co: see struct skcipher_alg_common
* @SKCIPHER_ALG_COMMON: see struct skcipher_alg_common
*
* All fields except @ivsize are mandatory and must be filled.
*/


@@ -92,7 +92,6 @@ enum cpuhp_state {
CPUHP_NET_DEV_DEAD,
CPUHP_IOMMU_IOVA_DEAD,
CPUHP_AP_ARM_CACHE_B15_RAC_DEAD,
CPUHP_PADATA_DEAD,
CPUHP_AP_DTPM_CPU_DEAD,
CPUHP_RANDOM_PREPARE,
CPUHP_WORKQUEUE_PREP,


@@ -102,6 +102,12 @@
#define QM_MIG_REGION_SEL 0x100198
#define QM_MIG_REGION_EN BIT(0)
#define QM_MAX_CHANNEL_NUM 8
#define QM_CHANNEL_USAGE_OFFSET 0x1100
#define QM_MAX_DEV_USAGE 100
#define QM_DEV_USAGE_RATE 100
#define QM_CHANNEL_ADDR_INTRVL 0x4
/* uacce mode of the driver */
#define UACCE_MODE_NOUACCE 0 /* don't use uacce */
#define UACCE_MODE_SVA 1 /* use uacce sva mode */
@@ -359,6 +365,11 @@ struct qm_rsv_buf {
struct qm_dma qcdma;
};
struct qm_channel {
int channel_num;
const char *channel_name[QM_MAX_CHANNEL_NUM];
};
struct hisi_qm {
enum qm_hw_ver ver;
enum qm_fun_type fun_type;
@@ -433,6 +444,7 @@ struct hisi_qm {
struct qm_err_isolate isolate_data;
struct hisi_qm_cap_tables cap_tables;
struct qm_channel channel_data;
};
struct hisi_qp_status {


@@ -46,7 +46,7 @@ struct hwrng {
unsigned long priv;
unsigned short quality;
/* internal. */
/* private: internal. */
struct list_head list;
struct kref ref;
struct work_struct cleanup_work;


@@ -149,23 +149,23 @@ struct padata_mt_job {
/**
* struct padata_instance - The overall control structure.
*
* @cpu_online_node: Linkage for CPU online callback.
* @cpu_dead_node: Linkage for CPU offline callback.
* @cpuhp_node: Linkage for CPU hotplug callbacks.
* @parallel_wq: The workqueue used for parallel work.
* @serial_wq: The workqueue used for serial work.
* @pslist: List of padata_shell objects attached to this instance.
* @cpumask: User supplied cpumasks for parallel and serial works.
* @validate_cpumask: Internal cpumask used to validate @cpumask during hotplug.
* @kobj: padata instance kernel object.
* @lock: padata instance lock.
* @flags: padata flags.
*/
struct padata_instance {
struct hlist_node cpu_online_node;
struct hlist_node cpu_dead_node;
struct hlist_node cpuhp_node;
struct workqueue_struct *parallel_wq;
struct workqueue_struct *serial_wq;
struct list_head pslist;
struct padata_cpumask cpumask;
cpumask_var_t validate_cpumask;
struct kobject kobj;
struct mutex lock;
u8 flags;


@@ -535,7 +535,8 @@ static void padata_init_reorder_list(struct parallel_data *pd)
}
/* Allocate and initialize the internal cpumask-dependent resources. */
static struct parallel_data *padata_alloc_pd(struct padata_shell *ps)
static struct parallel_data *padata_alloc_pd(struct padata_shell *ps,
int offlining_cpu)
{
struct padata_instance *pinst = ps->pinst;
struct parallel_data *pd;
@@ -561,6 +562,10 @@ static struct parallel_data *padata_alloc_pd(struct padata_shell *ps)
cpumask_and(pd->cpumask.pcpu, pinst->cpumask.pcpu, cpu_online_mask);
cpumask_and(pd->cpumask.cbcpu, pinst->cpumask.cbcpu, cpu_online_mask);
if (offlining_cpu >= 0) {
__cpumask_clear_cpu(offlining_cpu, pd->cpumask.pcpu);
__cpumask_clear_cpu(offlining_cpu, pd->cpumask.cbcpu);
}
padata_init_reorder_list(pd);
padata_init_squeues(pd);
@@ -607,11 +612,11 @@ static void __padata_stop(struct padata_instance *pinst)
}
/* Replace the internal control structure with a new one. */
static int padata_replace_one(struct padata_shell *ps)
static int padata_replace_one(struct padata_shell *ps, int offlining_cpu)
{
struct parallel_data *pd_new;
pd_new = padata_alloc_pd(ps);
pd_new = padata_alloc_pd(ps, offlining_cpu);
if (!pd_new)
return -ENOMEM;
@@ -621,7 +626,7 @@ static int padata_replace_one(struct padata_shell *ps)
return 0;
}
static int padata_replace(struct padata_instance *pinst)
static int padata_replace(struct padata_instance *pinst, int offlining_cpu)
{
struct padata_shell *ps;
int err = 0;
@@ -629,7 +634,7 @@ static int padata_replace(struct padata_instance *pinst)
pinst->flags |= PADATA_RESET;
list_for_each_entry(ps, &pinst->pslist, list) {
err = padata_replace_one(ps);
err = padata_replace_one(ps, offlining_cpu);
if (err)
break;
}
@@ -646,9 +651,21 @@ static int padata_replace(struct padata_instance *pinst)
/* If cpumask contains no active cpu, we mark the instance as invalid. */
static bool padata_validate_cpumask(struct padata_instance *pinst,
const struct cpumask *cpumask)
const struct cpumask *cpumask,
int offlining_cpu)
{
if (!cpumask_intersects(cpumask, cpu_online_mask)) {
cpumask_copy(pinst->validate_cpumask, cpu_online_mask);
/*
* @offlining_cpu is still in cpu_online_mask, so remove it here for
* validation. Using a sub-CPUHP_TEARDOWN_CPU hotplug state where
* @offlining_cpu wouldn't be in the online mask doesn't work because
* padata_cpu_offline() can fail but such a state doesn't allow failure.
*/
if (offlining_cpu >= 0)
__cpumask_clear_cpu(offlining_cpu, pinst->validate_cpumask);
if (!cpumask_intersects(cpumask, pinst->validate_cpumask)) {
pinst->flags |= PADATA_INVALID;
return false;
}
@@ -664,13 +681,13 @@ static int __padata_set_cpumasks(struct padata_instance *pinst,
int valid;
int err;
valid = padata_validate_cpumask(pinst, pcpumask);
valid = padata_validate_cpumask(pinst, pcpumask, -1);
if (!valid) {
__padata_stop(pinst);
goto out_replace;
}
valid = padata_validate_cpumask(pinst, cbcpumask);
valid = padata_validate_cpumask(pinst, cbcpumask, -1);
if (!valid)
__padata_stop(pinst);
@@ -678,7 +695,7 @@ out_replace:
cpumask_copy(pinst->cpumask.pcpu, pcpumask);
cpumask_copy(pinst->cpumask.cbcpu, cbcpumask);
err = padata_setup_cpumasks(pinst) ?: padata_replace(pinst);
err = padata_setup_cpumasks(pinst) ?: padata_replace(pinst, -1);
if (valid)
__padata_start(pinst);
@@ -730,26 +747,6 @@ EXPORT_SYMBOL(padata_set_cpumask);
#ifdef CONFIG_HOTPLUG_CPU
static int __padata_add_cpu(struct padata_instance *pinst, int cpu)
{
int err = padata_replace(pinst);
if (padata_validate_cpumask(pinst, pinst->cpumask.pcpu) &&
padata_validate_cpumask(pinst, pinst->cpumask.cbcpu))
__padata_start(pinst);
return err;
}
static int __padata_remove_cpu(struct padata_instance *pinst, int cpu)
{
if (!padata_validate_cpumask(pinst, pinst->cpumask.pcpu) ||
!padata_validate_cpumask(pinst, pinst->cpumask.cbcpu))
__padata_stop(pinst);
return padata_replace(pinst);
}
static inline int pinst_has_cpu(struct padata_instance *pinst, int cpu)
{
return cpumask_test_cpu(cpu, pinst->cpumask.pcpu) ||
@@ -761,27 +758,39 @@ static int padata_cpu_online(unsigned int cpu, struct hlist_node *node)
struct padata_instance *pinst;
int ret;
pinst = hlist_entry_safe(node, struct padata_instance, cpu_online_node);
pinst = hlist_entry_safe(node, struct padata_instance, cpuhp_node);
if (!pinst_has_cpu(pinst, cpu))
return 0;
mutex_lock(&pinst->lock);
ret = __padata_add_cpu(pinst, cpu);
ret = padata_replace(pinst, -1);
if (padata_validate_cpumask(pinst, pinst->cpumask.pcpu, -1) &&
padata_validate_cpumask(pinst, pinst->cpumask.cbcpu, -1))
__padata_start(pinst);
mutex_unlock(&pinst->lock);
return ret;
}
static int padata_cpu_dead(unsigned int cpu, struct hlist_node *node)
static int padata_cpu_offline(unsigned int cpu, struct hlist_node *node)
{
struct padata_instance *pinst;
int ret;
pinst = hlist_entry_safe(node, struct padata_instance, cpu_dead_node);
pinst = hlist_entry_safe(node, struct padata_instance, cpuhp_node);
if (!pinst_has_cpu(pinst, cpu))
return 0;
mutex_lock(&pinst->lock);
ret = __padata_remove_cpu(pinst, cpu);
if (!padata_validate_cpumask(pinst, pinst->cpumask.pcpu, cpu) ||
!padata_validate_cpumask(pinst, pinst->cpumask.cbcpu, cpu))
__padata_stop(pinst);
ret = padata_replace(pinst, cpu);
mutex_unlock(&pinst->lock);
return ret;
}
@@ -792,15 +801,14 @@ static enum cpuhp_state hp_online;
static void __padata_free(struct padata_instance *pinst)
{
#ifdef CONFIG_HOTPLUG_CPU
cpuhp_state_remove_instance_nocalls(CPUHP_PADATA_DEAD,
&pinst->cpu_dead_node);
cpuhp_state_remove_instance_nocalls(hp_online, &pinst->cpu_online_node);
cpuhp_state_remove_instance_nocalls(hp_online, &pinst->cpuhp_node);
#endif
WARN_ON(!list_empty(&pinst->pslist));
free_cpumask_var(pinst->cpumask.pcpu);
free_cpumask_var(pinst->cpumask.cbcpu);
free_cpumask_var(pinst->validate_cpumask);
destroy_workqueue(pinst->serial_wq);
destroy_workqueue(pinst->parallel_wq);
kfree(pinst);
@@ -961,10 +969,10 @@ struct padata_instance *padata_alloc(const char *name)
if (!alloc_cpumask_var(&pinst->cpumask.pcpu, GFP_KERNEL))
goto err_free_serial_wq;
if (!alloc_cpumask_var(&pinst->cpumask.cbcpu, GFP_KERNEL)) {
free_cpumask_var(pinst->cpumask.pcpu);
goto err_free_serial_wq;
}
if (!alloc_cpumask_var(&pinst->cpumask.cbcpu, GFP_KERNEL))
goto err_free_p_mask;
if (!alloc_cpumask_var(&pinst->validate_cpumask, GFP_KERNEL))
goto err_free_cb_mask;
INIT_LIST_HEAD(&pinst->pslist);
@@ -972,7 +980,7 @@ struct padata_instance *padata_alloc(const char *name)
cpumask_copy(pinst->cpumask.cbcpu, cpu_possible_mask);
if (padata_setup_cpumasks(pinst))
goto err_free_masks;
goto err_free_v_mask;
__padata_start(pinst);
@@ -981,18 +989,19 @@ struct padata_instance *padata_alloc(const char *name)
#ifdef CONFIG_HOTPLUG_CPU
cpuhp_state_add_instance_nocalls_cpuslocked(hp_online,
&pinst->cpu_online_node);
cpuhp_state_add_instance_nocalls_cpuslocked(CPUHP_PADATA_DEAD,
&pinst->cpu_dead_node);
&pinst->cpuhp_node);
#endif
cpus_read_unlock();
return pinst;
err_free_masks:
free_cpumask_var(pinst->cpumask.pcpu);
err_free_v_mask:
free_cpumask_var(pinst->validate_cpumask);
err_free_cb_mask:
free_cpumask_var(pinst->cpumask.cbcpu);
err_free_p_mask:
free_cpumask_var(pinst->cpumask.pcpu);
err_free_serial_wq:
destroy_workqueue(pinst->serial_wq);
err_put_cpus:
@@ -1035,7 +1044,7 @@ struct padata_shell *padata_alloc_shell(struct padata_instance *pinst)
ps->pinst = pinst;
cpus_read_lock();
pd = padata_alloc_pd(ps);
pd = padata_alloc_pd(ps, -1);
cpus_read_unlock();
if (!pd)
@@ -1084,31 +1093,24 @@ void __init padata_init(void)
int ret;
ret = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, "padata:online",
padata_cpu_online, NULL);
padata_cpu_online, padata_cpu_offline);
if (ret < 0)
goto err;
hp_online = ret;
ret = cpuhp_setup_state_multi(CPUHP_PADATA_DEAD, "padata:dead",
NULL, padata_cpu_dead);
if (ret < 0)
goto remove_online_state;
#endif
possible_cpus = num_possible_cpus();
padata_works = kmalloc_objs(struct padata_work, possible_cpus);
if (!padata_works)
goto remove_dead_state;
goto remove_online_state;
for (i = 0; i < possible_cpus; ++i)
list_add(&padata_works[i].pw_list, &padata_free_works);
return;
remove_dead_state:
#ifdef CONFIG_HOTPLUG_CPU
cpuhp_remove_multi_state(CPUHP_PADATA_DEAD);
remove_online_state:
#ifdef CONFIG_HOTPLUG_CPU
cpuhp_remove_multi_state(hp_online);
err:
#endif