Merge tag 'net-6.18-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Paolo Abeni:
 "Including fixes from CAN

  Current release - regressions:

    - udp: do not use skb_release_head_state() before
      skb_attempt_defer_free()

    - gro_cells: use nested-BH locking for gro_cell

    - dpll: zl3073x: increase maximum size of flash utility

  Previous releases - regressions:

    - core: fix lockdep splat on device unregister

    - tcp: fix tcp_tso_should_defer() vs large RTT

    - tls:
        - don't rely on tx_work during send()
        - wait for pending async decryptions if tls_strp_msg_hold fails

    - can: j1939: add missing calls in NETDEV_UNREGISTER notification
      handler

    - eth: lan78xx: fix lost EEPROM write timeout in
      lan78xx_write_raw_eeprom

  Previous releases - always broken:

    - ip6_tunnel: prevent perpetual tunnel growth

    - dpll: zl3073x: handle missing or corrupted flash configuration

    - can: m_can: fix pm_runtime and CAN state handling

    - eth:
        - ixgbe: fix too early devlink_free() in ixgbe_remove()
        - ixgbevf: fix mailbox API compatibility
        - gve: Check valid ts bit on RX descriptor before hw timestamping
        - idpf: cleanup remaining SKBs in PTP flows
        - r8169: fix packet truncation after S4 resume on RTL8168H/RTL8111H"

* tag 'net-6.18-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (50 commits)
  udp: do not use skb_release_head_state() before skb_attempt_defer_free()
  net: usb: lan78xx: fix use of improperly initialized dev->chipid in lan78xx_reset
  netdevsim: set the carrier when the device goes up
  selftests: tls: add test for short splice due to full skmsg
  selftests: net: tls: add tests for cmsg vs MSG_MORE
  tls: don't rely on tx_work during send()
  tls: wait for pending async decryptions if tls_strp_msg_hold fails
  tls: always set record_type in tls_process_cmsg
  tls: wait for async encrypt in case of error during latter iterations of sendmsg
  tls: trim encrypted message to match the plaintext on short splice
  tg3: prevent use of uninitialized remote_adv and local_adv variables
  MAINTAINERS: new entry for IPv6 IOAM
  gve: Check valid ts bit on RX descriptor before hw timestamping
  net: core: fix lockdep splat on device unregister
  MAINTAINERS: add myself as maintainer for b53
  selftests: net: check jq command is supported
  net: airoha: Take into account out-of-order tx completions in airoha_dev_xmit()
  tcp: fix tcp_tso_should_defer() vs large RTT
  r8152: add error handling in rtl8152_driver_init
  usbnet: Fix using smp_processor_id() in preemptible code warnings
  ...
This commit is contained in:
Linus Torvalds 2025-10-16 09:41:21 -07:00
commit 634ec1fc79
57 changed files with 817 additions and 182 deletions

@@ -227,6 +227,7 @@ Dmitry Safonov <0x7f454c46@gmail.com> <dima@arista.com>
 Dmitry Safonov <0x7f454c46@gmail.com> <d.safonov@partner.samsung.com>
 Dmitry Safonov <0x7f454c46@gmail.com> <dsafonov@virtuozzo.com>
 Domen Puncer <domen@coderock.org>
+Dong Aisheng <aisheng.dong@nxp.com> <b29396@freescale.com>
 Douglas Gilbert <dougg@torque.net>
 Drew Fustini <fustini@kernel.org> <drew@pdp7.com>
 <duje@dujemihanovic.xyz> <duje.mihanovic@skole.hr>

@@ -1398,10 +1398,9 @@ second bit timing has to be specified in order to enable the CAN FD bitrate.
 Additionally CAN FD capable CAN controllers support up to 64 bytes of
 payload. The representation of this length in can_frame.len and
 canfd_frame.len for userspace applications and inside the Linux network
-layer is a plain value from 0 .. 64 instead of the CAN 'data length code'.
-The data length code was a 1:1 mapping to the payload length in the Classical
-CAN frames anyway. The payload length to the bus-relevant DLC mapping is
-only performed inside the CAN drivers, preferably with the helper
+layer is a plain value from 0 .. 64 instead of the Classical CAN length
+which ranges from 0 to 8. The payload length to the bus-relevant DLC mapping
+is only performed inside the CAN drivers, preferably with the helper
 functions can_fd_dlc2len() and can_fd_len2dlc().
 
 The CAN netdevice driver capabilities can be distinguished by the network

@@ -1465,6 +1464,70 @@ Example when 'fd-non-iso on' is added on this switchable CAN FD adapter::
     can <FD,FD-NON-ISO> state ERROR-ACTIVE (berr-counter tx 0 rx 0) restart-ms 0
 
+Transmitter Delay Compensation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+At high bit rates, the propagation delay from the TX pin to the RX pin of
+the transceiver might become greater than the actual bit time causing
+measurement errors: the RX pin would still be measuring the previous bit.
+
+The Transmitter Delay Compensation (thereafter, TDC) resolves this problem
+by introducing a Secondary Sample Point (SSP) equal to the distance, in
+minimum time quantum, from the start of the bit time on the TX pin to the
+actual measurement on the RX pin. The SSP is calculated as the sum of two
+configurable values: the TDC Value (TDCV) and the TDC offset (TDCO).
+
+TDC, if supported by the device, can be configured together with CAN-FD
+using the ip tool's "tdc-mode" argument as follow:
+
+**omitted**
+  When no "tdc-mode" option is provided, the kernel will automatically
+  decide whether TDC should be turned on, in which case it will
+  calculate a default TDCO and use the TDCV as measured by the
+  device. This is the recommended method to use TDC.
+
+**"tdc-mode off"**
+  TDC is explicitly disabled.
+
+**"tdc-mode auto"**
+  The user must provide the "tdco" argument. The TDCV will be
+  automatically calculated by the device. This option is only
+  available if the device supports the TDC-AUTO CAN controller mode.
+
+**"tdc-mode manual"**
+  The user must provide both the "tdco" and "tdcv" arguments. This
+  option is only available if the device supports the TDC-MANUAL CAN
+  controller mode.
+
+Note that some devices may offer an additional parameter: "tdcf" (TDC Filter
+window). If supported by your device, this can be added as an optional
+argument to either "tdc-mode auto" or "tdc-mode manual".
+
+Example configuring a 500 kbit/s arbitration bitrate, a 5 Mbit/s data
+bitrate, a TDCO of 15 minimum time quantum and a TDCV automatically measured
+by the device::
+
+    $ ip link set can0 up type can bitrate 500000 \
+                                   fd on dbitrate 4000000 \
+                                   tdc-mode auto tdco 15
+
+    $ ip -details link show can0
+    5: can0: <NOARP,UP,LOWER_UP,ECHO> mtu 72 qdisc pfifo_fast state UP \
+             mode DEFAULT group default qlen 10
+        link/can promiscuity 0 allmulti 0 minmtu 72 maxmtu 72
+        can <FD,TDC-AUTO> state ERROR-ACTIVE restart-ms 0
+          bitrate 500000 sample-point 0.875
+          tq 12 prop-seg 69 phase-seg1 70 phase-seg2 20 sjw 10 brp 1
+          ES582.1/ES584.1: tseg1 2..256 tseg2 2..128 sjw 1..128 brp 1..512 \
+                           brp_inc 1
+          dbitrate 4000000 dsample-point 0.750
+          dtq 12 dprop-seg 7 dphase-seg1 7 dphase-seg2 5 dsjw 2 dbrp 1
+          tdco 15 tdcf 0
+          ES582.1/ES584.1: dtseg1 2..32 dtseg2 1..16 dsjw 1..8 dbrp 1..32 \
+                           dbrp_inc 1
+                           tdco 0..127 tdcf 0..127
+          clock 80000000
+
 Supported CAN Hardware
 ----------------------

@@ -25,6 +25,9 @@ seg6_require_hmac - INTEGER
 	Default is 0.
 
+/proc/sys/net/ipv6/seg6_* variables:
+====================================
+
 seg6_flowlabel - INTEGER
 	Controls the behaviour of computing the flowlabel of outer
 	IPv6 header in case of SR T.encaps

@@ -4804,6 +4804,7 @@ F:	drivers/net/ethernet/broadcom/b44.*
 BROADCOM B53/SF2 ETHERNET SWITCH DRIVER
 M:	Florian Fainelli <florian.fainelli@broadcom.com>
+M:	Jonas Gorski <jonas.gorski@gmail.com>
 L:	netdev@vger.kernel.org
 L:	openwrt-devel@lists.openwrt.org (subscribers-only)
 S:	Supported

@@ -18013,6 +18014,16 @@ X:	net/rfkill/
 X:	net/wireless/
 X:	tools/testing/selftests/net/can/
 
+NETWORKING [IOAM]
+M:	Justin Iurman <justin.iurman@uliege.be>
+S:	Maintained
+F:	Documentation/networking/ioam6*
+F:	include/linux/ioam6*
+F:	include/net/ioam6*
+F:	include/uapi/linux/ioam6*
+F:	net/ipv6/ioam6*
+F:	tools/testing/selftests/net/ioam6*
+
 NETWORKING [IPSEC]
 M:	Steffen Klassert <steffen.klassert@secunet.com>
 M:	Herbert Xu <herbert@gondor.apana.org.au>

@@ -1038,8 +1038,29 @@ zl3073x_dev_phase_meas_setup(struct zl3073x_dev *zldev)
 int zl3073x_dev_start(struct zl3073x_dev *zldev, bool full)
 {
 	struct zl3073x_dpll *zldpll;
+	u8 info;
 	int rc;
 
+	rc = zl3073x_read_u8(zldev, ZL_REG_INFO, &info);
+	if (rc) {
+		dev_err(zldev->dev, "Failed to read device status info\n");
+		return rc;
+	}
+
+	if (!FIELD_GET(ZL_INFO_READY, info)) {
+		/* The ready bit indicates that the firmware was successfully
+		 * configured and is ready for normal operation. If it is
+		 * cleared then the configuration stored in flash is wrong
+		 * or missing. In this situation the driver will expose
+		 * only devlink interface to give an opportunity to flash
+		 * the correct config.
+		 */
+		dev_info(zldev->dev,
+			 "FW not fully ready - missing or corrupted config\n");
+		return 0;
+	}
+
 	if (full) {
 		/* Fetch device state */
 		rc = zl3073x_dev_state_fetch(zldev);

@@ -37,7 +37,7 @@ struct zl3073x_fw_component_info {
 static const struct zl3073x_fw_component_info component_info[] = {
 	[ZL_FW_COMPONENT_UTIL] = {
 		.name = "utility",
-		.max_size = 0x2300,
+		.max_size = 0x4000,
 		.load_addr = 0x20000000,
 		.flash_type = ZL3073X_FLASH_TYPE_NONE,
 	},

@@ -67,6 +67,9 @@
  * Register Page 0, General
  **************************/
 
+#define ZL_REG_INFO			ZL_REG(0, 0x00, 1)
+#define ZL_INFO_READY			BIT(7)
+
 #define ZL_REG_ID			ZL_REG(0, 0x01, 2)
 #define ZL_REG_REVISION			ZL_REG(0, 0x03, 2)
 #define ZL_REG_FW_VER			ZL_REG(0, 0x05, 2)

@@ -1,7 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0
 // CAN bus driver for Bosch M_CAN controller
 // Copyright (C) 2014 Freescale Semiconductor, Inc.
-//	Dong Aisheng <b29396@freescale.com>
+//	Dong Aisheng <aisheng.dong@nxp.com>
 // Copyright (C) 2018-19 Texas Instruments Incorporated - http://www.ti.com/
 
 /* Bosch M_CAN user manual can be obtained from:

@@ -812,6 +812,9 @@ static int m_can_handle_state_change(struct net_device *dev,
 	u32 timestamp = 0;
 
 	switch (new_state) {
+	case CAN_STATE_ERROR_ACTIVE:
+		cdev->can.state = CAN_STATE_ERROR_ACTIVE;
+		break;
 	case CAN_STATE_ERROR_WARNING:
 		/* error warning state */
 		cdev->can.can_stats.error_warning++;

@@ -841,6 +844,12 @@ static int m_can_handle_state_change(struct net_device *dev,
 	__m_can_get_berr_counter(dev, &bec);
 
 	switch (new_state) {
+	case CAN_STATE_ERROR_ACTIVE:
+		cf->can_id |= CAN_ERR_CRTL | CAN_ERR_CNT;
+		cf->data[1] = CAN_ERR_CRTL_ACTIVE;
+		cf->data[6] = bec.txerr;
+		cf->data[7] = bec.rxerr;
+		break;
 	case CAN_STATE_ERROR_WARNING:
 		/* error warning state */
 		cf->can_id |= CAN_ERR_CRTL | CAN_ERR_CNT;

@@ -877,30 +886,33 @@ static int m_can_handle_state_change(struct net_device *dev,
 	return 1;
 }
 
-static int m_can_handle_state_errors(struct net_device *dev, u32 psr)
+static enum can_state
+m_can_state_get_by_psr(struct m_can_classdev *cdev)
+{
+	u32 reg_psr;
+
+	reg_psr = m_can_read(cdev, M_CAN_PSR);
+
+	if (reg_psr & PSR_BO)
+		return CAN_STATE_BUS_OFF;
+	if (reg_psr & PSR_EP)
+		return CAN_STATE_ERROR_PASSIVE;
+	if (reg_psr & PSR_EW)
+		return CAN_STATE_ERROR_WARNING;
+
+	return CAN_STATE_ERROR_ACTIVE;
+}
+
+static int m_can_handle_state_errors(struct net_device *dev)
 {
 	struct m_can_classdev *cdev = netdev_priv(dev);
-	int work_done = 0;
+	enum can_state new_state;
 
-	if (psr & PSR_EW && cdev->can.state != CAN_STATE_ERROR_WARNING) {
-		netdev_dbg(dev, "entered error warning state\n");
-		work_done += m_can_handle_state_change(dev,
-						       CAN_STATE_ERROR_WARNING);
-	}
+	new_state = m_can_state_get_by_psr(cdev);
+	if (new_state == cdev->can.state)
+		return 0;
 
-	if (psr & PSR_EP && cdev->can.state != CAN_STATE_ERROR_PASSIVE) {
-		netdev_dbg(dev, "entered error passive state\n");
-		work_done += m_can_handle_state_change(dev,
-						       CAN_STATE_ERROR_PASSIVE);
-	}
-
-	if (psr & PSR_BO && cdev->can.state != CAN_STATE_BUS_OFF) {
-		netdev_dbg(dev, "entered error bus off state\n");
-		work_done += m_can_handle_state_change(dev,
-						       CAN_STATE_BUS_OFF);
-	}
-
-	return work_done;
+	return m_can_handle_state_change(dev, new_state);
 }
 
 static void m_can_handle_other_err(struct net_device *dev, u32 irqstatus)

@@ -1031,8 +1043,7 @@ static int m_can_rx_handler(struct net_device *dev, int quota, u32 irqstatus)
 	}
 
 	if (irqstatus & IR_ERR_STATE)
-		work_done += m_can_handle_state_errors(dev,
-						       m_can_read(cdev, M_CAN_PSR));
+		work_done += m_can_handle_state_errors(dev);
 
 	if (irqstatus & IR_ERR_BUS_30X)
 		work_done += m_can_handle_bus_errors(dev, irqstatus,

@@ -1606,7 +1617,7 @@ static int m_can_start(struct net_device *dev)
 	netdev_queue_set_dql_min_limit(netdev_get_tx_queue(cdev->net, 0),
 				       cdev->tx_max_coalesced_frames);
 
-	cdev->can.state = CAN_STATE_ERROR_ACTIVE;
+	cdev->can.state = m_can_state_get_by_psr(cdev);
 
 	m_can_enable_all_interrupts(cdev);

@@ -2492,12 +2503,11 @@ int m_can_class_suspend(struct device *dev)
 		}
 
 		m_can_clk_stop(cdev);
+		cdev->can.state = CAN_STATE_SLEEPING;
 	}
 
 	pinctrl_pm_select_sleep_state(dev);
 
-	cdev->can.state = CAN_STATE_SLEEPING;
-
 	return ret;
 }
 EXPORT_SYMBOL_GPL(m_can_class_suspend);

@@ -2510,8 +2520,6 @@ int m_can_class_resume(struct device *dev)
 
 	pinctrl_pm_select_default_state(dev);
 
-	cdev->can.state = CAN_STATE_ERROR_ACTIVE;
-
 	if (netif_running(ndev)) {
 		ret = m_can_clk_start(cdev);
 		if (ret)

@@ -2529,6 +2537,8 @@ int m_can_class_resume(struct device *dev)
 			if (cdev->ops->init)
 				ret = cdev->ops->init(cdev);
 
+			cdev->can.state = m_can_state_get_by_psr(cdev);
+
 			m_can_write(cdev, M_CAN_IE, cdev->active_interrupts);
 		} else {
 			ret = m_can_start(ndev);

@@ -2546,7 +2556,7 @@ int m_can_class_resume(struct device *dev)
 }
 EXPORT_SYMBOL_GPL(m_can_class_resume);
 
-MODULE_AUTHOR("Dong Aisheng <b29396@freescale.com>");
+MODULE_AUTHOR("Dong Aisheng <aisheng.dong@nxp.com>");
 MODULE_AUTHOR("Dan Murphy <dmurphy@ti.com>");
 MODULE_LICENSE("GPL v2");
 MODULE_DESCRIPTION("CAN bus driver for Bosch M_CAN controller");

@@ -1,7 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0
 // IOMapped CAN bus driver for Bosch M_CAN controller
 // Copyright (C) 2014 Freescale Semiconductor, Inc.
-//	Dong Aisheng <b29396@freescale.com>
+//	Dong Aisheng <aisheng.dong@nxp.com>
 //
 // Copyright (C) 2018-19 Texas Instruments Incorporated - http://www.ti.com/

@@ -180,7 +180,7 @@ static void m_can_plat_remove(struct platform_device *pdev)
 	struct m_can_classdev *mcan_class = &priv->cdev;
 
 	m_can_class_unregister(mcan_class);
+	pm_runtime_disable(mcan_class->dev);
 	m_can_class_free_dev(mcan_class->net);
 }

@@ -236,7 +236,7 @@ static struct platform_driver m_can_plat_driver = {
 module_platform_driver(m_can_plat_driver);
 
-MODULE_AUTHOR("Dong Aisheng <b29396@freescale.com>");
+MODULE_AUTHOR("Dong Aisheng <aisheng.dong@nxp.com>");
 MODULE_AUTHOR("Dan Murphy <dmurphy@ti.com>");
 MODULE_LICENSE("GPL v2");
 MODULE_DESCRIPTION("M_CAN driver for IO Mapped Bosch controllers");

@@ -289,11 +289,6 @@ struct gs_host_frame {
 #define GS_MAX_RX_URBS 30
 #define GS_NAPI_WEIGHT 32
 
-/* Maximum number of interfaces the driver supports per device.
- * Current hardware only supports 3 interfaces. The future may vary.
- */
-#define GS_MAX_INTF 3
-
 struct gs_tx_context {
 	struct gs_can *dev;
 	unsigned int echo_id;

@@ -324,7 +319,6 @@ struct gs_can {
 
 /* usb interface struct */
 struct gs_usb {
-	struct gs_can *canch[GS_MAX_INTF];
 	struct usb_anchor rx_submitted;
 	struct usb_device *udev;

@@ -336,9 +330,11 @@ struct gs_usb {
 	unsigned int hf_size_rx;
 
 	u8 active_channels;
+	u8 channel_cnt;
 
 	unsigned int pipe_in;
 	unsigned int pipe_out;
+
+	struct gs_can *canch[] __counted_by(channel_cnt);
 };
 
 /* 'allocate' a tx context.

@@ -599,7 +595,7 @@ static void gs_usb_receive_bulk_callback(struct urb *urb)
 	}
 
 	/* device reports out of range channel id */
-	if (hf->channel >= GS_MAX_INTF)
+	if (hf->channel >= parent->channel_cnt)
 		goto device_detach;
 
 	dev = parent->canch[hf->channel];

@@ -699,7 +695,7 @@ resubmit_urb:
 	/* USB failure take down all interfaces */
 	if (rc == -ENODEV) {
 device_detach:
-		for (rc = 0; rc < GS_MAX_INTF; rc++) {
+		for (rc = 0; rc < parent->channel_cnt; rc++) {
 			if (parent->canch[rc])
 				netif_device_detach(parent->canch[rc]->netdev);
 		}

@@ -1249,6 +1245,7 @@ static struct gs_can *gs_make_candev(unsigned int channel,
 	netdev->flags |= IFF_ECHO; /* we support full roundtrip echo */
 	netdev->dev_id = channel;
+	netdev->dev_port = channel;
 
 	/* dev setup */
 	strcpy(dev->bt_const.name, KBUILD_MODNAME);

@@ -1460,17 +1457,19 @@ static int gs_usb_probe(struct usb_interface *intf,
 	icount = dconf.icount + 1;
 	dev_info(&intf->dev, "Configuring for %u interfaces\n", icount);
 
-	if (icount > GS_MAX_INTF) {
+	if (icount > type_max(parent->channel_cnt)) {
 		dev_err(&intf->dev,
 			"Driver cannot handle more that %u CAN interfaces\n",
-			GS_MAX_INTF);
+			type_max(parent->channel_cnt));
 		return -EINVAL;
 	}
 
-	parent = kzalloc(sizeof(*parent), GFP_KERNEL);
+	parent = kzalloc(struct_size(parent, canch, icount), GFP_KERNEL);
 	if (!parent)
 		return -ENOMEM;
 
+	parent->channel_cnt = icount;
+
 	init_usb_anchor(&parent->rx_submitted);
 
 	usb_set_intfdata(intf, parent);

@@ -1531,7 +1530,7 @@ static void gs_usb_disconnect(struct usb_interface *intf)
 		return;
 	}
 
-	for (i = 0; i < GS_MAX_INTF; i++)
+	for (i = 0; i < parent->channel_cnt; i++)
 		if (parent->canch[i])
 			gs_destroy_candev(parent->canch[i]);

@@ -1873,6 +1873,20 @@ static u32 airoha_get_dsa_tag(struct sk_buff *skb, struct net_device *dev)
 #endif
 }
 
+static bool airoha_dev_tx_queue_busy(struct airoha_queue *q, u32 nr_frags)
+{
+	u32 tail = q->tail <= q->head ? q->tail + q->ndesc : q->tail;
+	u32 index = q->head + nr_frags;
+
+	/* completion napi can free out-of-order tx descriptors if hw QoS is
+	 * enabled and packets with different priorities are queued to the same
+	 * DMA ring. Take into account possible out-of-order reports checking
+	 * if the tx queue is full using circular buffer head/tail pointers
+	 * instead of the number of queued packets.
+	 */
+	return index >= tail;
+}
+
 static netdev_tx_t airoha_dev_xmit(struct sk_buff *skb,
 				   struct net_device *dev)
 {

@@ -1926,7 +1940,7 @@ static netdev_tx_t airoha_dev_xmit(struct sk_buff *skb,
 	txq = netdev_get_tx_queue(dev, qid);
 	nr_frags = 1 + skb_shinfo(skb)->nr_frags;
 
-	if (q->queued + nr_frags > q->ndesc) {
+	if (airoha_dev_tx_queue_busy(q, nr_frags)) {
 		/* not enough space in the queue */
 		netif_tx_stop_queue(txq);
 		spin_unlock_bh(&q->lock);

@@ -1080,7 +1080,6 @@ static void xgbe_free_rx_data(struct xgbe_prv_data *pdata)
 
 static int xgbe_phy_reset(struct xgbe_prv_data *pdata)
 {
-	pdata->phy_link = -1;
 	pdata->phy_speed = SPEED_UNKNOWN;
 
 	return pdata->phy_if.phy_reset(pdata);

@@ -1555,6 +1555,7 @@ static int xgbe_phy_init(struct xgbe_prv_data *pdata)
 		pdata->phy.duplex = DUPLEX_FULL;
 	}
 
+	pdata->phy_link = 0;
 	pdata->phy.link = 0;
 
 	pdata->phy.pause_autoneg = pdata->pause_autoneg;

@@ -5803,7 +5803,7 @@ static int tg3_setup_fiber_mii_phy(struct tg3 *tp, bool force_reset)
 	u32 current_speed = SPEED_UNKNOWN;
 	u8 current_duplex = DUPLEX_UNKNOWN;
 	bool current_link_up = false;
-	u32 local_adv, remote_adv, sgsr;
+	u32 local_adv = 0, remote_adv = 0, sgsr;
 
 	if ((tg3_asic_rev(tp) == ASIC_REV_5719 ||
 	     tg3_asic_rev(tp) == ASIC_REV_5720) &&

@@ -5944,9 +5944,6 @@ static int tg3_setup_fiber_mii_phy(struct tg3 *tp, bool force_reset)
 		else
 			current_duplex = DUPLEX_HALF;
 
-		local_adv = 0;
-		remote_adv = 0;
-
 		if (bmcr & BMCR_ANENABLE) {
 			u32 common;

@@ -508,25 +508,34 @@ static int alloc_list(struct net_device *dev)
 	for (i = 0; i < RX_RING_SIZE; i++) {
 		/* Allocated fixed size of skbuff */
 		struct sk_buff *skb;
+		dma_addr_t addr;
 
 		skb = netdev_alloc_skb_ip_align(dev, np->rx_buf_sz);
 		np->rx_skbuff[i] = skb;
-		if (!skb) {
-			free_list(dev);
-			return -ENOMEM;
-		}
+		if (!skb)
+			goto err_free_list;
+
+		addr = dma_map_single(&np->pdev->dev, skb->data,
+				      np->rx_buf_sz, DMA_FROM_DEVICE);
+		if (dma_mapping_error(&np->pdev->dev, addr))
+			goto err_kfree_skb;
 
 		np->rx_ring[i].next_desc = cpu_to_le64(np->rx_ring_dma +
 						((i + 1) % RX_RING_SIZE) *
 						sizeof(struct netdev_desc));
 		/* Rubicon now supports 40 bits of addressing space. */
-		np->rx_ring[i].fraginfo =
-			cpu_to_le64(dma_map_single(&np->pdev->dev, skb->data,
-						   np->rx_buf_sz, DMA_FROM_DEVICE));
+		np->rx_ring[i].fraginfo = cpu_to_le64(addr);
 		np->rx_ring[i].fraginfo |= cpu_to_le64((u64)np->rx_buf_sz << 48);
 	}
 
 	return 0;
+
+err_kfree_skb:
+	dev_kfree_skb(np->rx_skbuff[i]);
+	np->rx_skbuff[i] = NULL;
+err_free_list:
+	free_list(dev);
+	return -ENOMEM;
 }
 
 static void rio_hw_init(struct net_device *dev)

@@ -100,6 +100,8 @@
  */
 #define GVE_DQO_QPL_ONDEMAND_ALLOC_THRESHOLD 96
 
+#define GVE_DQO_RX_HWTSTAMP_VALID 0x1
+
 /* Each slot in the desc ring has a 1:1 mapping to a slot in the data ring */
 struct gve_rx_desc_queue {
 	struct gve_rx_desc *desc_ring; /* the descriptor ring */

@@ -236,7 +236,8 @@ struct gve_rx_compl_desc_dqo {
 	u8 status_error1;
 
-	__le16 reserved5;
+	u8 reserved5;
+	u8 ts_sub_nsecs_low;
 
 	__le16 buf_id; /* Buffer ID which was sent on the buffer queue. */
 
 	union {

@@ -456,14 +456,20 @@ static void gve_rx_skb_hash(struct sk_buff *skb,
  * Note that this means if the time delta between packet reception and the last
  * clock read is greater than ~2 seconds, this will provide invalid results.
  */
-static void gve_rx_skb_hwtstamp(struct gve_rx_ring *rx, u32 hwts)
+static void gve_rx_skb_hwtstamp(struct gve_rx_ring *rx,
+				const struct gve_rx_compl_desc_dqo *desc)
 {
 	u64 last_read = READ_ONCE(rx->gve->last_sync_nic_counter);
 	struct sk_buff *skb = rx->ctx.skb_head;
-	u32 low = (u32)last_read;
-	s32 diff = hwts - low;
+	u32 ts, low;
+	s32 diff;
 
-	skb_hwtstamps(skb)->hwtstamp = ns_to_ktime(last_read + diff);
+	if (desc->ts_sub_nsecs_low & GVE_DQO_RX_HWTSTAMP_VALID) {
+		ts = le32_to_cpu(desc->ts);
+		low = (u32)last_read;
+		diff = ts - low;
+		skb_hwtstamps(skb)->hwtstamp = ns_to_ktime(last_read + diff);
+	}
 }
 
 static void gve_rx_free_skb(struct napi_struct *napi, struct gve_rx_ring *rx)

@@ -944,7 +950,7 @@ static int gve_rx_complete_skb(struct gve_rx_ring *rx, struct napi_struct *napi,
 	gve_rx_skb_csum(rx->ctx.skb_head, desc, ptype);
 
 	if (rx->gve->ts_config.rx_filter == HWTSTAMP_FILTER_ALL)
-		gve_rx_skb_hwtstamp(rx, le32_to_cpu(desc->ts));
+		gve_rx_skb_hwtstamp(rx, desc);
 
 	/* RSC packets must set gso_size otherwise the TCP stack will complain
 	 * that packets are larger than MTU.

@@ -863,6 +863,9 @@ static void idpf_ptp_release_vport_tstamp(struct idpf_vport *vport)
 		u64_stats_inc(&vport->tstamp_stats.flushed);
 
 		list_del(&ptp_tx_tstamp->list_member);
+		if (ptp_tx_tstamp->skb)
+			consume_skb(ptp_tx_tstamp->skb);
 		kfree(ptp_tx_tstamp);
 	}
 
 	u64_stats_update_end(&vport->tstamp_stats.stats_sync);

@@ -517,6 +517,7 @@ idpf_ptp_get_tstamp_value(struct idpf_vport *vport,
 	shhwtstamps.hwtstamp = ns_to_ktime(tstamp);
 	skb_tstamp_tx(ptp_tx_tstamp->skb, &shhwtstamps);
 	consume_skb(ptp_tx_tstamp->skb);
+	ptp_tx_tstamp->skb = NULL;
 
 	list_add(&ptp_tx_tstamp->list_member,
 		 &tx_tstamp_caps->latches_free);

@@ -12101,7 +12101,6 @@ static void ixgbe_remove(struct pci_dev *pdev)
 	devl_port_unregister(&adapter->devlink_port);
 	devl_unlock(adapter->devlink);
-	devlink_free(adapter->devlink);
 
 	ixgbe_stop_ipsec_offload(adapter);
 	ixgbe_clear_interrupt_scheme(adapter);

@@ -12137,6 +12136,8 @@ static void ixgbe_remove(struct pci_dev *pdev)
 
 	if (disable_dev)
 		pci_disable_device(pdev);
+
+	devlink_free(adapter->devlink);
 }
 
 /**


@@ -50,6 +50,9 @@ enum ixgbe_pfvf_api_rev {
 	ixgbe_mbox_api_12,	/* API version 1.2, linux/freebsd VF driver */
 	ixgbe_mbox_api_13,	/* API version 1.3, linux/freebsd VF driver */
 	ixgbe_mbox_api_14,	/* API version 1.4, linux/freebsd VF driver */
+	ixgbe_mbox_api_15,	/* API version 1.5, linux/freebsd VF driver */
+	ixgbe_mbox_api_16,	/* API version 1.6, linux/freebsd VF driver */
+	ixgbe_mbox_api_17,	/* API version 1.7, linux/freebsd VF driver */
 	/* This value should always be last */
 	ixgbe_mbox_api_unknown,	/* indicates that API version is not known */
 };
@@ -86,6 +89,12 @@ enum ixgbe_pfvf_api_rev {
 #define IXGBE_VF_GET_LINK_STATE	0x10 /* get vf link state */
 
+/* mailbox API, version 1.6 VF requests */
+#define IXGBE_VF_GET_PF_LINK_STATE	0x11 /* request PF to send link info */
+
+/* mailbox API, version 1.7 VF requests */
+#define IXGBE_VF_FEATURES_NEGOTIATE	0x12 /* get features supported by PF */
+
 /* length of permanent address message returned from PF */
 #define IXGBE_VF_PERMADDR_MSG_LEN	4
 /* word in permanent address message with the current multicast type */
@@ -96,6 +105,12 @@ enum ixgbe_pfvf_api_rev {
 #define IXGBE_VF_MBX_INIT_TIMEOUT	2000 /* number of retries on mailbox */
 #define IXGBE_VF_MBX_INIT_DELAY		500  /* microseconds between retries */
 
+/* features negotiated between PF/VF */
+#define IXGBEVF_PF_SUP_IPSEC		BIT(0)
+#define IXGBEVF_PF_SUP_ESX_MBX		BIT(1)
+
+#define IXGBE_SUPPORTED_FEATURES	IXGBEVF_PF_SUP_IPSEC
+
 struct ixgbe_hw;
 
 int ixgbe_read_mbx(struct ixgbe_hw *, u32 *, u16, u16);
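The negotiation scheme introduced above is a plain bitmask intersection: the VF advertises the feature bits it wants, and the PF answers with the subset it actually supports. A minimal userspace sketch of that handshake follows; all names here are illustrative stand-ins, not the driver's own symbols:

```c
#include <assert.h>
#include <stdint.h>

#define BIT(n) (1u << (n))

/* Hypothetical mirrors of the PF/VF feature bits defined above */
#define PF_SUP_IPSEC   BIT(0)
#define PF_SUP_ESX_MBX BIT(1)

/* PF side of the exchange: reply with the intersection of what the
 * VF requested and what this PF supports. Unknown bits fall away
 * automatically, so an older PF never acknowledges a newer feature.
 */
static uint32_t negotiate_features(uint32_t vf_requested, uint32_t pf_supported)
{
	return vf_requested & pf_supported;
}
```

Because the reply can only clear bits, both sides converge on a common feature set without version-specific special cases on the PF side.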


@@ -510,6 +510,8 @@ static int ixgbe_set_vf_lpe(struct ixgbe_adapter *adapter, u32 max_frame, u32 vf
 	case ixgbe_mbox_api_12:
 	case ixgbe_mbox_api_13:
 	case ixgbe_mbox_api_14:
+	case ixgbe_mbox_api_16:
+	case ixgbe_mbox_api_17:
 		/* Version 1.1 supports jumbo frames on VFs if PF has
 		 * jumbo frames enabled which means legacy VFs are
 		 * disabled
@@ -1046,6 +1048,8 @@ static int ixgbe_negotiate_vf_api(struct ixgbe_adapter *adapter,
 	case ixgbe_mbox_api_12:
 	case ixgbe_mbox_api_13:
 	case ixgbe_mbox_api_14:
+	case ixgbe_mbox_api_16:
+	case ixgbe_mbox_api_17:
 		adapter->vfinfo[vf].vf_api = api;
 		return 0;
 	default:
@@ -1072,6 +1076,8 @@ static int ixgbe_get_vf_queues(struct ixgbe_adapter *adapter,
 	case ixgbe_mbox_api_12:
 	case ixgbe_mbox_api_13:
 	case ixgbe_mbox_api_14:
+	case ixgbe_mbox_api_16:
+	case ixgbe_mbox_api_17:
 		break;
 	default:
 		return -1;
@@ -1112,6 +1118,8 @@ static int ixgbe_get_vf_reta(struct ixgbe_adapter *adapter, u32 *msgbuf, u32 vf)
 	/* verify the PF is supporting the correct API */
 	switch (adapter->vfinfo[vf].vf_api) {
+	case ixgbe_mbox_api_17:
+	case ixgbe_mbox_api_16:
 	case ixgbe_mbox_api_14:
 	case ixgbe_mbox_api_13:
 	case ixgbe_mbox_api_12:
@@ -1145,6 +1153,8 @@ static int ixgbe_get_vf_rss_key(struct ixgbe_adapter *adapter,
 	/* verify the PF is supporting the correct API */
 	switch (adapter->vfinfo[vf].vf_api) {
+	case ixgbe_mbox_api_17:
+	case ixgbe_mbox_api_16:
 	case ixgbe_mbox_api_14:
 	case ixgbe_mbox_api_13:
 	case ixgbe_mbox_api_12:
@@ -1174,6 +1184,8 @@ static int ixgbe_update_vf_xcast_mode(struct ixgbe_adapter *adapter,
 		fallthrough;
 	case ixgbe_mbox_api_13:
 	case ixgbe_mbox_api_14:
+	case ixgbe_mbox_api_16:
+	case ixgbe_mbox_api_17:
 		break;
 	default:
 		return -EOPNOTSUPP;
@@ -1244,6 +1256,8 @@ static int ixgbe_get_vf_link_state(struct ixgbe_adapter *adapter,
 	case ixgbe_mbox_api_12:
 	case ixgbe_mbox_api_13:
 	case ixgbe_mbox_api_14:
+	case ixgbe_mbox_api_16:
+	case ixgbe_mbox_api_17:
 		break;
 	default:
 		return -EOPNOTSUPP;
@@ -1254,6 +1268,65 @@ static int ixgbe_get_vf_link_state(struct ixgbe_adapter *adapter,
 	return 0;
 }
 
+/**
+ * ixgbe_send_vf_link_status - send link status data to VF
+ * @adapter: pointer to adapter struct
+ * @msgbuf: pointer to message buffers
+ * @vf: VF identifier
+ *
+ * Reply for IXGBE_VF_GET_PF_LINK_STATE mbox command sending link status data.
+ *
+ * Return: 0 on success or -EOPNOTSUPP when operation is not supported.
+ */
+static int ixgbe_send_vf_link_status(struct ixgbe_adapter *adapter,
+				     u32 *msgbuf, u32 vf)
+{
+	struct ixgbe_hw *hw = &adapter->hw;
+
+	switch (adapter->vfinfo[vf].vf_api) {
+	case ixgbe_mbox_api_16:
+	case ixgbe_mbox_api_17:
+		if (hw->mac.type != ixgbe_mac_e610)
+			return -EOPNOTSUPP;
+		break;
+	default:
+		return -EOPNOTSUPP;
+	}
+
+	/* Simply provide stored values as watchdog & link status events take
+	 * care of its freshness.
+	 */
+	msgbuf[1] = adapter->link_speed;
+	msgbuf[2] = adapter->link_up;
+
+	return 0;
+}
+
+/**
+ * ixgbe_negotiate_vf_features - negotiate supported features with VF driver
+ * @adapter: pointer to adapter struct
+ * @msgbuf: pointer to message buffers
+ * @vf: VF identifier
+ *
+ * Return: 0 on success or -EOPNOTSUPP when operation is not supported.
+ */
+static int ixgbe_negotiate_vf_features(struct ixgbe_adapter *adapter,
+				       u32 *msgbuf, u32 vf)
+{
+	u32 features = msgbuf[1];
+
+	switch (adapter->vfinfo[vf].vf_api) {
+	case ixgbe_mbox_api_17:
+		break;
+	default:
+		return -EOPNOTSUPP;
+	}
+
+	features &= IXGBE_SUPPORTED_FEATURES;
+	msgbuf[1] = features;
+
+	return 0;
+}
+
 static int ixgbe_rcv_msg_from_vf(struct ixgbe_adapter *adapter, u32 vf)
 {
 	u32 mbx_size = IXGBE_VFMAILBOX_SIZE;
@@ -1328,6 +1401,12 @@ static int ixgbe_rcv_msg_from_vf(struct ixgbe_adapter *adapter, u32 vf)
 	case IXGBE_VF_IPSEC_DEL:
 		retval = ixgbe_ipsec_vf_del_sa(adapter, msgbuf, vf);
 		break;
+	case IXGBE_VF_GET_PF_LINK_STATE:
+		retval = ixgbe_send_vf_link_status(adapter, msgbuf, vf);
+		break;
+	case IXGBE_VF_FEATURES_NEGOTIATE:
+		retval = ixgbe_negotiate_vf_features(adapter, msgbuf, vf);
+		break;
 	default:
 		e_err(drv, "Unhandled Msg %8.8x\n", msgbuf[0]);
 		retval = -EIO;


@@ -28,6 +28,7 @@
 /* Link speed */
 typedef u32 ixgbe_link_speed;
+#define IXGBE_LINK_SPEED_UNKNOWN	0
 #define IXGBE_LINK_SPEED_1GB_FULL	0x0020
 #define IXGBE_LINK_SPEED_10GB_FULL	0x0080
 #define IXGBE_LINK_SPEED_100_FULL	0x0008


@@ -273,6 +273,9 @@ static int ixgbevf_ipsec_add_sa(struct net_device *dev,
 	adapter = netdev_priv(dev);
 	ipsec = adapter->ipsec;
 
+	if (!(adapter->pf_features & IXGBEVF_PF_SUP_IPSEC))
+		return -EOPNOTSUPP;
+
 	if (xs->id.proto != IPPROTO_ESP && xs->id.proto != IPPROTO_AH) {
 		NL_SET_ERR_MSG_MOD(extack, "Unsupported protocol for IPsec offload");
 		return -EINVAL;
@@ -405,6 +408,9 @@ static void ixgbevf_ipsec_del_sa(struct net_device *dev,
 	adapter = netdev_priv(dev);
 	ipsec = adapter->ipsec;
 
+	if (!(adapter->pf_features & IXGBEVF_PF_SUP_IPSEC))
+		return;
+
 	if (xs->xso.dir == XFRM_DEV_OFFLOAD_IN) {
 		sa_idx = xs->xso.offload_handle - IXGBE_IPSEC_BASE_RX_INDEX;
@@ -612,6 +618,10 @@ void ixgbevf_init_ipsec_offload(struct ixgbevf_adapter *adapter)
 	size_t size;
 
 	switch (adapter->hw.api_version) {
+	case ixgbe_mbox_api_17:
+		if (!(adapter->pf_features & IXGBEVF_PF_SUP_IPSEC))
+			return;
+		break;
 	case ixgbe_mbox_api_14:
 		break;
 	default:


@@ -363,6 +363,13 @@ struct ixgbevf_adapter {
 	struct ixgbe_hw hw;
 	u16 msg_enable;
 
+	u32 pf_features;
+#define IXGBEVF_PF_SUP_IPSEC		BIT(0)
+#define IXGBEVF_PF_SUP_ESX_MBX		BIT(1)
+
+#define IXGBEVF_SUPPORTED_FEATURES	(IXGBEVF_PF_SUP_IPSEC | \
+					 IXGBEVF_PF_SUP_ESX_MBX)
+
 	struct ixgbevf_hw_stats stats;
 
 	unsigned long state;


@@ -2271,10 +2271,36 @@ static void ixgbevf_init_last_counter_stats(struct ixgbevf_adapter *adapter)
 	adapter->stats.base_vfmprc = adapter->stats.last_vfmprc;
 }
 
+/**
+ * ixgbevf_set_features - Set features supported by PF
+ * @adapter: pointer to the adapter struct
+ *
+ * Negotiate with PF supported features and then set pf_features accordingly.
+ */
+static void ixgbevf_set_features(struct ixgbevf_adapter *adapter)
+{
+	u32 *pf_features = &adapter->pf_features;
+	struct ixgbe_hw *hw = &adapter->hw;
+	int err;
+
+	err = hw->mac.ops.negotiate_features(hw, pf_features);
+	if (err && err != -EOPNOTSUPP)
+		netdev_dbg(adapter->netdev,
+			   "PF feature negotiation failed.\n");
+
+	/* Address also pre API 1.7 cases */
+	if (hw->api_version == ixgbe_mbox_api_14)
+		*pf_features |= IXGBEVF_PF_SUP_IPSEC;
+	else if (hw->api_version == ixgbe_mbox_api_15)
+		*pf_features |= IXGBEVF_PF_SUP_ESX_MBX;
+}
+
 static void ixgbevf_negotiate_api(struct ixgbevf_adapter *adapter)
 {
 	struct ixgbe_hw *hw = &adapter->hw;
 	static const int api[] = {
+		ixgbe_mbox_api_17,
+		ixgbe_mbox_api_16,
 		ixgbe_mbox_api_15,
 		ixgbe_mbox_api_14,
 		ixgbe_mbox_api_13,
@@ -2294,7 +2320,9 @@ static void ixgbevf_negotiate_api(struct ixgbevf_adapter *adapter)
 		idx++;
 	}
 
-	if (hw->api_version >= ixgbe_mbox_api_15) {
+	ixgbevf_set_features(adapter);
+
+	if (adapter->pf_features & IXGBEVF_PF_SUP_ESX_MBX) {
 		hw->mbx.ops.init_params(hw);
 		memcpy(&hw->mbx.ops, &ixgbevf_mbx_ops,
 		       sizeof(struct ixgbe_mbx_operations));
@@ -2651,6 +2679,8 @@ static void ixgbevf_set_num_queues(struct ixgbevf_adapter *adapter)
 	case ixgbe_mbox_api_13:
 	case ixgbe_mbox_api_14:
 	case ixgbe_mbox_api_15:
+	case ixgbe_mbox_api_16:
+	case ixgbe_mbox_api_17:
 		if (adapter->xdp_prog &&
 		    hw->mac.max_tx_queues == rss)
 			rss = rss > 3 ? 2 : 1;
@@ -4645,6 +4675,8 @@ static int ixgbevf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	case ixgbe_mbox_api_13:
 	case ixgbe_mbox_api_14:
 	case ixgbe_mbox_api_15:
+	case ixgbe_mbox_api_16:
+	case ixgbe_mbox_api_17:
 		netdev->max_mtu = IXGBE_MAX_JUMBO_FRAME_SIZE -
 				  (ETH_HLEN + ETH_FCS_LEN);
 		break;


@@ -66,6 +66,8 @@ enum ixgbe_pfvf_api_rev {
 	ixgbe_mbox_api_13,	/* API version 1.3, linux/freebsd VF driver */
 	ixgbe_mbox_api_14,	/* API version 1.4, linux/freebsd VF driver */
 	ixgbe_mbox_api_15,	/* API version 1.5, linux/freebsd VF driver */
+	ixgbe_mbox_api_16,	/* API version 1.6, linux/freebsd VF driver */
+	ixgbe_mbox_api_17,	/* API version 1.7, linux/freebsd VF driver */
 	/* This value should always be last */
 	ixgbe_mbox_api_unknown,	/* indicates that API version is not known */
 };
@@ -102,6 +104,12 @@ enum ixgbe_pfvf_api_rev {
 #define IXGBE_VF_GET_LINK_STATE	0x10 /* get vf link state */
 
+/* mailbox API, version 1.6 VF requests */
+#define IXGBE_VF_GET_PF_LINK_STATE	0x11 /* request PF to send link info */
+
+/* mailbox API, version 1.7 VF requests */
+#define IXGBE_VF_FEATURES_NEGOTIATE	0x12 /* get features supported by PF */
+
 /* length of permanent address message returned from PF */
 #define IXGBE_VF_PERMADDR_MSG_LEN	4
 /* word in permanent address message with the current multicast type */


@@ -313,6 +313,8 @@ int ixgbevf_get_reta_locked(struct ixgbe_hw *hw, u32 *reta, int num_rx_queues)
 	 * is not supported for this device type.
 	 */
 	switch (hw->api_version) {
+	case ixgbe_mbox_api_17:
+	case ixgbe_mbox_api_16:
 	case ixgbe_mbox_api_15:
 	case ixgbe_mbox_api_14:
 	case ixgbe_mbox_api_13:
@@ -382,6 +384,8 @@ int ixgbevf_get_rss_key_locked(struct ixgbe_hw *hw, u8 *rss_key)
 	 * or if the operation is not supported for this device type.
 	 */
 	switch (hw->api_version) {
+	case ixgbe_mbox_api_17:
+	case ixgbe_mbox_api_16:
 	case ixgbe_mbox_api_15:
 	case ixgbe_mbox_api_14:
 	case ixgbe_mbox_api_13:
@@ -552,6 +556,8 @@ static s32 ixgbevf_update_xcast_mode(struct ixgbe_hw *hw, int xcast_mode)
 	case ixgbe_mbox_api_13:
 	case ixgbe_mbox_api_14:
 	case ixgbe_mbox_api_15:
+	case ixgbe_mbox_api_16:
+	case ixgbe_mbox_api_17:
 		break;
 	default:
 		return -EOPNOTSUPP;
@@ -624,6 +630,85 @@ static s32 ixgbevf_hv_get_link_state_vf(struct ixgbe_hw *hw, bool *link_state)
 	return -EOPNOTSUPP;
 }
 
+/**
+ * ixgbevf_get_pf_link_state - Get PF's link status
+ * @hw: pointer to the HW structure
+ * @speed: link speed
+ * @link_up: indicate if link is up/down
+ *
+ * Ask PF to provide link_up state and speed of the link.
+ *
+ * Return: IXGBE_ERR_MBX in the case of mailbox error,
+ * -EOPNOTSUPP if the op is not supported or 0 on success.
+ */
+static int ixgbevf_get_pf_link_state(struct ixgbe_hw *hw, ixgbe_link_speed *speed,
+				     bool *link_up)
+{
+	u32 msgbuf[3] = {};
+	int err;
+
+	switch (hw->api_version) {
+	case ixgbe_mbox_api_16:
+	case ixgbe_mbox_api_17:
+		break;
+	default:
+		return -EOPNOTSUPP;
+	}
+
+	msgbuf[0] = IXGBE_VF_GET_PF_LINK_STATE;
+
+	err = ixgbevf_write_msg_read_ack(hw, msgbuf, msgbuf,
+					 ARRAY_SIZE(msgbuf));
+	if (err || (msgbuf[0] & IXGBE_VT_MSGTYPE_FAILURE)) {
+		err = IXGBE_ERR_MBX;
+		*speed = IXGBE_LINK_SPEED_UNKNOWN;
+		/* No need to set @link_up to false as it will be done by
+		 * ixgbe_check_mac_link_vf().
+		 */
+	} else {
+		*speed = msgbuf[1];
+		*link_up = msgbuf[2];
+	}
+
+	return err;
+}
+
+/**
+ * ixgbevf_negotiate_features_vf - negotiate supported features with PF driver
+ * @hw: pointer to the HW structure
+ * @pf_features: bitmask of features supported by PF
+ *
+ * Return: IXGBE_ERR_MBX in the case of mailbox error,
+ * -EOPNOTSUPP if the op is not supported or 0 on success.
+ */
+static int ixgbevf_negotiate_features_vf(struct ixgbe_hw *hw, u32 *pf_features)
+{
+	u32 msgbuf[2] = {};
+	int err;
+
+	switch (hw->api_version) {
+	case ixgbe_mbox_api_17:
+		break;
+	default:
+		return -EOPNOTSUPP;
+	}
+
+	msgbuf[0] = IXGBE_VF_FEATURES_NEGOTIATE;
+	msgbuf[1] = IXGBEVF_SUPPORTED_FEATURES;
+
+	err = ixgbevf_write_msg_read_ack(hw, msgbuf, msgbuf,
+					 ARRAY_SIZE(msgbuf));
+	if (err || (msgbuf[0] & IXGBE_VT_MSGTYPE_FAILURE)) {
+		err = IXGBE_ERR_MBX;
+		*pf_features = 0x0;
+	} else {
+		*pf_features = msgbuf[1];
+	}
+
+	return err;
+}
+
 /**
  * ixgbevf_set_vfta_vf - Set/Unset VLAN filter table address
  * @hw: pointer to the HW structure
@@ -658,6 +743,58 @@ mbx_err:
 	return err;
 }
+/**
+ * ixgbe_read_vflinks - Read VFLINKS register
+ * @hw: pointer to the HW structure
+ * @speed: link speed
+ * @link_up: indicate if link is up/down
+ *
+ * Get linkup status and link speed from the VFLINKS register.
+ */
+static void ixgbe_read_vflinks(struct ixgbe_hw *hw, ixgbe_link_speed *speed,
+			       bool *link_up)
+{
+	u32 vflinks = IXGBE_READ_REG(hw, IXGBE_VFLINKS);
+
+	/* if link status is down no point in checking to see if PF is up */
+	if (!(vflinks & IXGBE_LINKS_UP)) {
+		*link_up = false;
+		return;
+	}
+
+	/* for SFP+ modules and DA cables on 82599 it can take up to 500usecs
+	 * before the link status is correct
+	 */
+	if (hw->mac.type == ixgbe_mac_82599_vf) {
+		for (int i = 0; i < 5; i++) {
+			udelay(100);
+			vflinks = IXGBE_READ_REG(hw, IXGBE_VFLINKS);
+
+			if (!(vflinks & IXGBE_LINKS_UP)) {
+				*link_up = false;
+				return;
+			}
+		}
+	}
+
+	/* We reached this point so there's link */
+	*link_up = true;
+
+	switch (vflinks & IXGBE_LINKS_SPEED_82599) {
+	case IXGBE_LINKS_SPEED_10G_82599:
+		*speed = IXGBE_LINK_SPEED_10GB_FULL;
+		break;
+	case IXGBE_LINKS_SPEED_1G_82599:
+		*speed = IXGBE_LINK_SPEED_1GB_FULL;
+		break;
+	case IXGBE_LINKS_SPEED_100_82599:
+		*speed = IXGBE_LINK_SPEED_100_FULL;
+		break;
+	default:
+		*speed = IXGBE_LINK_SPEED_UNKNOWN;
+	}
+}
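The VFLINKS decoding above is easy to exercise in isolation. Below is a hedged userspace sketch assuming the usual 82599 register layout (link-up in bit 30, a two-bit speed field in bits 29:28); the constant names and values are illustrative stand-ins, not the driver's symbols:

```c
#include <assert.h>
#include <stdint.h>

/* Assumed VFLINKS field layout (illustrative, mirrors the 82599 bits) */
#define LINKS_UP         (1u << 30)
#define LINKS_SPEED_MASK (3u << 28)
#define LINKS_SPEED_10G  (3u << 28)
#define LINKS_SPEED_1G   (2u << 28)
#define LINKS_SPEED_100  (1u << 28)

enum link_speed { SPEED_UNKNOWN, SPEED_100M, SPEED_1G, SPEED_10G };

/* Decode a raw VFLINKS value into link-up state and speed; unknown
 * encodings fall through to SPEED_UNKNOWN, like the default case above.
 */
static enum link_speed decode_vflinks(uint32_t vflinks, int *link_up)
{
	*link_up = !!(vflinks & LINKS_UP);
	if (!*link_up)
		return SPEED_UNKNOWN;

	switch (vflinks & LINKS_SPEED_MASK) {
	case LINKS_SPEED_10G:
		return SPEED_10G;
	case LINKS_SPEED_1G:
		return SPEED_1G;
	case LINKS_SPEED_100:
		return SPEED_100M;
	default:
		return SPEED_UNKNOWN;
	}
}
```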
 /**
  * ixgbevf_hv_set_vfta_vf - * Hyper-V variant - just a stub.
  * @hw: unused
@@ -702,10 +839,10 @@ static s32 ixgbevf_check_mac_link_vf(struct ixgbe_hw *hw,
 				     bool *link_up,
 				     bool autoneg_wait_to_complete)
 {
+	struct ixgbevf_adapter *adapter = hw->back;
 	struct ixgbe_mbx_info *mbx = &hw->mbx;
 	struct ixgbe_mac_info *mac = &hw->mac;
 	s32 ret_val = 0;
-	u32 links_reg;
 	u32 in_msg = 0;
 
 	/* If we were hit with a reset drop the link */
@@ -715,43 +852,21 @@ static s32 ixgbevf_check_mac_link_vf(struct ixgbe_hw *hw,
 	if (!mac->get_link_status)
 		goto out;
 
-	/* if link status is down no point in checking to see if pf is up */
-	links_reg = IXGBE_READ_REG(hw, IXGBE_VFLINKS);
-	if (!(links_reg & IXGBE_LINKS_UP))
-		goto out;
-
-	/* for SFP+ modules and DA cables on 82599 it can take up to 500usecs
-	 * before the link status is correct
-	 */
-	if (mac->type == ixgbe_mac_82599_vf) {
-		int i;
-
-		for (i = 0; i < 5; i++) {
-			udelay(100);
-			links_reg = IXGBE_READ_REG(hw, IXGBE_VFLINKS);
-
-			if (!(links_reg & IXGBE_LINKS_UP))
-				goto out;
-		}
-	}
-
-	switch (links_reg & IXGBE_LINKS_SPEED_82599) {
-	case IXGBE_LINKS_SPEED_10G_82599:
-		*speed = IXGBE_LINK_SPEED_10GB_FULL;
-		break;
-	case IXGBE_LINKS_SPEED_1G_82599:
-		*speed = IXGBE_LINK_SPEED_1GB_FULL;
-		break;
-	case IXGBE_LINKS_SPEED_100_82599:
-		*speed = IXGBE_LINK_SPEED_100_FULL;
-		break;
+	if (hw->mac.type == ixgbe_mac_e610_vf) {
+		ret_val = ixgbevf_get_pf_link_state(hw, speed, link_up);
+		if (ret_val)
+			goto out;
+	} else {
+		ixgbe_read_vflinks(hw, speed, link_up);
+		if (*link_up == false)
+			goto out;
 	}
 
 	/* if the read failed it could just be a mailbox collision, best wait
 	 * until we are called again and don't report an error
 	 */
 	if (mbx->ops.read(hw, &in_msg, 1)) {
-		if (hw->api_version >= ixgbe_mbox_api_15)
+		if (adapter->pf_features & IXGBEVF_PF_SUP_ESX_MBX)
 			mac->get_link_status = false;
 		goto out;
 	}
@@ -951,6 +1066,8 @@ int ixgbevf_get_queues(struct ixgbe_hw *hw, unsigned int *num_tcs,
 	case ixgbe_mbox_api_13:
 	case ixgbe_mbox_api_14:
 	case ixgbe_mbox_api_15:
+	case ixgbe_mbox_api_16:
+	case ixgbe_mbox_api_17:
 		break;
 	default:
 		return 0;
@@ -1005,6 +1122,7 @@ static const struct ixgbe_mac_operations ixgbevf_mac_ops = {
 	.setup_link		= ixgbevf_setup_mac_link_vf,
 	.check_link		= ixgbevf_check_mac_link_vf,
 	.negotiate_api_version	= ixgbevf_negotiate_api_version_vf,
+	.negotiate_features	= ixgbevf_negotiate_features_vf,
 	.set_rar		= ixgbevf_set_rar_vf,
 	.update_mc_addr_list	= ixgbevf_update_mc_addr_list_vf,
 	.update_xcast_mode	= ixgbevf_update_xcast_mode,


@@ -26,6 +26,7 @@ struct ixgbe_mac_operations {
 	s32 (*stop_adapter)(struct ixgbe_hw *);
 	s32 (*get_bus_info)(struct ixgbe_hw *);
 	s32 (*negotiate_api_version)(struct ixgbe_hw *hw, int api);
+	int (*negotiate_features)(struct ixgbe_hw *hw, u32 *pf_features);
 
 	/* Link */
 	s32 (*setup_link)(struct ixgbe_hw *, ixgbe_link_speed, bool, bool);


@@ -1981,6 +1981,7 @@ static int cgx_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	    !is_cgx_mapped_to_nix(pdev->subsystem_device, cgx->cgx_id)) {
 		dev_notice(dev, "CGX %d not mapped to NIX, skipping probe\n",
 			   cgx->cgx_id);
+		err = -ENODEV;
 		goto err_release_regions;
 	}


@@ -677,7 +677,7 @@ mtk_wed_tx_buffer_alloc(struct mtk_wed_device *dev)
 		void *buf;
 		int s;
 
-		page = __dev_alloc_page(GFP_KERNEL);
+		page = __dev_alloc_page(GFP_KERNEL | GFP_DMA32);
 		if (!page)
 			return -ENOMEM;
@@ -800,7 +800,7 @@ mtk_wed_hwrro_buffer_alloc(struct mtk_wed_device *dev)
 		struct page *page;
 		int s;
 
-		page = __dev_alloc_page(GFP_KERNEL);
+		page = __dev_alloc_page(GFP_KERNEL | GFP_DMA32);
 		if (!page)
 			return -ENOMEM;
@@ -2426,6 +2426,10 @@ mtk_wed_attach(struct mtk_wed_device *dev)
 	dev->version = hw->version;
 	dev->hw->pcie_base = mtk_wed_get_pcie_base(dev);
 
+	ret = dma_set_mask_and_coherent(hw->dev, DMA_BIT_MASK(32));
+	if (ret)
+		goto out;
+
 	if (hw->eth->dma_dev == hw->eth->dev &&
 	    of_dma_is_coherent(hw->eth->dev->of_node))
 		mtk_eth_set_dma_device(hw->eth, hw->dev);


@@ -4994,8 +4994,9 @@ static int rtl8169_resume(struct device *device)
 	if (!device_may_wakeup(tp_to_dev(tp)))
 		clk_prepare_enable(tp->clk);
 
-	/* Reportedly at least Asus X453MA truncates packets otherwise */
-	if (tp->mac_version == RTL_GIGA_MAC_VER_37)
+	/* Some chip versions may truncate packets without this initialization */
+	if (tp->mac_version == RTL_GIGA_MAC_VER_37 ||
+	    tp->mac_version == RTL_GIGA_MAC_VER_46)
 		rtl_init_rxcfg(tp);
 
 	return rtl8169_runtime_resume(device);


@@ -545,6 +545,7 @@ static void nsim_enable_napi(struct netdevsim *ns)
 static int nsim_open(struct net_device *dev)
 {
 	struct netdevsim *ns = netdev_priv(dev);
+	struct netdevsim *peer;
 	int err;
 
 	netdev_assert_locked(dev);
@@ -555,6 +556,12 @@ static int nsim_open(struct net_device *dev)
 
 	nsim_enable_napi(ns);
 
+	peer = rtnl_dereference(ns->peer);
+	if (peer && netif_running(peer->netdev)) {
+		netif_carrier_on(dev);
+		netif_carrier_on(peer->netdev);
+	}
+
 	return 0;
 }


@@ -405,7 +405,7 @@ static int bcm5481x_set_brrmode(struct phy_device *phydev, bool on)
 static int bcm54811_config_init(struct phy_device *phydev)
 {
 	struct bcm54xx_phy_priv *priv = phydev->priv;
-	int err, reg, exp_sync_ethernet;
+	int err, reg, exp_sync_ethernet, aux_rgmii_en;
 
 	/* Enable CLK125 MUX on LED4 if ref clock is enabled. */
 	if (!(phydev->dev_flags & PHY_BRCM_RX_REFCLK_UNUSED)) {
@@ -434,6 +434,24 @@ static int bcm54811_config_init(struct phy_device *phydev)
 	if (err < 0)
 		return err;
 
+	/* Enable RGMII if configured */
+	if (phy_interface_is_rgmii(phydev))
+		aux_rgmii_en = MII_BCM54XX_AUXCTL_SHDWSEL_MISC_RGMII_EN |
+			       MII_BCM54XX_AUXCTL_SHDWSEL_MISC_RGMII_SKEW_EN;
+	else
+		aux_rgmii_en = 0;
+
+	/* Also writing Reserved bits 6:5 because the documentation requires
+	 * them to be written to 0b11
+	 */
+	err = bcm54xx_auxctl_write(phydev,
+				   MII_BCM54XX_AUXCTL_SHDWSEL_MISC,
+				   MII_BCM54XX_AUXCTL_MISC_WREN |
+				   aux_rgmii_en |
+				   MII_BCM54XX_AUXCTL_SHDWSEL_MISC_RSVD);
+	if (err < 0)
+		return err;
+
 	return bcm5481x_set_brrmode(phydev, priv->brr_mode);
 }


@@ -633,26 +633,25 @@ static int rtl8211f_config_init(struct phy_device *phydev)
 			   str_enabled_disabled(val_rxdly));
 	}
 
+	if (!priv->has_phycr2)
+		return 0;
+
 	/* Disable PHY-mode EEE so LPI is passed to the MAC */
 	ret = phy_modify_paged(phydev, RTL8211F_PHYCR_PAGE, RTL8211F_PHYCR2,
 			       RTL8211F_PHYCR2_PHY_EEE_ENABLE, 0);
 	if (ret)
 		return ret;
 
-	if (priv->has_phycr2) {
-		ret = phy_modify_paged(phydev, RTL8211F_PHYCR_PAGE,
-				       RTL8211F_PHYCR2, RTL8211F_CLKOUT_EN,
-				       priv->phycr2);
-		if (ret < 0) {
-			dev_err(dev, "clkout configuration failed: %pe\n",
-				ERR_PTR(ret));
-			return ret;
-		}
-
-		return genphy_soft_reset(phydev);
+	ret = phy_modify_paged(phydev, RTL8211F_PHYCR_PAGE,
+			       RTL8211F_PHYCR2, RTL8211F_CLKOUT_EN,
+			       priv->phycr2);
+	if (ret < 0) {
+		dev_err(dev, "clkout configuration failed: %pe\n",
+			ERR_PTR(ret));
+		return ret;
 	}
 
-	return 0;
+	return genphy_soft_reset(phydev);
 }
 
 static int rtl821x_suspend(struct phy_device *phydev)

@@ -1175,10 +1175,13 @@ static int lan78xx_write_raw_eeprom(struct lan78xx_net *dev, u32 offset,
 	}
 
 write_raw_eeprom_done:
-	if (dev->chipid == ID_REV_CHIP_ID_7800_)
-		return lan78xx_write_reg(dev, HW_CFG, saved);
+	if (dev->chipid == ID_REV_CHIP_ID_7800_) {
+		int rc = lan78xx_write_reg(dev, HW_CFG, saved);
 
-	return 0;
+		/* If USB fails, there is nothing to do */
+		if (rc < 0)
+			return rc;
+	}
+
+	return ret;
 }
 
 static int lan78xx_read_raw_otp(struct lan78xx_net *dev, u32 offset,
@@ -3247,10 +3250,6 @@ static int lan78xx_reset(struct lan78xx_net *dev)
 		}
 	} while (buf & HW_CFG_LRST_);
 
-	ret = lan78xx_init_mac_address(dev);
-	if (ret < 0)
-		return ret;
-
 	/* save DEVID for later usage */
 	ret = lan78xx_read_reg(dev, ID_REV, &buf);
 	if (ret < 0)
@@ -3259,6 +3258,10 @@ static int lan78xx_reset(struct lan78xx_net *dev)
 	dev->chipid = (buf & ID_REV_CHIP_ID_MASK_) >> 16;
 	dev->chiprev = buf & ID_REV_CHIP_REV_MASK_;
 
+	ret = lan78xx_init_mac_address(dev);
+	if (ret < 0)
+		return ret;
+
 	/* Respond to the IN token with a NAK */
 	ret = lan78xx_read_reg(dev, USB_CFG0, &buf);
 	if (ret < 0)


@@ -10122,7 +10122,12 @@ static int __init rtl8152_driver_init(void)
 	ret = usb_register_device_driver(&rtl8152_cfgselector_driver, THIS_MODULE);
 	if (ret)
 		return ret;
-	return usb_register(&rtl8152_driver);
+
+	ret = usb_register(&rtl8152_driver);
+	if (ret)
+		usb_deregister_device_driver(&rtl8152_cfgselector_driver);
+
+	return ret;
 }
 
 static void __exit rtl8152_driver_exit(void)
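The init-path fix above follows the standard unwind-on-failure pattern: when a second registration fails, the first must be deregistered so nothing leaks. A userspace sketch of the same shape, with toy stand-ins for the two USB registrations (all names here are illustrative):

```c
#include <assert.h>

/* Toy stand-ins for the two registrations in the driver's init path */
static int selector_registered;
static int driver_registered;

static int register_selector(void) { selector_registered = 1; return 0; }
static void unregister_selector(void) { selector_registered = 0; }

static int register_driver(int fail)
{
	if (fail)
		return -1;
	driver_registered = 1;
	return 0;
}

/* Mirrors the fixed flow: if the second step fails, unwind the first
 * instead of returning with it still registered.
 */
static int driver_init(int fail_second)
{
	int ret = register_selector();

	if (ret)
		return ret;

	ret = register_driver(fail_second);
	if (ret)
		unregister_selector();

	return ret;
}
```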


@@ -702,6 +702,7 @@ void usbnet_resume_rx(struct usbnet *dev)
 	struct sk_buff *skb;
 	int num = 0;
 
+	local_bh_disable();
 	clear_bit(EVENT_RX_PAUSED, &dev->flags);
 
 	while ((skb = skb_dequeue(&dev->rxq_pause)) != NULL) {
@@ -710,6 +711,7 @@ void usbnet_resume_rx(struct usbnet *dev)
 	}
 
 	queue_work(system_bh_wq, &dev->bh_work);
+	local_bh_enable();
 
 	netif_dbg(dev, rx_status, dev->net,
 		  "paused rx queue disabled, %d skbs requeued\n", num);


@@ -137,6 +137,7 @@
 #define MII_BCM54XX_AUXCTL_SHDWSEL_MISC			0x07
 #define MII_BCM54XX_AUXCTL_SHDWSEL_MISC_WIRESPEED_EN	0x0010
+#define MII_BCM54XX_AUXCTL_SHDWSEL_MISC_RSVD		0x0060
 #define MII_BCM54XX_AUXCTL_SHDWSEL_MISC_RGMII_EN	0x0080
 #define MII_BCM54XX_AUXCTL_SHDWSEL_MISC_RGMII_SKEW_EN	0x0100
 #define MII_BCM54XX_AUXCTL_MISC_FORCE_AMDIX		0x0200


@@ -611,6 +611,21 @@ struct metadata_dst *iptunnel_metadata_reply(struct metadata_dst *md,
 int skb_tunnel_check_pmtu(struct sk_buff *skb, struct dst_entry *encap_dst,
 			  int headroom, bool reply);
 
+static inline void ip_tunnel_adj_headroom(struct net_device *dev,
+					  unsigned int headroom)
+{
+	/* we must cap headroom to some upperlimit, else pskb_expand_head
+	 * will overflow header offsets in skb_headers_offset_update().
+	 */
+	const unsigned int max_allowed = 512;
+
+	if (headroom > max_allowed)
+		headroom = max_allowed;
+
+	if (headroom > READ_ONCE(dev->needed_headroom))
+		WRITE_ONCE(dev->needed_headroom, headroom);
+}
+
 int iptunnel_handle_offloads(struct sk_buff *skb, int gso_type_mask);
 
 static inline int iptunnel_pull_offloads(struct sk_buff *skb)


@@ -378,6 +378,8 @@ static int j1939_netdev_notify(struct notifier_block *nb,
 		j1939_ecu_unmap_all(priv);
 		break;
 	case NETDEV_UNREGISTER:
+		j1939_cancel_active_session(priv, NULL);
+		j1939_sk_netdev_event_netdown(priv);
 		j1939_sk_netdev_event_unregister(priv);
 		break;
 	}


@@ -12176,6 +12176,35 @@ static void dev_memory_provider_uninstall(struct net_device *dev)
 	}
 }
 
+/* devices must be UP and netdev_lock()'d */
+static void netif_close_many_and_unlock(struct list_head *close_head)
+{
+	struct net_device *dev, *tmp;
+
+	netif_close_many(close_head, false);
+
+	/* ... now unlock them */
+	list_for_each_entry_safe(dev, tmp, close_head, close_list) {
+		netdev_unlock(dev);
+		list_del_init(&dev->close_list);
+	}
+}
+
+static void netif_close_many_and_unlock_cond(struct list_head *close_head)
+{
+#ifdef CONFIG_LOCKDEP
+	/* We can only track up to MAX_LOCK_DEPTH locks per task.
+	 *
+	 * Reserve half the available slots for additional locks possibly
+	 * taken by notifiers and (soft)irqs.
+	 */
+	unsigned int limit = MAX_LOCK_DEPTH / 2;
+
+	if (lockdep_depth(current) > limit)
+		netif_close_many_and_unlock(close_head);
+#endif
+}
+
 void unregister_netdevice_many_notify(struct list_head *head,
 				      u32 portid, const struct nlmsghdr *nlh)
 {
@@ -12208,17 +12237,18 @@ void unregister_netdevice_many_notify(struct list_head *head,
 	/* If device is running, close it first. Start with ops locked... */
 	list_for_each_entry(dev, head, unreg_list) {
+		if (!(dev->flags & IFF_UP))
+			continue;
+
 		if (netdev_need_ops_lock(dev)) {
 			list_add_tail(&dev->close_list, &close_head);
 			netdev_lock(dev);
 		}
+		netif_close_many_and_unlock_cond(&close_head);
 	}
-	netif_close_many(&close_head, true);
-	/* ... now unlock them and go over the rest. */
+	netif_close_many_and_unlock(&close_head);
+	/* ... now go over the rest. */
 	list_for_each_entry(dev, head, unreg_list) {
-		if (netdev_need_ops_lock(dev))
-			netdev_unlock(dev);
-		else
+		if (!netdev_need_ops_lock(dev))
 			list_add_tail(&dev->close_list, &close_head);
 	}
 	netif_close_many(&close_head, true);


@@ -8,11 +8,13 @@
 struct gro_cell {
 	struct sk_buff_head	napi_skbs;
 	struct napi_struct	napi;
+	local_lock_t		bh_lock;
 };
 
 int gro_cells_receive(struct gro_cells *gcells, struct sk_buff *skb)
 {
 	struct net_device *dev = skb->dev;
+	bool have_bh_lock = false;
 	struct gro_cell *cell;
 	int res;
@@ -25,6 +27,8 @@ int gro_cells_receive(struct gro_cells *gcells, struct sk_buff *skb)
 		goto unlock;
 	}
 
+	local_lock_nested_bh(&gcells->cells->bh_lock);
+	have_bh_lock = true;
 	cell = this_cpu_ptr(gcells->cells);
 	if (skb_queue_len(&cell->napi_skbs) > READ_ONCE(net_hotdata.max_backlog)) {
@@ -39,6 +43,9 @@ drop:
 	if (skb_queue_len(&cell->napi_skbs) == 1)
 		napi_schedule(&cell->napi);
 
+	if (have_bh_lock)
+		local_unlock_nested_bh(&gcells->cells->bh_lock);
+
 	res = NET_RX_SUCCESS;
 
 unlock:
@@ -54,6 +61,7 @@ static int gro_cell_poll(struct napi_struct *napi, int budget)
 	struct sk_buff *skb;
 	int work_done = 0;
 
+	__local_lock_nested_bh(&cell->bh_lock);
 	while (work_done < budget) {
 		skb = __skb_dequeue(&cell->napi_skbs);
 		if (!skb)
@@ -64,6 +72,7 @@ static int gro_cell_poll(struct napi_struct *napi, int budget)
 	if (work_done < budget)
 		napi_complete_done(napi, work_done);
+	__local_unlock_nested_bh(&cell->bh_lock);
 	return work_done;
 }
@@ -79,6 +88,7 @@ int gro_cells_init(struct gro_cells *gcells, struct net_device *dev)
 		struct gro_cell *cell = per_cpu_ptr(gcells->cells, i);
 
 		__skb_queue_head_init(&cell->napi_skbs);
+		local_lock_init(&cell->bh_lock);
 		set_bit(NAPI_STATE_NO_BUSY_POLL, &cell->napi.state);


@@ -7200,6 +7200,7 @@ nodefer:	kfree_skb_napi_cache(skb);
 	DEBUG_NET_WARN_ON_ONCE(skb_dst(skb));
 	DEBUG_NET_WARN_ON_ONCE(skb->destructor);
+	DEBUG_NET_WARN_ON_ONCE(skb_nfct(skb));
 
 	sdn = per_cpu_ptr(net_hotdata.skb_defer_nodes, cpu) + numa_node_id();


@@ -568,20 +568,6 @@ static int tnl_update_pmtu(struct net_device *dev, struct sk_buff *skb,
 	return 0;
 }
 
-static void ip_tunnel_adj_headroom(struct net_device *dev, unsigned int headroom)
-{
-	/* we must cap headroom to some upperlimit, else pskb_expand_head
-	 * will overflow header offsets in skb_headers_offset_update().
-	 */
-	static const unsigned int max_allowed = 512;
-
-	if (headroom > max_allowed)
-		headroom = max_allowed;
-
-	if (headroom > READ_ONCE(dev->needed_headroom))
-		WRITE_ONCE(dev->needed_headroom, headroom);
-}
-
 void ip_md_tunnel_xmit(struct sk_buff *skb, struct net_device *dev,
 		       u8 proto, int tunnel_hlen)
 {
{ {


@@ -2369,7 +2369,8 @@ static bool tcp_tso_should_defer(struct sock *sk, struct sk_buff *skb,
 				 u32 max_segs)
 {
 	const struct inet_connection_sock *icsk = inet_csk(sk);
-	u32 send_win, cong_win, limit, in_flight;
+	u32 send_win, cong_win, limit, in_flight, threshold;
+	u64 srtt_in_ns, expected_ack, how_far_is_the_ack;
 	struct tcp_sock *tp = tcp_sk(sk);
 	struct sk_buff *head;
 	int win_divisor;
@@ -2431,9 +2432,19 @@ static bool tcp_tso_should_defer(struct sock *sk, struct sk_buff *skb,
 	head = tcp_rtx_queue_head(sk);
 	if (!head)
 		goto send_now;
-	delta = tp->tcp_clock_cache - head->tstamp;
-	/* If next ACK is likely to come too late (half srtt), do not defer */
-	if ((s64)(delta - (u64)NSEC_PER_USEC * (tp->srtt_us >> 4)) < 0)
+
+	srtt_in_ns = (u64)(NSEC_PER_USEC >> 3) * tp->srtt_us;
+	/* When is the ACK expected ? */
+	expected_ack = head->tstamp + srtt_in_ns;
+	/* How far from now is the ACK expected ? */
+	how_far_is_the_ack = expected_ack - tp->tcp_clock_cache;
+
+	/* If next ACK is likely to come too late,
+	 * ie in more than min(1ms, half srtt), do not defer.
+	 */
+	threshold = min(srtt_in_ns >> 1, NSEC_PER_MSEC);
+	if ((s64)(how_far_is_the_ack - threshold) > 0)
 		goto send_now;
 
 	/* Ok, it looks like it is advisable to defer.


@@ -1851,8 +1851,6 @@ void skb_consume_udp(struct sock *sk, struct sk_buff *skb, int len)
 	sk_peek_offset_bwd(sk, len);
 
 	if (!skb_shared(skb)) {
-		if (unlikely(udp_skb_has_head_state(skb)))
-			skb_release_head_state(skb);
 		skb_attempt_defer_free(skb);
 		return;
 	}


@@ -1257,8 +1257,7 @@ route_lookup:
 	 */
 	max_headroom = LL_RESERVED_SPACE(tdev) + sizeof(struct ipv6hdr)
 			+ dst->header_len + t->hlen;
-	if (max_headroom > READ_ONCE(dev->needed_headroom))
-		WRITE_ONCE(dev->needed_headroom, max_headroom);
+	ip_tunnel_adj_headroom(dev, max_headroom);
 
 	err = ip6_tnl_encap(skb, t, &proto, fl6);
 	if (err)


@@ -255,12 +255,9 @@ int tls_process_cmsg(struct sock *sk, struct msghdr *msg,
 		if (msg->msg_flags & MSG_MORE)
 			return -EINVAL;
 
-		rc = tls_handle_open_record(sk, msg->msg_flags);
-		if (rc)
-			return rc;
-
 		*record_type = *(unsigned char *)CMSG_DATA(cmsg);
-		rc = 0;
+		rc = tls_handle_open_record(sk, msg->msg_flags);
 		break;
 	default:
 		return -EINVAL;


@@ -1054,7 +1054,7 @@ static int tls_sw_sendmsg_locked(struct sock *sk, struct msghdr *msg,
 			if (ret == -EINPROGRESS)
 				num_async++;
 			else if (ret != -EAGAIN)
-				goto send_end;
+				goto end;
 		}
 	}
@@ -1112,8 +1112,11 @@ alloc_encrypted:
 				goto send_end;
 			tls_ctx->pending_open_record_frags = true;
 
-			if (sk_msg_full(msg_pl))
+			if (sk_msg_full(msg_pl)) {
 				full_record = true;
+				sk_msg_trim(sk, msg_en,
+					    msg_pl->sg.size + prot->overhead_size);
+			}
 
 			if (full_record || eor)
 				goto copied;
@@ -1149,6 +1152,13 @@ alloc_encrypted:
 				} else if (ret != -EAGAIN)
 					goto send_end;
 			}
+
+			/* Transmit if any encryptions have completed */
+			if (test_and_clear_bit(BIT_TX_SCHEDULED, &ctx->tx_bitmask)) {
+				cancel_delayed_work(&ctx->tx_work.work);
+				tls_tx_records(sk, msg->msg_flags);
+			}
+
 			continue;
 rollback_iter:
 			copied -= try_to_copy;
@@ -1204,6 +1214,12 @@ copied:
 					goto send_end;
 				}
 			}
+
+			/* Transmit if any encryptions have completed */
+			if (test_and_clear_bit(BIT_TX_SCHEDULED, &ctx->tx_bitmask)) {
+				cancel_delayed_work(&ctx->tx_work.work);
+				tls_tx_records(sk, msg->msg_flags);
+			}
 		}
 
 		continue;
@@ -1223,8 +1239,9 @@ trim_sgl:
 			goto alloc_encrypted;
 	}
 
+send_end:
 	if (!num_async) {
-		goto send_end;
+		goto end;
 	} else if (num_zc || eor) {
 		int err;
@@ -1242,7 +1259,7 @@ trim_sgl:
 		tls_tx_records(sk, msg->msg_flags);
 	}
 
-send_end:
+end:
 	ret = sk_stream_error(sk, msg->msg_flags, ret);
 	return copied > 0 ? copied : ret;
 }
@@ -1637,8 +1654,10 @@ static int tls_decrypt_sg(struct sock *sk, struct iov_iter *out_iov,
 	if (unlikely(darg->async)) {
 		err = tls_strp_msg_hold(&ctx->strp, &ctx->async_hold);
-		if (err)
-			__skb_queue_tail(&ctx->async_hold, darg->skb);
+		if (err) {
+			err = tls_decrypt_async_wait(ctx);
+			darg->async = false;
+		}
 		return err;
 	}


@@ -1,5 +1,13 @@
 # SPDX-License-Identifier: GPL-2.0
 
+"""
+Driver test environment (hardware-only tests).
+
+NetDrvEnv and NetDrvEpEnv are the main environment classes.
+Former is for local host only tests, latter creates / connects
+to a remote endpoint. See NIPA wiki for more information about
+running and writing driver tests.
+"""
+
 import sys
 from pathlib import Path
@@ -8,26 +16,36 @@ KSFT_DIR = (Path(__file__).parent / "../../../../..").resolve()
 
 try:
     sys.path.append(KSFT_DIR.as_posix())
-    from net.lib.py import *
-    from drivers.net.lib.py import *
 
     # Import one by one to avoid pylint false positives
+    from net.lib.py import NetNS, NetNSEnter, NetdevSimDev
     from net.lib.py import EthtoolFamily, NetdevFamily, NetshaperFamily, \
         NlError, RtnlFamily, DevlinkFamily, PSPFamily
     from net.lib.py import CmdExitFailure
-    from net.lib.py import bkg, cmd, defer, ethtool, fd_read_timeout, ip, \
-        rand_port, tool, wait_port_listen
-    from net.lib.py import fd_read_timeout
+    from net.lib.py import bkg, cmd, bpftool, bpftrace, defer, ethtool, \
+        fd_read_timeout, ip, rand_port, wait_port_listen, wait_file
     from net.lib.py import KsftSkipEx, KsftFailEx, KsftXfailEx
    from net.lib.py import ksft_disruptive, ksft_exit, ksft_pr, ksft_run, \
         ksft_setup
     from net.lib.py import ksft_eq, ksft_ge, ksft_in, ksft_is, ksft_lt, \
         ksft_ne, ksft_not_in, ksft_raises, ksft_true, ksft_gt, ksft_not_none
-    from net.lib.py import NetNSEnter
-    from drivers.net.lib.py import GenerateTraffic
+    from drivers.net.lib.py import GenerateTraffic, Remote
     from drivers.net.lib.py import NetDrvEnv, NetDrvEpEnv
+
+    __all__ = ["NetNS", "NetNSEnter", "NetdevSimDev",
+               "EthtoolFamily", "NetdevFamily", "NetshaperFamily",
+               "NlError", "RtnlFamily", "DevlinkFamily", "PSPFamily",
+               "CmdExitFailure",
+               "bkg", "cmd", "bpftool", "bpftrace", "defer", "ethtool",
+               "fd_read_timeout", "ip", "rand_port",
+               "wait_port_listen", "wait_file",
+               "KsftSkipEx", "KsftFailEx", "KsftXfailEx",
+               "ksft_disruptive", "ksft_exit", "ksft_pr", "ksft_run",
+               "ksft_setup",
+               "ksft_eq", "ksft_ge", "ksft_in", "ksft_is", "ksft_lt",
+               "ksft_ne", "ksft_not_in", "ksft_raises", "ksft_true", "ksft_gt",
+               "ksft_not_none",
+               "NetDrvEnv", "NetDrvEpEnv", "GenerateTraffic", "Remote"]
 except ModuleNotFoundError as e:
-    ksft_pr("Failed importing `net` library from kernel sources")
-    ksft_pr(str(e))
-    ktap_result(True, comment="SKIP")
+    print("Failed importing `net` library from kernel sources")
+    print(str(e))
     sys.exit(4)


@@ -22,7 +22,7 @@ try:
         NlError, RtnlFamily, DevlinkFamily, PSPFamily
     from net.lib.py import CmdExitFailure
     from net.lib.py import bkg, cmd, bpftool, bpftrace, defer, ethtool, \
-        fd_read_timeout, ip, rand_port, tool, wait_port_listen, wait_file
+        fd_read_timeout, ip, rand_port, wait_port_listen, wait_file
     from net.lib.py import KsftSkipEx, KsftFailEx, KsftXfailEx
     from net.lib.py import ksft_disruptive, ksft_exit, ksft_pr, ksft_run, \
         ksft_setup
@@ -34,7 +34,7 @@ try:
                "NlError", "RtnlFamily", "DevlinkFamily", "PSPFamily",
                "CmdExitFailure",
                "bkg", "cmd", "bpftool", "bpftrace", "defer", "ethtool",
-               "fd_read_timeout", "ip", "rand_port", "tool",
+               "fd_read_timeout", "ip", "rand_port",
                "wait_port_listen", "wait_file",
                "KsftSkipEx", "KsftFailEx", "KsftXfailEx",
                "ksft_disruptive", "ksft_exit", "ksft_pr", "ksft_run",


@@ -1,9 +1,32 @@
 # SPDX-License-Identifier: GPL-2.0
 
+"""
+Python selftest helpers for netdev.
+"""
+
 from .consts import KSRC
-from .ksft import *
+from .ksft import KsftFailEx, KsftSkipEx, KsftXfailEx, ksft_pr, ksft_eq, \
+    ksft_ne, ksft_true, ksft_not_none, ksft_in, ksft_not_in, ksft_is, \
+    ksft_ge, ksft_gt, ksft_lt, ksft_raises, ksft_busy_wait, \
+    ktap_result, ksft_disruptive, ksft_setup, ksft_run, ksft_exit
 from .netns import NetNS, NetNSEnter
-from .nsim import *
-from .utils import *
+from .nsim import NetdevSim, NetdevSimDev
+from .utils import CmdExitFailure, fd_read_timeout, cmd, bkg, defer, \
+    bpftool, ip, ethtool, bpftrace, rand_port, wait_port_listen, wait_file
 from .ynl import NlError, YnlFamily, EthtoolFamily, NetdevFamily, RtnlFamily, RtnlAddrFamily
 from .ynl import NetshaperFamily, DevlinkFamily, PSPFamily
+
+__all__ = ["KSRC",
+           "KsftFailEx", "KsftSkipEx", "KsftXfailEx", "ksft_pr", "ksft_eq",
+           "ksft_ne", "ksft_true", "ksft_not_none", "ksft_in", "ksft_not_in",
+           "ksft_is", "ksft_ge", "ksft_gt", "ksft_lt", "ksft_raises",
+           "ksft_busy_wait", "ktap_result", "ksft_disruptive", "ksft_setup",
+           "ksft_run", "ksft_exit",
+           "NetNS", "NetNSEnter",
+           "CmdExitFailure", "fd_read_timeout", "cmd", "bkg", "defer",
+           "bpftool", "ip", "ethtool", "bpftrace", "rand_port",
+           "wait_port_listen", "wait_file",
+           "NetdevSim", "NetdevSimDev",
+           "NetshaperFamily", "DevlinkFamily", "PSPFamily", "NlError",
+           "YnlFamily", "EthtoolFamily", "NetdevFamily", "RtnlFamily",
+           "RtnlAddrFamily"]


@@ -1466,6 +1466,8 @@ usage: ${0##*/} OPTS
 EOF
 }
 
+require_command jq
+
 #check for needed privileges
 if [ "$(id -u)" -ne 0 ];then
 	end_test "SKIP: Need root privileges"


@@ -564,6 +564,40 @@ TEST_F(tls, msg_more)
 	EXPECT_EQ(memcmp(buf, test_str, send_len), 0);
 }
 
+TEST_F(tls, cmsg_msg_more)
+{
+	char *test_str = "test_read";
+	char record_type = 100;
+	int send_len = 10;
+
+	/* we don't allow MSG_MORE with non-DATA records */
+	EXPECT_EQ(tls_send_cmsg(self->fd, record_type, test_str, send_len,
+				MSG_MORE), -1);
+	EXPECT_EQ(errno, EINVAL);
+}
+
+TEST_F(tls, msg_more_then_cmsg)
+{
+	char *test_str = "test_read";
+	char record_type = 100;
+	int send_len = 10;
+	char buf[10 * 2];
+	int ret;
+
+	EXPECT_EQ(send(self->fd, test_str, send_len, MSG_MORE), send_len);
+	EXPECT_EQ(recv(self->cfd, buf, send_len, MSG_DONTWAIT), -1);
+
+	ret = tls_send_cmsg(self->fd, record_type, test_str, send_len, 0);
+	EXPECT_EQ(ret, send_len);
+
+	/* initial DATA record didn't get merged with the non-DATA record */
+	EXPECT_EQ(recv(self->cfd, buf, send_len * 2, 0), send_len);
+
+	EXPECT_EQ(tls_recv_cmsg(_metadata, self->cfd, record_type,
+				buf, sizeof(buf), MSG_WAITALL),
+		  send_len);
+}
+
 TEST_F(tls, msg_more_unsent)
 {
 	char const *test_str = "test_read";
@@ -912,6 +946,37 @@ TEST_F(tls, peek_and_splice)
 	EXPECT_EQ(memcmp(mem_send, mem_recv, send_len), 0);
 }
 
+#define MAX_FRAGS	48
+
+TEST_F(tls, splice_short)
+{
+	struct iovec sendchar_iov;
+	char read_buf[0x10000];
+	char sendbuf[0x100];
+	char sendchar = 'S';
+	int pipefds[2];
+	int i;
+
+	sendchar_iov.iov_base = &sendchar;
+	sendchar_iov.iov_len = 1;
+
+	memset(sendbuf, 's', sizeof(sendbuf));
+
+	ASSERT_GE(pipe2(pipefds, O_NONBLOCK), 0);
+	ASSERT_GE(fcntl(pipefds[0], F_SETPIPE_SZ, (MAX_FRAGS + 1) * 0x1000), 0);
+
+	for (i = 0; i < MAX_FRAGS; i++)
+		ASSERT_GE(vmsplice(pipefds[1], &sendchar_iov, 1, 0), 0);
+
+	ASSERT_EQ(write(pipefds[1], sendbuf, sizeof(sendbuf)), sizeof(sendbuf));
+
+	EXPECT_EQ(splice(pipefds[0], NULL, self->fd, NULL, MAX_FRAGS + 0x1000, 0),
+		  MAX_FRAGS + sizeof(sendbuf));
+	EXPECT_EQ(recv(self->cfd, read_buf, sizeof(read_buf), 0), MAX_FRAGS + sizeof(sendbuf));
+	EXPECT_EQ(recv(self->cfd, read_buf, sizeof(read_buf), MSG_DONTWAIT), -1);
+	EXPECT_EQ(errno, EAGAIN);
+}
+
+#undef MAX_FRAGS
+
 TEST_F(tls, recvmsg_single)
 {
 	char const *test_str = "test_recvmsg_single";


@@ -249,6 +249,8 @@ test_binding_toggle_off_when_upper_down()
 	do_test_binding_off : "on->off when upper down"
 }
 
+require_command jq
+
 trap defer_scopes_cleanup EXIT
 setup_prepare
 tests_run