Merge tag 'pm-6.18-rc1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull more power management updates from Rafael Wysocki:
 "These are cpufreq fixes and cleanups on top of the material merged
  previously, a power management core code fix and updates of the
  runtime PM framework including unit tests, documentation updates and
  introduction of auto-cleanup macros for runtime PM "resume and get"
  and "get without resuming" operations.

  Specifics:

   - Make cpufreq drivers that set the default CPU transition latency
     to CPUFREQ_ETERNAL specify a proper default transition latency
     value instead, which addresses a regression introduced during the
     6.6 cycle that broke CPUFREQ_ETERNAL handling (Rafael Wysocki)

   - Make the cpufreq CPPC driver use a proper transition delay value
     when CPUFREQ_ETERNAL is returned by cppc_get_transition_latency()
     to indicate an error condition (Rafael Wysocki)

   - Make cppc_get_transition_latency() return a negative error code to
     indicate error conditions instead of using CPUFREQ_ETERNAL for
     this purpose, and drop CPUFREQ_ETERNAL, which has no other users
     (Rafael Wysocki, Gopi Krishna Menon)

   - Fix a device leak in the mediatek cpufreq driver (Johan Hovold)

   - Set the target frequency on all CPUs sharing a policy during
     frequency updates in the tegra186 cpufreq driver and make it
     initialize all cores to their maximum frequencies (Aaron Kling)

   - Rust cpufreq helper cleanup (Thorsten Blum)

   - Make the pm_runtime_put*() family of functions return 1 when the
     given device is already suspended, which is consistent with the
     documentation (Brian Norris)

   - Add basic KUnit tests for runtime PM API contracts and update
     return values in kerneldoc comments for the runtime PM API (Brian
     Norris, Dan Carpenter)

   - Add auto-cleanup macros for runtime PM "resume and get" and "get
     without resume" operations, use one of them in the PCI core, and
     drop the existing "free" macro that was introduced for a similar
     purpose but was somewhat cumbersome to use (Rafael Wysocki); a
     usage sketch follows this list

   - Make the core power management code avoid waiting on device links
     marked as SYNC_STATE_ONLY, which is consistent with the handling
     of those device links elsewhere (Pin-yen Lin)"
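
[ For illustration, a minimal sketch of how the new scope-based runtime
  PM guards might be used by a driver. It is not part of this series,
  the foo_*() sysfs callbacks are hypothetical, and the guard/ACQUIRE()
  pattern mirrors the pm_runtime.h and pci-sysfs.c hunks below:

    #include <linux/device.h>
    #include <linux/pm_runtime.h>
    #include <linux/sysfs.h>

    /* guard() pairs pm_runtime_get_sync() with pm_runtime_put() when
     * the scope is left, so no put() is needed on any return path. */
    static ssize_t foo_state_show(struct device *dev,
                                  struct device_attribute *attr,
                                  char *buf)
    {
            guard(pm_runtime_active)(dev);

            /* The device is runtime-resumed at this point. */
            return sysfs_emit(buf, "%d\n", pm_runtime_active(dev));
    }

    /* Conditional form: the "_try" variant (RPM_TRANSPARENT) also
     * succeeds when runtime PM is disabled for the device, whereas
     * "_try_enabled" would return an error in that case. */
    static ssize_t foo_state_store(struct device *dev,
                                   struct device_attribute *attr,
                                   const char *buf, size_t count)
    {
            ACQUIRE(pm_runtime_active_try, pm)(dev);
            if (ACQUIRE_ERR(pm_runtime_active_try, &pm))
                    return -ENXIO;

            /* ... access the hardware while it is powered ... */
            return count;
    }

  Note also that after the pm_runtime_put*() change, a return value of
  1 from e.g. pm_runtime_put_sync() means that the usage counter
  dropped to zero but the device was already suspended, as documented. ]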

* tag 'pm-6.18-rc1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
  docs/zh_CN: Fix malformed table
  docs/zh_TW: Fix malformed table
  PM: runtime: Fix error checking for kunit_device_register()
  PM: runtime: Introduce one more usage counter guard
  cpufreq: Drop unused symbol CPUFREQ_ETERNAL
  ACPI: CPPC: Do not use CPUFREQ_ETERNAL as an error value
  cpufreq: CPPC: Avoid using CPUFREQ_ETERNAL as transition delay
  cpufreq: Make drivers using CPUFREQ_ETERNAL specify transition latency
  PM: runtime: Drop DEFINE_FREE() for pm_runtime_put()
  PCI/sysfs: Use runtime PM guard macro for auto-cleanup
  PM: runtime: Add auto-cleanup macros for "resume and get" operations
  cpufreq: tegra186: Initialize all cores to max frequencies
  cpufreq: tegra186: Set target frequency for all cpus in policy
  rust: cpufreq: streamline find_supply_names
  cpufreq: mediatek: fix device leak on probe failure
  PM: sleep: Do not wait on SYNC_STATE_ONLY device links
  PM: runtime: Update kerneldoc return codes
  PM: runtime: Make put{,_sync}() return 1 when already suspended
  PM: runtime: Add basic kunit tests for API contracts
commit abdf766d14 (Linus Torvalds, 2025-10-07 09:39:51 -07:00)
28 changed files with 426 additions and 101 deletions

Documentation/admin-guide/pm/cpufreq.rst

@@ -274,10 +274,6 @@ are the following:
The time it takes to switch the CPUs belonging to this policy from one
P-state to another, in nanoseconds.
If unknown or if known to be so high that the scaling driver does not
work with the `ondemand`_ governor, -1 (:c:macro:`CPUFREQ_ETERNAL`)
will be returned by reads from this attribute.
``related_cpus``
List of all (online and offline) CPUs belonging to this policy.

Documentation/cpu-freq/cpu-drivers.rst

@@ -109,8 +109,7 @@ Then, the driver must fill in the following values:
+-----------------------------------+--------------------------------------+
|policy->cpuinfo.transition_latency | the time it takes on this CPU to |
| | switch between two frequencies in |
| | nanoseconds (if appropriate, else |
| | specify CPUFREQ_ETERNAL) |
| | nanoseconds |
+-----------------------------------+--------------------------------------+
|policy->cur | The current operating frequency of |
| | this CPU (if appropriate) |

Documentation/translations/zh_CN/cpu-freq/cpu-drivers.rst

@@ -112,8 +112,7 @@ CPUfreq核心层注册一个cpufreq_driver结构体。
| | |
+-----------------------------------+--------------------------------------+
|policy->cpuinfo.transition_latency | CPU在两个频率之间切换所需的时间以 |
| | 纳秒为单位(如不适用,设定为 |
| | CPUFREQ_ETERNAL |
| | 纳秒为单位 |
| | |
+-----------------------------------+--------------------------------------+
|policy->cur | 该CPU当前的工作频率(如适用) |

Documentation/translations/zh_TW/cpu-freq/cpu-drivers.rst

@@ -112,8 +112,7 @@ CPUfreq核心層註冊一個cpufreq_driver結構體。
| | |
+-----------------------------------+--------------------------------------+
|policy->cpuinfo.transition_latency | CPU在兩個頻率之間切換所需的時間以 |
| | 納秒爲單位(如不適用,設定爲 |
| | CPUFREQ_ETERNAL |
| | 納秒爲單位 |
| | |
+-----------------------------------+--------------------------------------+
|policy->cur | 該CPU當前的工作頻率(如適用) |

drivers/acpi/cppc_acpi.c

@@ -1876,7 +1876,7 @@ EXPORT_SYMBOL_GPL(cppc_set_perf);
* If desired_reg is in the SystemMemory or SystemIo ACPI address space,
* then assume there is no latency.
*/
unsigned int cppc_get_transition_latency(int cpu_num)
int cppc_get_transition_latency(int cpu_num)
{
/*
* Expected transition latency is based on the PCCT timing values
@@ -1889,31 +1889,29 @@ unsigned int cppc_get_transition_latency(int cpu_num)
* completion of a command before issuing the next command,
* in microseconds.
*/
unsigned int latency_ns = 0;
struct cpc_desc *cpc_desc;
struct cpc_register_resource *desired_reg;
int pcc_ss_id = per_cpu(cpu_pcc_subspace_idx, cpu_num);
struct cppc_pcc_data *pcc_ss_data;
int latency_ns = 0;
cpc_desc = per_cpu(cpc_desc_ptr, cpu_num);
if (!cpc_desc)
return CPUFREQ_ETERNAL;
return -ENODATA;
desired_reg = &cpc_desc->cpc_regs[DESIRED_PERF];
if (CPC_IN_SYSTEM_MEMORY(desired_reg) || CPC_IN_SYSTEM_IO(desired_reg))
return 0;
else if (!CPC_IN_PCC(desired_reg))
return CPUFREQ_ETERNAL;
if (pcc_ss_id < 0)
return CPUFREQ_ETERNAL;
if (!CPC_IN_PCC(desired_reg) || pcc_ss_id < 0)
return -ENODATA;
pcc_ss_data = pcc_data[pcc_ss_id];
if (pcc_ss_data->pcc_mpar)
latency_ns = 60 * (1000 * 1000 * 1000 / pcc_ss_data->pcc_mpar);
latency_ns = max(latency_ns, pcc_ss_data->pcc_nominal * 1000);
latency_ns = max(latency_ns, pcc_ss_data->pcc_mrtt * 1000);
latency_ns = max_t(int, latency_ns, pcc_ss_data->pcc_nominal * 1000);
latency_ns = max_t(int, latency_ns, pcc_ss_data->pcc_mrtt * 1000);
return latency_ns;
}

drivers/base/Kconfig

@@ -167,6 +167,12 @@ config PM_QOS_KUNIT_TEST
depends on KUNIT=y
default KUNIT_ALL_TESTS
config PM_RUNTIME_KUNIT_TEST
tristate "KUnit Tests for runtime PM" if !KUNIT_ALL_TESTS
depends on KUNIT
depends on PM
default KUNIT_ALL_TESTS
config HMEM_REPORTING
bool
default n

drivers/base/base.h

@@ -248,6 +248,7 @@ void device_links_driver_cleanup(struct device *dev);
void device_links_no_driver(struct device *dev);
bool device_links_busy(struct device *dev);
void device_links_unbind_consumers(struct device *dev);
bool device_link_flag_is_sync_state_only(u32 flags);
void fw_devlink_drivers_done(void);
void fw_devlink_probing_done(void);

drivers/base/core.c

@@ -287,7 +287,7 @@ static bool device_is_ancestor(struct device *dev, struct device *target)
#define DL_MARKER_FLAGS (DL_FLAG_INFERRED | \
DL_FLAG_CYCLE | \
DL_FLAG_MANAGED)
static inline bool device_link_flag_is_sync_state_only(u32 flags)
bool device_link_flag_is_sync_state_only(u32 flags)
{
return (flags & ~DL_MARKER_FLAGS) == DL_FLAG_SYNC_STATE_ONLY;
}

drivers/base/power/Makefile

@@ -4,5 +4,6 @@ obj-$(CONFIG_PM_SLEEP) += main.o wakeup.o wakeup_stats.o
obj-$(CONFIG_PM_TRACE_RTC) += trace.o
obj-$(CONFIG_HAVE_CLK) += clock_ops.o
obj-$(CONFIG_PM_QOS_KUNIT_TEST) += qos-test.o
obj-$(CONFIG_PM_RUNTIME_KUNIT_TEST) += runtime-test.o
ccflags-$(CONFIG_DEBUG_DRIVER) := -DDEBUG

drivers/base/power/main.c

@@ -278,7 +278,8 @@ static void dpm_wait_for_suppliers(struct device *dev, bool async)
* walking.
*/
dev_for_each_link_to_supplier(link, dev)
if (READ_ONCE(link->status) != DL_STATE_DORMANT)
if (READ_ONCE(link->status) != DL_STATE_DORMANT &&
!device_link_flag_is_sync_state_only(link->flags))
dpm_wait(link->supplier, async);
device_links_read_unlock(idx);
@@ -335,7 +336,8 @@ static void dpm_wait_for_consumers(struct device *dev, bool async)
* unregistration).
*/
dev_for_each_link_to_consumer(link, dev)
if (READ_ONCE(link->status) != DL_STATE_DORMANT)
if (READ_ONCE(link->status) != DL_STATE_DORMANT &&
!device_link_flag_is_sync_state_only(link->flags))
dpm_wait(link->consumer, async);
device_links_read_unlock(idx);

drivers/base/power/runtime-test.c (new file)

@@ -0,0 +1,253 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright 2025 Google, Inc.
*/
#include <linux/cleanup.h>
#include <linux/pm_runtime.h>
#include <kunit/device.h>
#include <kunit/test.h>
#define DEVICE_NAME "pm_runtime_test_device"
static void pm_runtime_depth_test(struct kunit *test)
{
struct device *dev = kunit_device_register(test, DEVICE_NAME);
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, dev);
pm_runtime_enable(dev);
KUNIT_EXPECT_TRUE(test, pm_runtime_suspended(dev));
KUNIT_EXPECT_EQ(test, 0, pm_runtime_get_sync(dev));
KUNIT_EXPECT_TRUE(test, pm_runtime_active(dev));
KUNIT_EXPECT_EQ(test, 1, pm_runtime_get_sync(dev)); /* "already active" */
KUNIT_EXPECT_EQ(test, 0, pm_runtime_put_sync(dev));
KUNIT_EXPECT_EQ(test, 0, pm_runtime_put_sync(dev));
KUNIT_EXPECT_TRUE(test, pm_runtime_suspended(dev));
}
/* Test pm_runtime_put() and friends when already suspended. */
static void pm_runtime_already_suspended_test(struct kunit *test)
{
struct device *dev = kunit_device_register(test, DEVICE_NAME);
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, dev);
pm_runtime_enable(dev);
KUNIT_EXPECT_TRUE(test, pm_runtime_suspended(dev));
pm_runtime_get_noresume(dev);
KUNIT_EXPECT_EQ(test, 0, pm_runtime_barrier(dev)); /* no wakeup needed */
pm_runtime_put(dev);
pm_runtime_get_noresume(dev);
KUNIT_EXPECT_EQ(test, 1, pm_runtime_put_sync(dev));
KUNIT_EXPECT_EQ(test, 1, pm_runtime_suspend(dev));
KUNIT_EXPECT_EQ(test, 1, pm_runtime_autosuspend(dev));
KUNIT_EXPECT_EQ(test, 1, pm_request_autosuspend(dev));
pm_runtime_get_noresume(dev);
KUNIT_EXPECT_EQ(test, 1, pm_runtime_put_sync_autosuspend(dev));
pm_runtime_get_noresume(dev);
pm_runtime_put_autosuspend(dev);
/* Grab 2 refcounts */
pm_runtime_get_noresume(dev);
pm_runtime_get_noresume(dev);
/* The first put() sees usage_count 1 */
KUNIT_EXPECT_EQ(test, 0, pm_runtime_put_sync_autosuspend(dev));
/* The second put() sees usage_count 0 but tells us "already suspended". */
KUNIT_EXPECT_EQ(test, 1, pm_runtime_put_sync_autosuspend(dev));
/* Should have remained suspended the whole time. */
KUNIT_EXPECT_TRUE(test, pm_runtime_suspended(dev));
}
static void pm_runtime_idle_test(struct kunit *test)
{
struct device *dev = kunit_device_register(test, DEVICE_NAME);
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, dev);
pm_runtime_enable(dev);
KUNIT_EXPECT_TRUE(test, pm_runtime_suspended(dev));
KUNIT_EXPECT_EQ(test, 0, pm_runtime_get_sync(dev));
KUNIT_EXPECT_TRUE(test, pm_runtime_active(dev));
KUNIT_EXPECT_EQ(test, -EAGAIN, pm_runtime_idle(dev));
KUNIT_EXPECT_TRUE(test, pm_runtime_active(dev));
pm_runtime_put_noidle(dev);
KUNIT_EXPECT_TRUE(test, pm_runtime_active(dev));
KUNIT_EXPECT_EQ(test, 0, pm_runtime_idle(dev));
KUNIT_EXPECT_TRUE(test, pm_runtime_suspended(dev));
KUNIT_EXPECT_EQ(test, -EAGAIN, pm_runtime_idle(dev));
KUNIT_EXPECT_EQ(test, -EAGAIN, pm_request_idle(dev));
}
static void pm_runtime_disabled_test(struct kunit *test)
{
struct device *dev = kunit_device_register(test, DEVICE_NAME);
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, dev);
/* Never called pm_runtime_enable() */
KUNIT_EXPECT_FALSE(test, pm_runtime_enabled(dev));
/* "disabled" is treated as "active" */
KUNIT_EXPECT_TRUE(test, pm_runtime_active(dev));
KUNIT_EXPECT_FALSE(test, pm_runtime_suspended(dev));
/*
* Note: these "fail", but they still acquire/release refcounts, so
* keep them balanced.
*/
KUNIT_EXPECT_EQ(test, -EACCES, pm_runtime_get(dev));
pm_runtime_put(dev);
KUNIT_EXPECT_EQ(test, -EACCES, pm_runtime_get_sync(dev));
KUNIT_EXPECT_EQ(test, -EACCES, pm_runtime_put_sync(dev));
KUNIT_EXPECT_EQ(test, -EACCES, pm_runtime_get(dev));
pm_runtime_put_autosuspend(dev);
KUNIT_EXPECT_EQ(test, -EACCES, pm_runtime_resume_and_get(dev));
KUNIT_EXPECT_EQ(test, -EACCES, pm_runtime_idle(dev));
KUNIT_EXPECT_EQ(test, -EACCES, pm_request_idle(dev));
KUNIT_EXPECT_EQ(test, -EACCES, pm_request_resume(dev));
KUNIT_EXPECT_EQ(test, -EACCES, pm_request_autosuspend(dev));
KUNIT_EXPECT_EQ(test, -EACCES, pm_runtime_suspend(dev));
KUNIT_EXPECT_EQ(test, -EACCES, pm_runtime_resume(dev));
KUNIT_EXPECT_EQ(test, -EACCES, pm_runtime_autosuspend(dev));
/* Still disabled */
KUNIT_EXPECT_TRUE(test, pm_runtime_active(dev));
KUNIT_EXPECT_FALSE(test, pm_runtime_enabled(dev));
}
static void pm_runtime_error_test(struct kunit *test)
{
struct device *dev = kunit_device_register(test, DEVICE_NAME);
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, dev);
pm_runtime_enable(dev);
KUNIT_EXPECT_TRUE(test, pm_runtime_suspended(dev));
/* Fake a .runtime_resume() error */
dev->power.runtime_error = -EIO;
/*
* Note: these "fail", but they still acquire/release refcounts, so
* keep them balanced.
*/
KUNIT_EXPECT_EQ(test, -EINVAL, pm_runtime_get(dev));
pm_runtime_put(dev);
KUNIT_EXPECT_EQ(test, -EINVAL, pm_runtime_get_sync(dev));
KUNIT_EXPECT_EQ(test, -EINVAL, pm_runtime_put_sync(dev));
KUNIT_EXPECT_EQ(test, -EINVAL, pm_runtime_get(dev));
pm_runtime_put_autosuspend(dev);
KUNIT_EXPECT_EQ(test, -EINVAL, pm_runtime_get(dev));
KUNIT_EXPECT_EQ(test, -EINVAL, pm_runtime_put_sync_autosuspend(dev));
KUNIT_EXPECT_EQ(test, -EINVAL, pm_runtime_resume_and_get(dev));
KUNIT_EXPECT_EQ(test, -EINVAL, pm_runtime_idle(dev));
KUNIT_EXPECT_EQ(test, -EINVAL, pm_request_idle(dev));
KUNIT_EXPECT_EQ(test, -EINVAL, pm_request_resume(dev));
KUNIT_EXPECT_EQ(test, -EINVAL, pm_request_autosuspend(dev));
KUNIT_EXPECT_EQ(test, -EINVAL, pm_runtime_suspend(dev));
KUNIT_EXPECT_EQ(test, -EINVAL, pm_runtime_resume(dev));
KUNIT_EXPECT_EQ(test, -EINVAL, pm_runtime_autosuspend(dev));
/* Error is still pending */
KUNIT_EXPECT_TRUE(test, pm_runtime_suspended(dev));
KUNIT_EXPECT_EQ(test, -EIO, dev->power.runtime_error);
/* Clear error */
KUNIT_EXPECT_EQ(test, 0, pm_runtime_set_suspended(dev));
KUNIT_EXPECT_EQ(test, 0, dev->power.runtime_error);
/* Still suspended */
KUNIT_EXPECT_TRUE(test, pm_runtime_suspended(dev));
KUNIT_EXPECT_EQ(test, 0, pm_runtime_get(dev));
KUNIT_EXPECT_EQ(test, 1, pm_runtime_barrier(dev)); /* resume was pending */
pm_runtime_put(dev);
pm_runtime_suspend(dev); /* flush the put(), to suspend */
KUNIT_EXPECT_TRUE(test, pm_runtime_suspended(dev));
KUNIT_EXPECT_EQ(test, 0, pm_runtime_get_sync(dev));
KUNIT_EXPECT_EQ(test, 0, pm_runtime_put_sync(dev));
KUNIT_EXPECT_EQ(test, 0, pm_runtime_get_sync(dev));
pm_runtime_put_autosuspend(dev);
KUNIT_EXPECT_EQ(test, 0, pm_runtime_resume_and_get(dev));
/*
* The following should all return -EAGAIN (usage is non-zero) or 1
* (already resumed).
*/
KUNIT_EXPECT_EQ(test, -EAGAIN, pm_runtime_idle(dev));
KUNIT_EXPECT_EQ(test, -EAGAIN, pm_request_idle(dev));
KUNIT_EXPECT_EQ(test, 1, pm_request_resume(dev));
KUNIT_EXPECT_EQ(test, -EAGAIN, pm_request_autosuspend(dev));
KUNIT_EXPECT_EQ(test, -EAGAIN, pm_runtime_suspend(dev));
KUNIT_EXPECT_EQ(test, 1, pm_runtime_resume(dev));
KUNIT_EXPECT_EQ(test, -EAGAIN, pm_runtime_autosuspend(dev));
KUNIT_EXPECT_EQ(test, 0, pm_runtime_put_sync(dev));
/* Suspended again */
KUNIT_EXPECT_TRUE(test, pm_runtime_suspended(dev));
}
/*
* Explore a typical probe() sequence in which a device marks itself powered,
* but doesn't hold any runtime PM reference, so it suspends as soon as it goes
* idle.
*/
static void pm_runtime_probe_active_test(struct kunit *test)
{
struct device *dev = kunit_device_register(test, DEVICE_NAME);
KUNIT_ASSERT_NOT_ERR_OR_NULL(test, dev);
KUNIT_EXPECT_TRUE(test, pm_runtime_status_suspended(dev));
KUNIT_EXPECT_EQ(test, 0, pm_runtime_set_active(dev));
KUNIT_EXPECT_TRUE(test, pm_runtime_active(dev));
pm_runtime_enable(dev);
KUNIT_EXPECT_TRUE(test, pm_runtime_active(dev));
/* Nothing to flush. We stay active. */
KUNIT_EXPECT_EQ(test, 0, pm_runtime_barrier(dev));
KUNIT_EXPECT_TRUE(test, pm_runtime_active(dev));
/* Ask for idle? Now we suspend. */
KUNIT_EXPECT_EQ(test, 0, pm_runtime_idle(dev));
KUNIT_EXPECT_TRUE(test, pm_runtime_suspended(dev));
}
static struct kunit_case pm_runtime_test_cases[] = {
KUNIT_CASE(pm_runtime_depth_test),
KUNIT_CASE(pm_runtime_already_suspended_test),
KUNIT_CASE(pm_runtime_idle_test),
KUNIT_CASE(pm_runtime_disabled_test),
KUNIT_CASE(pm_runtime_error_test),
KUNIT_CASE(pm_runtime_probe_active_test),
{}
};
static struct kunit_suite pm_runtime_test_suite = {
.name = "pm_runtime_test_cases",
.test_cases = pm_runtime_test_cases,
};
kunit_test_suite(pm_runtime_test_suite);
MODULE_DESCRIPTION("Runtime power management unit test suite");
MODULE_LICENSE("GPL");

drivers/base/power/runtime.c

@@ -498,6 +498,9 @@ static int rpm_idle(struct device *dev, int rpmflags)
if (retval < 0)
; /* Conditions are wrong. */
else if ((rpmflags & RPM_GET_PUT) && retval == 1)
; /* put() is allowed in RPM_SUSPENDED */
/* Idle notifications are allowed only in the RPM_ACTIVE state. */
else if (dev->power.runtime_status != RPM_ACTIVE)
retval = -EAGAIN;
@@ -796,6 +799,8 @@ static int rpm_resume(struct device *dev, int rpmflags)
if (dev->power.runtime_status == RPM_ACTIVE &&
dev->power.last_status == RPM_ACTIVE)
retval = 1;
else if (rpmflags & RPM_TRANSPARENT)
goto out;
else
retval = -EACCES;
}

drivers/cpufreq/amd-pstate.c

@@ -872,10 +872,10 @@ static void amd_pstate_update_limits(struct cpufreq_policy *policy)
*/
static u32 amd_pstate_get_transition_delay_us(unsigned int cpu)
{
u32 transition_delay_ns;
int transition_delay_ns;
transition_delay_ns = cppc_get_transition_latency(cpu);
if (transition_delay_ns == CPUFREQ_ETERNAL) {
if (transition_delay_ns < 0) {
if (cpu_feature_enabled(X86_FEATURE_AMD_FAST_CPPC))
return AMD_PSTATE_FAST_CPPC_TRANSITION_DELAY;
else
@@ -891,10 +891,10 @@ static u32 amd_pstate_get_transition_delay_us(unsigned int cpu)
*/
static u32 amd_pstate_get_transition_latency(unsigned int cpu)
{
u32 transition_latency;
int transition_latency;
transition_latency = cppc_get_transition_latency(cpu);
if (transition_latency == CPUFREQ_ETERNAL)
if (transition_latency < 0)
return AMD_PSTATE_TRANSITION_LATENCY;
return transition_latency;

drivers/cpufreq/cppc_cpufreq.c

@@ -308,6 +308,16 @@ static int cppc_verify_policy(struct cpufreq_policy_data *policy)
return 0;
}
static unsigned int __cppc_cpufreq_get_transition_delay_us(unsigned int cpu)
{
int transition_latency_ns = cppc_get_transition_latency(cpu);
if (transition_latency_ns < 0)
return CPUFREQ_DEFAULT_TRANSITION_LATENCY_NS / NSEC_PER_USEC;
return transition_latency_ns / NSEC_PER_USEC;
}
/*
* The PCC subspace describes the rate at which platform can accept commands
* on the shared PCC channel (including READs which do not count towards freq
@@ -330,12 +340,12 @@ static unsigned int cppc_cpufreq_get_transition_delay_us(unsigned int cpu)
return 10000;
}
}
return cppc_get_transition_latency(cpu) / NSEC_PER_USEC;
return __cppc_cpufreq_get_transition_delay_us(cpu);
}
#else
static unsigned int cppc_cpufreq_get_transition_delay_us(unsigned int cpu)
{
return cppc_get_transition_latency(cpu) / NSEC_PER_USEC;
return __cppc_cpufreq_get_transition_delay_us(cpu);
}
#endif

drivers/cpufreq/cpufreq-dt.c

@@ -104,7 +104,7 @@ static int cpufreq_init(struct cpufreq_policy *policy)
transition_latency = dev_pm_opp_get_max_transition_latency(cpu_dev);
if (!transition_latency)
transition_latency = CPUFREQ_ETERNAL;
transition_latency = CPUFREQ_DEFAULT_TRANSITION_LATENCY_NS;
cpumask_copy(policy->cpus, priv->cpus);
policy->driver_data = priv;

drivers/cpufreq/imx6q-cpufreq.c

@@ -442,7 +442,7 @@ soc_opp_out:
}
if (of_property_read_u32(np, "clock-latency", &transition_latency))
transition_latency = CPUFREQ_ETERNAL;
transition_latency = CPUFREQ_DEFAULT_TRANSITION_LATENCY_NS;
/*
* Calculate the ramp time for max voltage change in the

drivers/cpufreq/mediatek-cpufreq-hw.c

@@ -309,7 +309,7 @@ static int mtk_cpufreq_hw_cpu_init(struct cpufreq_policy *policy)
latency = readl_relaxed(data->reg_bases[REG_FREQ_LATENCY]) * 1000;
if (!latency)
latency = CPUFREQ_ETERNAL;
latency = CPUFREQ_DEFAULT_TRANSITION_LATENCY_NS;
policy->cpuinfo.transition_latency = latency;
policy->fast_switch_possible = true;

drivers/cpufreq/mediatek-cpufreq.c

@@ -403,9 +403,11 @@ static int mtk_cpu_dvfs_info_init(struct mtk_cpu_dvfs_info *info, int cpu)
}
info->cpu_clk = clk_get(cpu_dev, "cpu");
if (IS_ERR(info->cpu_clk))
return dev_err_probe(cpu_dev, PTR_ERR(info->cpu_clk),
"cpu%d: failed to get cpu clk\n", cpu);
if (IS_ERR(info->cpu_clk)) {
ret = PTR_ERR(info->cpu_clk);
dev_err_probe(cpu_dev, ret, "cpu%d: failed to get cpu clk\n", cpu);
goto out_put_cci_dev;
}
info->inter_clk = clk_get(cpu_dev, "intermediate");
if (IS_ERR(info->inter_clk)) {
@@ -551,6 +553,10 @@ out_free_inter_clock:
out_free_mux_clock:
clk_put(info->cpu_clk);
out_put_cci_dev:
if (info->soc_data->ccifreq_supported)
put_device(info->cci_dev);
return ret;
}
@@ -568,6 +574,8 @@ static void mtk_cpu_dvfs_info_release(struct mtk_cpu_dvfs_info *info)
clk_put(info->inter_clk);
dev_pm_opp_of_cpumask_remove_table(&info->cpus);
dev_pm_opp_unregister_notifier(info->cpu_dev, &info->opp_nb);
if (info->soc_data->ccifreq_supported)
put_device(info->cci_dev);
}
static int mtk_cpufreq_init(struct cpufreq_policy *policy)

drivers/cpufreq/rcpufreq_dt.rs

@@ -28,15 +28,11 @@ fn find_supply_name_exact(dev: &Device, name: &str) -> Option<CString> {
/// Finds supply name for the CPU from DT.
fn find_supply_names(dev: &Device, cpu: cpu::CpuId) -> Option<KVec<CString>> {
// Try "cpu0" for older DTs, fallback to "cpu".
let name = (cpu.as_u32() == 0)
(cpu.as_u32() == 0)
.then(|| find_supply_name_exact(dev, "cpu0"))
.flatten()
.or_else(|| find_supply_name_exact(dev, "cpu"))?;
let mut list = KVec::with_capacity(1, GFP_KERNEL).ok()?;
list.push(name, GFP_KERNEL).ok()?;
Some(list)
.or_else(|| find_supply_name_exact(dev, "cpu"))
.and_then(|name| kernel::kvec![name].ok())
}
/// Represents the cpufreq dt device.
@@ -123,7 +119,7 @@ impl cpufreq::Driver for CPUFreqDTDriver {
let mut transition_latency = opp_table.max_transition_latency_ns() as u32;
if transition_latency == 0 {
transition_latency = cpufreq::ETERNAL_LATENCY_NS;
transition_latency = cpufreq::DEFAULT_TRANSITION_LATENCY_NS;
}
policy

drivers/cpufreq/scmi-cpufreq.c

@@ -294,7 +294,7 @@ static int scmi_cpufreq_init(struct cpufreq_policy *policy)
latency = perf_ops->transition_latency_get(ph, domain);
if (!latency)
latency = CPUFREQ_ETERNAL;
latency = CPUFREQ_DEFAULT_TRANSITION_LATENCY_NS;
policy->cpuinfo.transition_latency = latency;

drivers/cpufreq/scpi-cpufreq.c

@@ -157,7 +157,7 @@ static int scpi_cpufreq_init(struct cpufreq_policy *policy)
latency = scpi_ops->get_transition_latency(cpu_dev);
if (!latency)
latency = CPUFREQ_ETERNAL;
latency = CPUFREQ_DEFAULT_TRANSITION_LATENCY_NS;
policy->cpuinfo.transition_latency = latency;

drivers/cpufreq/spear-cpufreq.c

@@ -182,7 +182,7 @@ static int spear_cpufreq_probe(struct platform_device *pdev)
if (of_property_read_u32(np, "clock-latency",
&spear_cpufreq.transition_latency))
spear_cpufreq.transition_latency = CPUFREQ_ETERNAL;
spear_cpufreq.transition_latency = CPUFREQ_DEFAULT_TRANSITION_LATENCY_NS;
cnt = of_property_count_u32_elems(np, "cpufreq_tbl");
if (cnt <= 0) {

drivers/cpufreq/tegra186-cpufreq.c

@@ -93,10 +93,14 @@ static int tegra186_cpufreq_set_target(struct cpufreq_policy *policy,
{
struct tegra186_cpufreq_data *data = cpufreq_get_driver_data();
struct cpufreq_frequency_table *tbl = policy->freq_table + index;
unsigned int edvd_offset = data->cpus[policy->cpu].edvd_offset;
unsigned int edvd_offset;
u32 edvd_val = tbl->driver_data;
u32 cpu;
writel(edvd_val, data->regs + edvd_offset);
for_each_cpu(cpu, policy->cpus) {
edvd_offset = data->cpus[cpu].edvd_offset;
writel(edvd_val, data->regs + edvd_offset);
}
return 0;
}
@@ -132,13 +136,14 @@ static struct cpufreq_driver tegra186_cpufreq_driver = {
static struct cpufreq_frequency_table *init_vhint_table(
struct platform_device *pdev, struct tegra_bpmp *bpmp,
struct tegra186_cpufreq_cluster *cluster, unsigned int cluster_id)
struct tegra186_cpufreq_cluster *cluster, unsigned int cluster_id,
int *num_rates)
{
struct cpufreq_frequency_table *table;
struct mrq_cpu_vhint_request req;
struct tegra_bpmp_message msg;
struct cpu_vhint_data *data;
int err, i, j, num_rates = 0;
int err, i, j;
dma_addr_t phys;
void *virt;
@@ -168,6 +173,7 @@ static struct cpufreq_frequency_table *init_vhint_table(
goto free;
}
*num_rates = 0;
for (i = data->vfloor; i <= data->vceil; i++) {
u16 ndiv = data->ndiv[i];
@@ -178,10 +184,10 @@
if (i > 0 && ndiv == data->ndiv[i - 1])
continue;
num_rates++;
(*num_rates)++;
}
table = devm_kcalloc(&pdev->dev, num_rates + 1, sizeof(*table),
table = devm_kcalloc(&pdev->dev, *num_rates + 1, sizeof(*table),
GFP_KERNEL);
if (!table) {
table = ERR_PTR(-ENOMEM);
@@ -223,7 +229,9 @@ static int tegra186_cpufreq_probe(struct platform_device *pdev)
{
struct tegra186_cpufreq_data *data;
struct tegra_bpmp *bpmp;
unsigned int i = 0, err;
unsigned int i = 0, err, edvd_offset;
int num_rates = 0;
u32 edvd_val, cpu;
data = devm_kzalloc(&pdev->dev,
struct_size(data, clusters, TEGRA186_NUM_CLUSTERS),
@@ -246,10 +254,21 @@ static int tegra186_cpufreq_probe(struct platform_device *pdev)
for (i = 0; i < TEGRA186_NUM_CLUSTERS; i++) {
struct tegra186_cpufreq_cluster *cluster = &data->clusters[i];
cluster->table = init_vhint_table(pdev, bpmp, cluster, i);
cluster->table = init_vhint_table(pdev, bpmp, cluster, i, &num_rates);
if (IS_ERR(cluster->table)) {
err = PTR_ERR(cluster->table);
goto put_bpmp;
} else if (!num_rates) {
err = -EINVAL;
goto put_bpmp;
}
for (cpu = 0; cpu < ARRAY_SIZE(tegra186_cpus); cpu++) {
if (data->cpus[cpu].bpmp_cluster_id == i) {
edvd_val = cluster->table[num_rates - 1].driver_data;
edvd_offset = data->cpus[cpu].edvd_offset;
writel(edvd_val, data->regs + edvd_offset);
}
}
}

drivers/pci/pci-sysfs.c

@@ -1517,8 +1517,9 @@ static ssize_t reset_method_store(struct device *dev,
return count;
}
pm_runtime_get_sync(dev);
struct device *pmdev __free(pm_runtime_put) = dev;
ACQUIRE(pm_runtime_active_try, pm)(dev);
if (ACQUIRE_ERR(pm_runtime_active_try, &pm))
return -ENXIO;
if (sysfs_streq(buf, "default")) {
pci_init_reset_methods(pdev);

include/acpi/cppc_acpi.h

@@ -160,7 +160,7 @@ extern unsigned int cppc_khz_to_perf(struct cppc_perf_caps *caps, unsigned int f
extern bool acpi_cpc_valid(void);
extern bool cppc_allow_fast_switch(void);
extern int acpi_get_psd_map(unsigned int cpu, struct cppc_cpudata *cpu_data);
extern unsigned int cppc_get_transition_latency(int cpu);
extern int cppc_get_transition_latency(int cpu);
extern bool cpc_ffh_supported(void);
extern bool cpc_supported_by_cpu(void);
extern int cpc_read_ffh(int cpunum, struct cpc_reg *reg, u64 *val);
@@ -216,9 +216,9 @@ static inline bool cppc_allow_fast_switch(void)
{
return false;
}
static inline unsigned int cppc_get_transition_latency(int cpu)
static inline int cppc_get_transition_latency(int cpu)
{
return CPUFREQ_ETERNAL;
return -ENODATA;
}
static inline bool cpc_ffh_supported(void)
{

include/linux/cpufreq.h

@@ -26,12 +26,10 @@
*********************************************************************/
/*
* Frequency values here are CPU kHz
*
* Maximum transition latency is in nanoseconds - if it's unknown,
* CPUFREQ_ETERNAL shall be used.
*/
#define CPUFREQ_ETERNAL (-1)
#define CPUFREQ_DEFAULT_TRANSITION_LATENCY_NS NSEC_PER_MSEC
#define CPUFREQ_NAME_LEN 16
/* Print length for names. Extra 1 space for accommodating '\n' in prints */
#define CPUFREQ_NAME_PLEN (CPUFREQ_NAME_LEN + 1)

include/linux/pm_runtime.h

@@ -21,6 +21,7 @@
#define RPM_GET_PUT 0x04 /* Increment/decrement the
usage_count */
#define RPM_AUTO 0x08 /* Use autosuspend_delay */
#define RPM_TRANSPARENT 0x10 /* Succeed if runtime PM is disabled */
/*
* Use this for defining a set of PM operations to be used in all situations
@@ -350,13 +351,12 @@ static inline int pm_runtime_force_resume(struct device *dev) { return -ENXIO; }
* * 0: Success.
* * -EINVAL: Runtime PM error.
* * -EACCES: Runtime PM disabled.
* * -EAGAIN: Runtime PM usage_count non-zero, Runtime PM status change ongoing
* or device not in %RPM_ACTIVE state.
* * -EAGAIN: Runtime PM usage counter non-zero, Runtime PM status change
* ongoing or device not in %RPM_ACTIVE state.
* * -EBUSY: Runtime PM child_count non-zero.
* * -EPERM: Device PM QoS resume latency 0.
* * -EINPROGRESS: Suspend already in progress.
* * -ENOSYS: CONFIG_PM not enabled.
* * 1: Device already suspended.
* Other values and conditions for the above values are possible as returned by
* Runtime PM idle and suspend callbacks.
*/
@@ -370,14 +370,15 @@ static inline int pm_runtime_idle(struct device *dev)
* @dev: Target device.
*
* Return:
* * 1: Success; device was already suspended.
* * 0: Success.
* * -EINVAL: Runtime PM error.
* * -EACCES: Runtime PM disabled.
* * -EAGAIN: Runtime PM usage_count non-zero or Runtime PM status change ongoing.
* * -EAGAIN: Runtime PM usage counter non-zero or Runtime PM status change
* ongoing.
* * -EBUSY: Runtime PM child_count non-zero.
* * -EPERM: Device PM QoS resume latency 0.
* * -ENOSYS: CONFIG_PM not enabled.
* * 1: Device already suspended.
* Other values and conditions for the above values are possible as returned by
* Runtime PM suspend callbacks.
*/
@@ -396,14 +397,15 @@ static inline int pm_runtime_suspend(struct device *dev)
* engaging its "idle check" callback.
*
* Return:
* * 1: Success; device was already suspended.
* * 0: Success.
* * -EINVAL: Runtime PM error.
* * -EACCES: Runtime PM disabled.
* * -EAGAIN: Runtime PM usage_count non-zero or Runtime PM status change ongoing.
* * -EAGAIN: Runtime PM usage counter non-zero or Runtime PM status change
* ongoing.
* * -EBUSY: Runtime PM child_count non-zero.
* * -EPERM: Device PM QoS resume latency 0.
* * -ENOSYS: CONFIG_PM not enabled.
* * 1: Device already suspended.
* Other values and conditions for the above values are possible as returned by
* Runtime PM suspend callbacks.
*/
@@ -433,13 +435,12 @@ static inline int pm_runtime_resume(struct device *dev)
* * 0: Success.
* * -EINVAL: Runtime PM error.
* * -EACCES: Runtime PM disabled.
* * -EAGAIN: Runtime PM usage_count non-zero, Runtime PM status change ongoing
* or device not in %RPM_ACTIVE state.
* * -EAGAIN: Runtime PM usage counter non-zero, Runtime PM status change
* ongoing or device not in %RPM_ACTIVE state.
* * -EBUSY: Runtime PM child_count non-zero.
* * -EPERM: Device PM QoS resume latency 0.
* * -EINPROGRESS: Suspend already in progress.
* * -ENOSYS: CONFIG_PM not enabled.
* * 1: Device already suspended.
*/
static inline int pm_request_idle(struct device *dev)
{
@@ -464,15 +465,16 @@ static inline int pm_request_resume(struct device *dev)
* equivalent pm_runtime_autosuspend() for @dev asynchronously.
*
* Return:
* * 1: Success; device was already suspended.
* * 0: Success.
* * -EINVAL: Runtime PM error.
* * -EACCES: Runtime PM disabled.
* * -EAGAIN: Runtime PM usage_count non-zero or Runtime PM status change ongoing.
* * -EAGAIN: Runtime PM usage counter non-zero or Runtime PM status change
* ongoing.
* * -EBUSY: Runtime PM child_count non-zero.
* * -EPERM: Device PM QoS resume latency 0.
* * -EINPROGRESS: Suspend already in progress.
* * -ENOSYS: CONFIG_PM not enabled.
* * 1: Device already suspended.
*/
static inline int pm_request_autosuspend(struct device *dev)
{
@@ -511,6 +513,19 @@ static inline int pm_runtime_get_sync(struct device *dev)
return __pm_runtime_resume(dev, RPM_GET_PUT);
}
static inline int pm_runtime_get_active(struct device *dev, int rpmflags)
{
int ret;
ret = __pm_runtime_resume(dev, RPM_GET_PUT | rpmflags);
if (ret < 0) {
pm_runtime_put_noidle(dev);
return ret;
}
return 0;
}
/**
* pm_runtime_resume_and_get - Bump up usage counter of a device and resume it.
* @dev: Target device.
@@ -521,15 +536,7 @@ static inline int pm_runtime_get_sync(struct device *dev)
*/
static inline int pm_runtime_resume_and_get(struct device *dev)
{
int ret;
ret = __pm_runtime_resume(dev, RPM_GET_PUT);
if (ret < 0) {
pm_runtime_put_noidle(dev);
return ret;
}
return 0;
return pm_runtime_get_active(dev, 0);
}
/**
@@ -540,23 +547,22 @@ static inline int pm_runtime_resume_and_get(struct device *dev)
* equal to 0, queue up a work item for @dev like in pm_request_idle().
*
* Return:
* * 1: Success. Usage counter dropped to zero, but device was already suspended.
* * 0: Success.
* * -EINVAL: Runtime PM error.
* * -EACCES: Runtime PM disabled.
* * -EAGAIN: Runtime PM usage_count non-zero or Runtime PM status change ongoing.
* * -EAGAIN: Runtime PM usage counter became non-zero or Runtime PM status
* change ongoing.
* * -EBUSY: Runtime PM child_count non-zero.
* * -EPERM: Device PM QoS resume latency 0.
* * -EINPROGRESS: Suspend already in progress.
* * -ENOSYS: CONFIG_PM not enabled.
* * 1: Device already suspended.
*/
static inline int pm_runtime_put(struct device *dev)
{
return __pm_runtime_idle(dev, RPM_GET_PUT | RPM_ASYNC);
}
DEFINE_FREE(pm_runtime_put, struct device *, if (_T) pm_runtime_put(_T))
/**
* __pm_runtime_put_autosuspend - Drop device usage counter and queue autosuspend if 0.
* @dev: Target device.
@@ -565,15 +571,16 @@ DEFINE_FREE(pm_runtime_put, struct device *, if (_T) pm_runtime_put(_T))
* equal to 0, queue up a work item for @dev like in pm_request_autosuspend().
*
* Return:
* * 1: Success. Usage counter dropped to zero, but device was already suspended.
* * 0: Success.
* * -EINVAL: Runtime PM error.
* * -EACCES: Runtime PM disabled.
* * -EAGAIN: Runtime PM usage_count non-zero or Runtime PM status change ongoing.
* * -EAGAIN: Runtime PM usage counter became non-zero or Runtime PM status
* change ongoing.
* * -EBUSY: Runtime PM child_count non-zero.
* * -EPERM: Device PM QoS resume latency 0.
* * -EINPROGRESS: Suspend already in progress.
* * -ENOSYS: CONFIG_PM not enabled.
* * 1: Device already suspended.
*/
static inline int __pm_runtime_put_autosuspend(struct device *dev)
{
@@ -590,15 +597,16 @@ static inline int __pm_runtime_put_autosuspend(struct device *dev)
* in pm_request_autosuspend().
*
* Return:
* * 1: Success. Usage counter dropped to zero, but device was already suspended.
* * 0: Success.
* * -EINVAL: Runtime PM error.
* * -EACCES: Runtime PM disabled.
* * -EAGAIN: Runtime PM usage_count non-zero or Runtime PM status change ongoing.
* * -EAGAIN: Runtime PM usage counter became non-zero or Runtime PM status
* change ongoing.
* * -EBUSY: Runtime PM child_count non-zero.
* * -EPERM: Device PM QoS resume latency 0.
* * -EINPROGRESS: Suspend already in progress.
* * -ENOSYS: CONFIG_PM not enabled.
* * 1: Device already suspended.
*/
static inline int pm_runtime_put_autosuspend(struct device *dev)
{
@@ -606,6 +614,29 @@ static inline int pm_runtime_put_autosuspend(struct device *dev)
return __pm_runtime_put_autosuspend(dev);
}
DEFINE_GUARD(pm_runtime_noresume, struct device *,
pm_runtime_get_noresume(_T), pm_runtime_put_noidle(_T));
DEFINE_GUARD(pm_runtime_active, struct device *,
pm_runtime_get_sync(_T), pm_runtime_put(_T));
DEFINE_GUARD(pm_runtime_active_auto, struct device *,
pm_runtime_get_sync(_T), pm_runtime_put_autosuspend(_T));
/*
* Use the following guards with ACQUIRE()/ACQUIRE_ERR().
*
* The difference between the "_try" and "_try_enabled" variants is that the
* former do not produce an error when runtime PM is disabled for the given
* device.
*/
DEFINE_GUARD_COND(pm_runtime_active, _try,
pm_runtime_get_active(_T, RPM_TRANSPARENT))
DEFINE_GUARD_COND(pm_runtime_active, _try_enabled,
pm_runtime_resume_and_get(_T))
DEFINE_GUARD_COND(pm_runtime_active_auto, _try,
pm_runtime_get_active(_T, RPM_TRANSPARENT))
DEFINE_GUARD_COND(pm_runtime_active_auto, _try_enabled,
pm_runtime_resume_and_get(_T))
/**
* pm_runtime_put_sync - Drop device usage counter and run "idle check" if 0.
* @dev: Target device.
@@ -619,14 +650,15 @@ static inline int pm_runtime_put_autosuspend(struct device *dev)
* if it returns an error code.
*
* Return:
* * 1: Success. Usage counter dropped to zero, but device was already suspended.
* * 0: Success.
* * -EINVAL: Runtime PM error.
* * -EACCES: Runtime PM disabled.
* * -EAGAIN: Runtime PM usage_count non-zero or Runtime PM status change ongoing.
* * -EAGAIN: Runtime PM usage counter became non-zero or Runtime PM status
* change ongoing.
* * -EBUSY: Runtime PM child_count non-zero.
* * -EPERM: Device PM QoS resume latency 0.
* * -ENOSYS: CONFIG_PM not enabled.
* * 1: Device already suspended.
* Other values and conditions for the above values are possible as returned by
* Runtime PM suspend callbacks.
*/
@@ -646,15 +678,15 @@ static inline int pm_runtime_put_sync(struct device *dev)
* if it returns an error code.
*
* Return:
* * 1: Success. Usage counter dropped to zero, but device was already suspended.
* * 0: Success.
* * -EINVAL: Runtime PM error.
* * -EACCES: Runtime PM disabled.
* * -EAGAIN: Runtime PM usage_count non-zero or Runtime PM status change ongoing.
* * -EAGAIN: usage_count non-zero or Runtime PM status change ongoing.
* * -EAGAIN: Runtime PM usage counter became non-zero or Runtime PM status
* change ongoing.
* * -EBUSY: Runtime PM child_count non-zero.
* * -EPERM: Device PM QoS resume latency 0.
* * -ENOSYS: CONFIG_PM not enabled.
* * 1: Device already suspended.
* Other values and conditions for the above values are possible as returned by
* Runtime PM suspend callbacks.
*/
@@ -677,15 +709,16 @@ static inline int pm_runtime_put_sync_suspend(struct device *dev)
* if it returns an error code.
*
* Return:
* * 1: Success. Usage counter dropped to zero, but device was already suspended.
* * 0: Success.
* * -EINVAL: Runtime PM error.
* * -EACCES: Runtime PM disabled.
* * -EAGAIN: Runtime PM usage_count non-zero or Runtime PM status change ongoing.
* * -EAGAIN: Runtime PM usage counter became non-zero or Runtime PM status
* change ongoing.
* * -EBUSY: Runtime PM child_count non-zero.
* * -EPERM: Device PM QoS resume latency 0.
* * -EINPROGRESS: Suspend already in progress.
* * -ENOSYS: CONFIG_PM not enabled.
* * 1: Device already suspended.
* Other values and conditions for the above values are possible as returned by
* Runtime PM suspend callbacks.
*/

rust/kernel/cpufreq.rs

@@ -38,7 +38,8 @@ use macros::vtable;
const CPUFREQ_NAME_LEN: usize = bindings::CPUFREQ_NAME_LEN as usize;
/// Default transition latency value in nanoseconds.
pub const ETERNAL_LATENCY_NS: u32 = bindings::CPUFREQ_ETERNAL as u32;
pub const DEFAULT_TRANSITION_LATENCY_NS: u32 =
bindings::CPUFREQ_DEFAULT_TRANSITION_LATENCY_NS;
/// CPU frequency driver flags.
pub mod flags {
@@ -399,13 +400,13 @@ impl TableBuilder {
/// The following example demonstrates how to create a CPU frequency table.
///
/// ```
/// use kernel::cpufreq::{ETERNAL_LATENCY_NS, Policy};
/// use kernel::cpufreq::{DEFAULT_TRANSITION_LATENCY_NS, Policy};
///
/// fn update_policy(policy: &mut Policy) {
/// policy
/// .set_dvfs_possible_from_any_cpu(true)
/// .set_fast_switch_possible(true)
/// .set_transition_latency_ns(ETERNAL_LATENCY_NS);
/// .set_transition_latency_ns(DEFAULT_TRANSITION_LATENCY_NS);
///
/// pr_info!("The policy details are: {:?}\n", (policy.cpu(), policy.cur()));
/// }