Merge tag 'mm-stable-2026-02-11-19-22' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Pull MM updates from Andrew Morton:

 - "powerpc/64s: do not re-activate batched TLB flush" makes
   arch_{enter|leave}_lazy_mmu_mode() nest properly (Alexander Gordeev)

   It adds a generic enter/leave layer and switches architectures to use
   it. Various hacks were removed in the process.

 - "zram: introduce compressed data writeback" implements data
   compression for zram writeback (Richard Chang and Sergey Senozhatsky)

 - "mm: folio_zero_user: clear page ranges" adds clearing of contiguous
   page ranges for hugepages. Large improvements during demand faulting
   are demonstrated (David Hildenbrand)

 - "memcg cleanups" tidies up some memcg code (Chen Ridong)

 - "mm/damon: introduce {,max_}nr_snapshots and tracepoint for damos
   stats" improves DAMOS stat's provided information, deterministic
   control, and readability (SeongJae Park)

 - "selftests/mm: hugetlb cgroup charging: robustness fixes" fixes a few
   issues in the hugetlb cgroup charging selftests (Li Wang)

 - "Fix va_high_addr_switch.sh test failure - again" addresses several
   issues in the va_high_addr_switch test (Chunyu Hu)

 - "mm/damon/tests/core-kunit: extend existing test scenarios" improves
   the KUnit test coverage for DAMON (Shu Anzai)

 - "mm/khugepaged: fix dirty page handling for MADV_COLLAPSE" fixes a
   glitch in khugepaged which was causing madvise(MADV_COLLAPSE) to
   transiently return -EAGAIN (Shivank Garg)

 - "arch, mm: consolidate hugetlb early reservation" reworks and
   consolidates a pile of straggly code related to reservation of
   hugetlb memory from bootmem and creation of CMA areas for hugetlb
   (Mike Rapoport)

 - "mm: clean up anon_vma implementation" cleans up the anon_vma
   implementation in various ways (Lorenzo Stoakes)

 - "tweaks for __alloc_pages_slowpath()" does a little streamlining of
   the page allocator's slowpath code (Vlastimil Babka)

 - "memcg: separate private and public ID namespaces" cleans up the
   memcg ID code and prevents the internal-only private IDs from being
   exposed to userspace (Shakeel Butt)

 - "mm: hugetlb: allocate frozen gigantic folio" cleans up the
   allocation of frozen folios and avoids some atomic refcount
   operations (Kefeng Wang)

 - "mm/damon: advance DAMOS-based LRU sorting" improves DAMOS's movement
   of memory betewwn the active and inactive LRUs and adds auto-tuning
   of the ratio-based quotas and of monitoring intervals (SeongJae Park)

 - "Support page table check on PowerPC" makes
   CONFIG_PAGE_TABLE_CHECK_ENFORCED work on powerpc (Andrew Donnellan)

 - "nodemask: align nodes_and{,not} with underlying bitmap ops" makes
   nodes_and() and nodes_andnot() propagate the return values from the
   underlying bit operations, enabling some cleanup in calling code
   (Yury Norov)

 - "mm/damon: hide kdamond and kdamond_lock from API callers" cleans up
   some DAMON internal interfaces (SeongJae Park)

 - "mm/khugepaged: cleanups and scan limit fix" does some cleanup work
   in khupaged and fixes a scan limit accounting issue (Shivank Garg)

 - "mm: balloon infrastructure cleanups" goes to town on the balloon
   infrastructure and its page migration function. Mainly cleanups, also
   some locking simplification (David Hildenbrand)

 - "mm/vmscan: add tracepoint and reason for kswapd_failures reset" adds
   additional tracepoints to the page reclaim code (Jiayuan Chen)

 - "Replace wq users and add WQ_PERCPU to alloc_workqueue() users" is
   part of Marco's kernel-wide migration from the legacy workqueue APIs
   over to the preferred unbound workqueues (Marco Crivellari)

 - "Various mm kselftests improvements/fixes" provides various unrelated
   improvements/fixes for the mm kselftests (Kevin Brodsky)

 - "mm: accelerate gigantic folio allocation" greatly speeds up gigantic
   folio allocation, mainly by avoiding unnecessary work in
   pfn_range_valid_contig() (Kefeng Wang)

 - "selftests/damon: improve leak detection and wss estimation
   reliability" improves the reliability of two of the DAMON selftests
   (SeongJae Park)

 - "mm/damon: cleanup kdamond, damon_call(), damos filter and
   DAMON_MIN_REGION" does some cleanup work in the core DAMON code
   (SeongJae Park)

 - "Docs/mm/damon: update intro, modules, maintainer profile, and misc"
   performs maintenance work on the DAMON documentation (SeongJae Park)

 - "mm: add and use vma_assert_stabilised() helper" refactors and cleans
   up the core VMA code. The main aim here is to be able to use the mmap
   write lock's lockdep state to perform various assertions regarding
   the locking which the VMA code requires (Lorenzo Stoakes)

 - "mm, swap: swap table phase II: unify swapin use" removes some old
   swap code (swap cache bypassing and swap synchronization) which
   wasn't working very well. Various other cleanups and simplifications
   were made. The end result is a 20% speedup in one benchmark (Kairui
   Song)

 - "enable PT_RECLAIM on more 64-bit architectures" makes PT_RECLAIM
   available on 64-bit alpha, loongarch, mips, parisc, and um. Various
   cleanups were performed along the way (Qi Zheng)
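
For the lazy MMU series above, the key behavioural change is that the
enter/leave pairs may now nest. Below is a minimal sketch of a
nesting-aware generic layer; the counter, its per-CPU placement and the
lazy_mmu_mode_{enter,leave}() names are illustrative assumptions, not
the series' exact code:

    /* Only the outermost pair reaches the arch hooks, so nested
     * sections degrade to harmless counter updates. */
    static DEFINE_PER_CPU(int, lazy_mmu_nesting);

    static inline void lazy_mmu_mode_enter(void)
    {
            if (this_cpu_inc_return(lazy_mmu_nesting) == 1)
                    arch_enter_lazy_mmu_mode();  /* start batching */
    }

    static inline void lazy_mmu_mode_leave(void)
    {
            if (this_cpu_dec_return(lazy_mmu_nesting) == 0)
                    arch_leave_lazy_mmu_mode(); /* flush batched updates */
    }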
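
For the folio_zero_user series, the win comes from clearing a hugepage
in contiguous multi-page chunks instead of page by page, so that wide
clearing primitives and better cache behaviour can be exploited. A
sketch of the idea only; zero_folio_by_ranges() and the clear_pages()
entry point are illustrative stand-ins, not the series' exact
interface:

    static void zero_folio_by_ranges(struct folio *folio)
    {
            unsigned long i, nr = folio_nr_pages(folio);
            const unsigned long chunk = 16; /* illustrative batch size */

            for (i = 0; i < nr; i += chunk) {
                    unsigned long n = min(chunk, nr - i);

                    /* One call per contiguous range, rather than one
                     * clear_user_highpage() per base page. */
                    clear_pages(folio_address(folio) + i * PAGE_SIZE, n);
            }
    }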
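
For the frozen gigantic folio series, the diff below adds
alloc_contig_frozen_pages_noprof() and friends, which hand back pages
whose refcount is still zero. A hedged usage sketch, assuming the
non-_noprof wrapper follows the usual alloc-tagging convention, with
grab_frozen_gigantic_folio() standing in for the hugetlb caller:

    static struct folio *grab_frozen_gigantic_folio(int order, int nid,
                                                    nodemask_t *nodemask)
    {
            /* __GFP_COMP requests one compound page covering the range. */
            struct page *page = alloc_contig_frozen_pages(1UL << order,
                                            GFP_KERNEL | __GFP_COMP,
                                            nid, nodemask);

            /* The folio comes back frozen (refcount 0); the caller takes
             * the first reference only when publishing it, instead of
             * atomically freezing an already-refcounted range. */
            return page ? page_folio(page) : NULL;
    }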
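
The nodemask change mirrors the underlying bitmap API: bitmap_and()
already returns whether the result is non-empty, so propagating that
value removes a separate emptiness test. A before/after sketch (the
caller shape is illustrative):

    /* Before: AND first, then test for emptiness separately. */
    nodes_and(tmp, a, b);
    if (nodes_empty(tmp))
            return -EINVAL;

    /* After: nodes_and() propagates bitmap_and()'s return value,
     * which is true when the intersection is non-empty. */
    if (!nodes_and(tmp, a, b))
            return -EINVAL;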
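
For the vma_assert_stabilised() series, the point of leaning on the
mmap write lock's lockdep state is that "this VMA cannot go away" can
be asserted without extra bookkeeping. A rough sketch of the idea
only; the real helper's exact checks live in the series:

    static inline void vma_assert_stabilised(struct vm_area_struct *vma)
    {
            /* Stable if the mmap lock is held... */
            if (rwsem_is_locked(&vma->vm_mm->mmap_lock))
                    return;

            /* ...otherwise the per-VMA lock must pin the VMA. */
            vma_assert_locked(vma);
    }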

* tag 'mm-stable-2026-02-11-19-22' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (325 commits)
  mm/memory: handle non-split locks correctly in zap_empty_pte_table()
  mm: move pte table reclaim code to memory.c
  mm: make PT_RECLAIM depends on MMU_GATHER_RCU_TABLE_FREE
  mm: convert __HAVE_ARCH_TLB_REMOVE_TABLE to CONFIG_HAVE_ARCH_TLB_REMOVE_TABLE config
  um: mm: enable MMU_GATHER_RCU_TABLE_FREE
  parisc: mm: enable MMU_GATHER_RCU_TABLE_FREE
  mips: mm: enable MMU_GATHER_RCU_TABLE_FREE
  LoongArch: mm: enable MMU_GATHER_RCU_TABLE_FREE
  alpha: mm: enable MMU_GATHER_RCU_TABLE_FREE
  mm: change mm/pt_reclaim.c to use asm/tlb.h instead of asm-generic/tlb.h
  mm/damon/stat: remove __read_mostly from memory_idle_ms_percentiles
  zsmalloc: make common caches global
  mm: add SPDX id lines to some mm source files
  mm/zswap: use %pe to print error pointers
  mm/vmscan: use %pe to print error pointers
  mm/readahead: fix typo in comment
  mm: khugepaged: fix NR_FILE_PAGES and NR_SHMEM in collapse_file()
  mm: refactor vma_map_pages to use vm_insert_pages
  mm/damon: unify address range representation with damon_addr_range
  mm/cma: replace snprintf with strscpy in cma_new_area
  ...
Committed by Linus Torvalds, 2026-02-12 11:32:37 -08:00
332 changed files with 6257 additions and 5611 deletions

Diff of mm/page_alloc.c:
@@ -1,6 +1,5 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
- * linux/mm/page_alloc.c
*
* Manages the free list, the system allocates free pages here.
* Note that kmalloc() lives in slab.c
@@ -1853,7 +1852,7 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
/*
* As memory initialization might be integrated into KASAN,
- * KASAN unpoisoning and memory initializion code must be
+ * KASAN unpoisoning and memory initialization code must be
* kept together to avoid discrepancies in behavior.
*/
@@ -2946,9 +2945,9 @@ static bool free_frozen_page_commit(struct zone *zone,
* 'hopeless node' to stay in that state for a while. Let
* kswapd work again by resetting kswapd_failures.
*/
- if (atomic_read(&pgdat->kswapd_failures) >= MAX_RECLAIM_RETRIES &&
+ if (kswapd_test_hopeless(pgdat) &&
next_memory_node(pgdat->node_id) < MAX_NUMNODES)
- atomic_set(&pgdat->kswapd_failures, 0);
+ kswapd_clear_hopeless(pgdat, KSWAPD_CLEAR_HOPELESS_PCP);
}
return ret;
}
@@ -3112,6 +3111,15 @@ void free_unref_folios(struct folio_batch *folios)
folio_batch_reinit(folios);
}
+ static void __split_page(struct page *page, unsigned int order)
+ {
+ VM_WARN_ON_PAGE(PageCompound(page), page);
+ split_page_owner(page, order, 0);
+ pgalloc_tag_split(page_folio(page), order, 0);
+ split_page_memcg(page, order);
+ }
/*
* split_page takes a non-compound higher-order page, and splits it into
* n (1<<order) sub-pages: page[0..n]
@@ -3124,14 +3132,12 @@ void split_page(struct page *page, unsigned int order)
{
int i;
VM_BUG_ON_PAGE(PageCompound(page), page);
- VM_BUG_ON_PAGE(!page_count(page), page);
+ VM_WARN_ON_PAGE(!page_count(page), page);
for (i = 1; i < (1 << order); i++)
set_page_refcounted(page + i);
- split_page_owner(page, order, 0);
- pgalloc_tag_split(page_folio(page), order, 0);
- split_page_memcg(page, order);
+ __split_page(page, order);
}
EXPORT_SYMBOL_GPL(split_page);
@@ -4699,7 +4705,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
struct alloc_context *ac)
{
bool can_direct_reclaim = gfp_mask & __GFP_DIRECT_RECLAIM;
- bool can_compact = gfp_compaction_allowed(gfp_mask);
+ bool can_compact = can_direct_reclaim && gfp_compaction_allowed(gfp_mask);
bool nofail = gfp_mask & __GFP_NOFAIL;
const bool costly_order = order > PAGE_ALLOC_COSTLY_ORDER;
struct page *page = NULL;
@@ -4712,6 +4718,8 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
unsigned int cpuset_mems_cookie;
unsigned int zonelist_iter_cookie;
int reserve_flags;
+ bool compact_first = false;
+ bool can_retry_reserves = true;
if (unlikely(nofail)) {
/*
@@ -4735,6 +4743,19 @@ restart:
cpuset_mems_cookie = read_mems_allowed_begin();
zonelist_iter_cookie = zonelist_iter_begin();
+ /*
+ * For costly allocations, try direct compaction first, as it's likely
+ * that we have enough base pages and don't need to reclaim. For non-
+ * movable high-order allocations, do that as well, as compaction will
+ * try prevent permanent fragmentation by migrating from blocks of the
+ * same migratetype.
+ */
+ if (can_compact && (costly_order || (order > 0 &&
+ ac->migratetype != MIGRATE_MOVABLE))) {
+ compact_first = true;
+ compact_priority = INIT_COMPACT_PRIORITY;
+ }
/*
* The fast path uses conservative alloc_flags to succeed only until
* kswapd needs to be woken up, and to avoid the cost of setting up
@@ -4766,6 +4787,8 @@ restart:
goto nopage;
}
+ retry:
/* Ensure kswapd doesn't accidentally go to sleep as long as we loop */
if (alloc_flags & ALLOC_KSWAPD)
wake_all_kswapds(order, gfp_mask, ac);
@@ -4777,74 +4800,6 @@ restart:
if (page)
goto got_pg;
- /*
- * For costly allocations, try direct compaction first, as it's likely
- * that we have enough base pages and don't need to reclaim. For non-
- * movable high-order allocations, do that as well, as compaction will
- * try prevent permanent fragmentation by migrating from blocks of the
- * same migratetype.
- * Don't try this for allocations that are allowed to ignore
- * watermarks, as the ALLOC_NO_WATERMARKS attempt didn't yet happen.
- */
- if (can_direct_reclaim && can_compact &&
- (costly_order ||
- (order > 0 && ac->migratetype != MIGRATE_MOVABLE))
- && !gfp_pfmemalloc_allowed(gfp_mask)) {
- page = __alloc_pages_direct_compact(gfp_mask, order,
- alloc_flags, ac,
- INIT_COMPACT_PRIORITY,
- &compact_result);
- if (page)
- goto got_pg;
- /*
- * Checks for costly allocations with __GFP_NORETRY, which
- * includes some THP page fault allocations
- */
- if (costly_order && (gfp_mask & __GFP_NORETRY)) {
- /*
- * If allocating entire pageblock(s) and compaction
- * failed because all zones are below low watermarks
- * or is prohibited because it recently failed at this
- * order, fail immediately unless the allocator has
- * requested compaction and reclaim retry.
- *
- * Reclaim is
- * - potentially very expensive because zones are far
- * below their low watermarks or this is part of very
- * bursty high order allocations,
- * - not guaranteed to help because isolate_freepages()
- * may not iterate over freed pages as part of its
- * linear scan, and
- * - unlikely to make entire pageblocks free on its
- * own.
- */
- if (compact_result == COMPACT_SKIPPED ||
- compact_result == COMPACT_DEFERRED)
- goto nopage;
- /*
- * Looks like reclaim/compaction is worth trying, but
- * sync compaction could be very expensive, so keep
- * using async compaction.
- */
- compact_priority = INIT_COMPACT_PRIORITY;
- }
- }
- retry:
- /*
- * Deal with possible cpuset update races or zonelist updates to avoid
- * infinite retries.
- */
- if (check_retry_cpuset(cpuset_mems_cookie, ac) ||
- check_retry_zonelist(zonelist_iter_cookie))
- goto restart;
- /* Ensure kswapd doesn't accidentally go to sleep as long as we loop */
- if (alloc_flags & ALLOC_KSWAPD)
- wake_all_kswapds(order, gfp_mask, ac);
reserve_flags = __gfp_pfmemalloc_flags(gfp_mask);
if (reserve_flags)
alloc_flags = gfp_to_alloc_flags_cma(gfp_mask, reserve_flags) |
@@ -4859,12 +4814,18 @@ retry:
ac->nodemask = NULL;
ac->preferred_zoneref = first_zones_zonelist(ac->zonelist,
ac->highest_zoneidx, ac->nodemask);
}
/* Attempt with potentially adjusted zonelist and alloc_flags */
page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);
if (page)
goto got_pg;
+ /*
+ * The first time we adjust anything due to being allowed to
+ * ignore memory policies or watermarks, retry immediately. This
+ * allows us to keep the first allocation attempt optimistic so
+ * it can succeed in a zone that is still above watermarks.
+ */
+ if (can_retry_reserves) {
+ can_retry_reserves = false;
+ goto retry;
+ }
}
/* Caller is not willing to reclaim, we can't balance anything */
if (!can_direct_reclaim)
@@ -4875,10 +4836,12 @@ retry:
goto nopage;
/* Try direct reclaim and then allocating */
- page = __alloc_pages_direct_reclaim(gfp_mask, order, alloc_flags, ac,
- &did_some_progress);
- if (page)
- goto got_pg;
+ if (!compact_first) {
+ page = __alloc_pages_direct_reclaim(gfp_mask, order, alloc_flags,
+ ac, &did_some_progress);
+ if (page)
+ goto got_pg;
+ }
/* Try direct compaction and then allocating */
page = __alloc_pages_direct_compact(gfp_mask, order, alloc_flags, ac,
@@ -4886,6 +4849,33 @@ retry:
if (page)
goto got_pg;
+ if (compact_first) {
+ /*
+ * THP page faults may attempt local node only first, but are
+ * then allowed to only compact, not reclaim, see
+ * alloc_pages_mpol().
+ *
+ * Compaction has failed above and we don't want such THP
+ * allocations to put reclaim pressure on a single node in a
+ * situation where other nodes might have plenty of available
+ * memory.
+ */
+ if (gfp_has_flags(gfp_mask, __GFP_NORETRY | __GFP_THISNODE))
+ goto nopage;
+ /*
+ * For the initial compaction attempt we have lowered its
+ * priority. Restore it for further retries, if those are
+ * allowed. With __GFP_NORETRY there will be a single round of
+ * reclaim and compaction with the lowered priority.
+ */
+ if (!(gfp_mask & __GFP_NORETRY))
+ compact_priority = DEF_COMPACT_PRIORITY;
+ compact_first = false;
+ goto retry;
+ }
/* Do not loop if specifically requested */
if (gfp_mask & __GFP_NORETRY)
goto nopage;
@@ -4898,6 +4888,15 @@ retry:
!(gfp_mask & __GFP_RETRY_MAYFAIL)))
goto nopage;
+ /*
+ * Deal with possible cpuset update races or zonelist updates to avoid
+ * infinite retries. No "goto retry;" can be placed above this check
+ * unless it can execute just once.
+ */
+ if (check_retry_cpuset(cpuset_mems_cookie, ac) ||
+ check_retry_zonelist(zonelist_iter_cookie))
+ goto restart;
if (should_reclaim_retry(gfp_mask, order, ac, alloc_flags,
did_some_progress > 0, &no_progress_loops))
goto retry;
@@ -5401,9 +5400,7 @@ static void *make_alloc_exact(unsigned long addr, unsigned int order,
struct page *page = virt_to_page((void *)addr);
struct page *last = page + nr;
- split_page_owner(page, order, 0);
- pgalloc_tag_split(page_folio(page), order, 0);
- split_page_memcg(page, order);
+ __split_page(page, order);
while (page < --last)
set_page_refcounted(last);
@@ -6896,7 +6893,7 @@ static int __alloc_contig_migrate_range(struct compact_control *cc,
return (ret < 0) ? ret : 0;
}
- static void split_free_pages(struct list_head *list, gfp_t gfp_mask)
+ static void split_free_frozen_pages(struct list_head *list, gfp_t gfp_mask)
{
int order;
@@ -6908,11 +6905,10 @@ static void split_free_pages(struct list_head *list, gfp_t gfp_mask)
int i;
post_alloc_hook(page, order, gfp_mask);
- set_page_refcounted(page);
if (!order)
continue;
- split_page(page, order);
+ __split_page(page, order);
/* Add all subpages to the order-0 head, in sequence. */
list_del(&page->lru);
@@ -6956,8 +6952,14 @@ static int __alloc_contig_verify_gfp_mask(gfp_t gfp_mask, gfp_t *gfp_cc_mask)
return 0;
}
+ static void __free_contig_frozen_range(unsigned long pfn, unsigned long nr_pages)
+ {
+ for (; nr_pages--; pfn++)
+ free_frozen_pages(pfn_to_page(pfn), 0);
+ }
/**
- * alloc_contig_range() -- tries to allocate given range of pages
+ * alloc_contig_frozen_range() -- tries to allocate given range of frozen pages
* @start: start PFN to allocate
* @end: one-past-the-last PFN to allocate
* @alloc_flags: allocation information
@@ -6972,12 +6974,15 @@ static int __alloc_contig_verify_gfp_mask(gfp_t gfp_mask, gfp_t *gfp_cc_mask)
* pageblocks in the range. Once isolated, the pageblocks should not
* be modified by others.
*
- * Return: zero on success or negative error code. On success all
- * pages which PFN is in [start, end) are allocated for the caller and
- * need to be freed with free_contig_range().
+ * All frozen pages which PFN is in [start, end) are allocated for the
+ * caller, and they could be freed with free_contig_frozen_range(),
+ * free_frozen_pages() also could be used to free compound frozen pages
+ * directly.
+ *
+ * Return: zero on success or negative error code.
*/
- int alloc_contig_range_noprof(unsigned long start, unsigned long end,
- acr_flags_t alloc_flags, gfp_t gfp_mask)
+ int alloc_contig_frozen_range_noprof(unsigned long start, unsigned long end,
+ acr_flags_t alloc_flags, gfp_t gfp_mask)
{
const unsigned int order = ilog2(end - start);
unsigned long outer_start, outer_end;
@@ -7093,19 +7098,18 @@ int alloc_contig_range_noprof(unsigned long start, unsigned long end,
}
if (!(gfp_mask & __GFP_COMP)) {
- split_free_pages(cc.freepages, gfp_mask);
+ split_free_frozen_pages(cc.freepages, gfp_mask);
/* Free head and tail (if any) */
if (start != outer_start)
- free_contig_range(outer_start, start - outer_start);
+ __free_contig_frozen_range(outer_start, start - outer_start);
if (end != outer_end)
- free_contig_range(end, outer_end - end);
+ __free_contig_frozen_range(end, outer_end - end);
} else if (start == outer_start && end == outer_end && is_power_of_2(end - start)) {
struct page *head = pfn_to_page(start);
check_new_pages(head, order);
prep_new_page(head, order, gfp_mask, 0);
- set_page_refcounted(head);
} else {
ret = -EINVAL;
WARN(true, "PFN range: requested [%lu, %lu), allocated [%lu, %lu)\n",
@@ -7115,36 +7119,86 @@ done:
undo_isolate_page_range(start, end);
return ret;
}
+ EXPORT_SYMBOL(alloc_contig_frozen_range_noprof);
+ /**
+ * alloc_contig_range() -- tries to allocate given range of pages
+ * @start: start PFN to allocate
+ * @end: one-past-the-last PFN to allocate
+ * @alloc_flags: allocation information
+ * @gfp_mask: GFP mask.
+ *
+ * This routine is a wrapper around alloc_contig_frozen_range(), it can't
+ * be used to allocate compound pages, the refcount of each allocated page
+ * will be set to one.
+ *
+ * All pages which PFN is in [start, end) are allocated for the caller,
+ * and should be freed with free_contig_range() or by manually calling
+ * __free_page() on each allocated page.
+ *
+ * Return: zero on success or negative error code.
+ */
+ int alloc_contig_range_noprof(unsigned long start, unsigned long end,
+ acr_flags_t alloc_flags, gfp_t gfp_mask)
+ {
+ int ret;
+ if (WARN_ON(gfp_mask & __GFP_COMP))
+ return -EINVAL;
+ ret = alloc_contig_frozen_range_noprof(start, end, alloc_flags, gfp_mask);
+ if (!ret)
+ set_pages_refcounted(pfn_to_page(start), end - start);
+ return ret;
+ }
+ EXPORT_SYMBOL(alloc_contig_range_noprof);
- static int __alloc_contig_pages(unsigned long start_pfn,
- unsigned long nr_pages, gfp_t gfp_mask)
+ static bool pfn_range_valid_contig(struct zone *z, unsigned long start_pfn,
+ unsigned long nr_pages, bool skip_hugetlb,
+ bool *skipped_hugetlb)
{
unsigned long end_pfn = start_pfn + nr_pages;
- return alloc_contig_range_noprof(start_pfn, end_pfn, ACR_FLAGS_NONE,
- gfp_mask);
- }
- static bool pfn_range_valid_contig(struct zone *z, unsigned long start_pfn,
- unsigned long nr_pages)
- {
- unsigned long i, end_pfn = start_pfn + nr_pages;
struct page *page;
- for (i = start_pfn; i < end_pfn; i++) {
- page = pfn_to_online_page(i);
+ while (start_pfn < end_pfn) {
+ unsigned long step = 1;
+ page = pfn_to_online_page(start_pfn);
if (!page)
return false;
if (page_zone(page) != z)
return false;
- if (PageReserved(page))
+ if (page_is_unmovable(z, page, PB_ISOLATE_MODE_OTHER, &step))
return false;
- if (PageHuge(page))
- return false;
+ /*
+ * Only consider ranges containing hugepages if those pages are
+ * smaller than the requested contiguous region. e.g.:
+ * Move 2MB pages to free up a 1GB range.
+ * Don't move 1GB pages to free up a 2MB range.
+ *
+ * This makes contiguous allocation more reliable if multiple
+ * hugepage sizes are used without causing needless movement.
+ */
+ if (PageHuge(page)) {
+ unsigned int order;
+ if (skip_hugetlb) {
+ *skipped_hugetlb = true;
+ return false;
+ }
+ page = compound_head(page);
+ order = compound_order(page);
+ if ((order >= MAX_FOLIO_ORDER) ||
+ (nr_pages <= (1 << order)))
+ return false;
+ }
+ start_pfn += step;
}
return true;
}
@@ -7158,7 +7212,7 @@ static bool zone_spans_last_pfn(const struct zone *zone,
}
/**
- * alloc_contig_pages() -- tries to find and allocate contiguous range of pages
+ * alloc_contig_frozen_pages() -- tries to find and allocate contiguous range of frozen pages
* @nr_pages: Number of contiguous pages to allocate
* @gfp_mask: GFP mask. Node/zone/placement hints limit the search; only some
* action and reclaim modifiers are supported. Reclaim modifiers
@@ -7166,28 +7220,34 @@ static bool zone_spans_last_pfn(const struct zone *zone,
* @nid: Target node
* @nodemask: Mask for other possible nodes
*
- * This routine is a wrapper around alloc_contig_range(). It scans over zones
- * on an applicable zonelist to find a contiguous pfn range which can then be
- * tried for allocation with alloc_contig_range(). This routine is intended
- * for allocation requests which can not be fulfilled with the buddy allocator.
+ * This routine is a wrapper around alloc_contig_frozen_range(). It scans over
+ * zones on an applicable zonelist to find a contiguous pfn range which can then
+ * be tried for allocation with alloc_contig_frozen_range(). This routine is
+ * intended for allocation requests which can not be fulfilled with the buddy
+ * allocator.
*
* The allocated memory is always aligned to a page boundary. If nr_pages is a
* power of two, then allocated range is also guaranteed to be aligned to same
* nr_pages (e.g. 1GB request would be aligned to 1GB).
*
- * Allocated pages can be freed with free_contig_range() or by manually calling
- * __free_page() on each allocated page.
+ * Allocated frozen pages need be freed with free_contig_frozen_range(),
+ * or by manually calling free_frozen_pages() on each allocated frozen
+ * non-compound page, for compound frozen pages could be freed with
+ * free_frozen_pages() directly.
*
- * Return: pointer to contiguous pages on success, or NULL if not successful.
+ * Return: pointer to contiguous frozen pages on success, or NULL if not successful.
*/
- struct page *alloc_contig_pages_noprof(unsigned long nr_pages, gfp_t gfp_mask,
- int nid, nodemask_t *nodemask)
+ struct page *alloc_contig_frozen_pages_noprof(unsigned long nr_pages,
+ gfp_t gfp_mask, int nid, nodemask_t *nodemask)
{
unsigned long ret, pfn, flags;
struct zonelist *zonelist;
struct zone *zone;
struct zoneref *z;
+ bool skip_hugetlb = true;
+ bool skipped_hugetlb = false;
+ retry:
zonelist = node_zonelist(nid, gfp_mask);
for_each_zone_zonelist_nodemask(zone, z, zonelist,
gfp_zone(gfp_mask), nodemask) {
@@ -7195,16 +7255,20 @@ struct page *alloc_contig_pages_noprof(unsigned long nr_pages, gfp_t gfp_mask,
pfn = ALIGN(zone->zone_start_pfn, nr_pages);
while (zone_spans_last_pfn(zone, pfn, nr_pages)) {
- if (pfn_range_valid_contig(zone, pfn, nr_pages)) {
+ if (pfn_range_valid_contig(zone, pfn, nr_pages,
+ skip_hugetlb,
+ &skipped_hugetlb)) {
/*
* We release the zone lock here because
- * alloc_contig_range() will also lock the zone
- * at some point. If there's an allocation
- * spinning on this lock, it may win the race
- * and cause alloc_contig_range() to fail...
+ * alloc_contig_frozen_range() will also lock
+ * the zone at some point. If there's an
+ * allocation spinning on this lock, it may
+ * win the race and cause allocation to fail.
*/
spin_unlock_irqrestore(&zone->lock, flags);
- ret = __alloc_contig_pages(pfn, nr_pages,
+ ret = alloc_contig_frozen_range_noprof(pfn,
+ pfn + nr_pages,
+ ACR_FLAGS_NONE,
gfp_mask);
if (!ret)
return pfn_to_page(pfn);
@@ -7214,35 +7278,96 @@ struct page *alloc_contig_pages_noprof(unsigned long nr_pages, gfp_t gfp_mask,
}
spin_unlock_irqrestore(&zone->lock, flags);
}
+ /*
+ * If we failed, retry the search, but treat regions with HugeTLB pages
+ * as valid targets. This retains fast-allocations on first pass
+ * without trying to migrate HugeTLB pages (which may fail). On the
+ * second pass, we will try moving HugeTLB pages when those pages are
+ * smaller than the requested contiguous region size.
+ */
+ if (skip_hugetlb && skipped_hugetlb) {
+ skip_hugetlb = false;
+ goto retry;
+ }
return NULL;
}
- #endif /* CONFIG_CONTIG_ALLOC */
+ EXPORT_SYMBOL(alloc_contig_frozen_pages_noprof);
- void free_contig_range(unsigned long pfn, unsigned long nr_pages)
+ /**
+ * alloc_contig_pages() -- tries to find and allocate contiguous range of pages
+ * @nr_pages: Number of contiguous pages to allocate
+ * @gfp_mask: GFP mask.
+ * @nid: Target node
+ * @nodemask: Mask for other possible nodes
+ *
+ * This routine is a wrapper around alloc_contig_frozen_pages(), it can't
+ * be used to allocate compound pages, the refcount of each allocated page
+ * will be set to one.
+ *
+ * Allocated pages can be freed with free_contig_range() or by manually
+ * calling __free_page() on each allocated page.
+ *
+ * Return: pointer to contiguous pages on success, or NULL if not successful.
+ */
+ struct page *alloc_contig_pages_noprof(unsigned long nr_pages, gfp_t gfp_mask,
+ int nid, nodemask_t *nodemask)
{
- unsigned long count = 0;
- struct folio *folio = pfn_folio(pfn);
+ struct page *page;
- if (folio_test_large(folio)) {
- int expected = folio_nr_pages(folio);
+ if (WARN_ON(gfp_mask & __GFP_COMP))
+ return NULL;
- if (nr_pages == expected)
- folio_put(folio);
- else
- WARN(true, "PFN %lu: nr_pages %lu != expected %d\n",
- pfn, nr_pages, expected);
+ page = alloc_contig_frozen_pages_noprof(nr_pages, gfp_mask, nid,
+ nodemask);
+ if (page)
+ set_pages_refcounted(page, nr_pages);
+ return page;
}
+ EXPORT_SYMBOL(alloc_contig_pages_noprof);
+ /**
+ * free_contig_frozen_range() -- free the contiguous range of frozen pages
+ * @pfn: start PFN to free
+ * @nr_pages: Number of contiguous frozen pages to free
+ *
+ * This can be used to free the allocated compound/non-compound frozen pages.
+ */
+ void free_contig_frozen_range(unsigned long pfn, unsigned long nr_pages)
+ {
+ struct page *first_page = pfn_to_page(pfn);
+ const unsigned int order = ilog2(nr_pages);
+ if (WARN_ON_ONCE(first_page != compound_head(first_page)))
+ return;
+ if (PageHead(first_page)) {
+ WARN_ON_ONCE(order != compound_order(first_page));
+ free_frozen_pages(first_page, order);
+ return;
+ }
- for (; nr_pages--; pfn++) {
- struct page *page = pfn_to_page(pfn);
+ __free_contig_frozen_range(pfn, nr_pages);
}
+ EXPORT_SYMBOL(free_contig_frozen_range);
- count += page_count(page) != 1;
- __free_page(page);
- }
- WARN(count != 0, "%lu pages are still in use!\n", count);
+ /**
+ * free_contig_range() -- free the contiguous range of pages
+ * @pfn: start PFN to free
+ * @nr_pages: Number of contiguous pages to free
+ *
+ * This can be only used to free the allocated non-compound pages.
+ */
+ void free_contig_range(unsigned long pfn, unsigned long nr_pages)
+ {
+ if (WARN_ON_ONCE(PageHead(pfn_to_page(pfn))))
+ return;
+ for (; nr_pages--; pfn++)
+ __free_page(pfn_to_page(pfn));
+ }
EXPORT_SYMBOL(free_contig_range);
+ #endif /* CONFIG_CONTIG_ALLOC */
/*
* Effectively disable pcplists for the zone by setting the high limit to 0
@@ -7658,7 +7783,7 @@ struct page *alloc_frozen_pages_nolock_noprof(gfp_t gfp_flags, int nid, unsigned
* unsafe in NMI. If spin_trylock() is called from hard IRQ the current
* task may be waiting for one rt_spin_lock, but rt_spin_trylock() will
* mark the task as the owner of another rt_spin_lock which will
- * confuse PI logic, so return immediately if called form hard IRQ or
+ * confuse PI logic, so return immediately if called from hard IRQ or
* NMI.
*
* Note, irqs_disabled() case is ok. This function can be called