Merge tag 'mm-stable-2026-02-11-19-22' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Pull MM updates from Andrew Morton:

 - "powerpc/64s: do not re-activate batched TLB flush" makes
   arch_{enter|leave}_lazy_mmu_mode() nest properly (Alexander Gordeev)

   It adds a generic enter/leave layer and switches architectures to use
   it. Various hacks were removed in the process.

 - "zram: introduce compressed data writeback" implements data
   compression for zram writeback (Richard Chang and Sergey Senozhatsky)

 - "mm: folio_zero_user: clear page ranges" adds clearing of contiguous
   page ranges for hugepages. Large improvements during demand faulting
   are demonstrated (David Hildenbrand)
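
   Illustratively, the direction is to clear a whole folio as one
   contiguous range rather than looping over PAGE_SIZE chunks;
   clear_pages() below is a hypothetical stand-in for the range-clearing
   primitive:

        static void zero_huge_folio(struct folio *folio)
        {
                /* One wide, streamable clear instead of a per-page
                 * memset()-style loop. */
                clear_pages(folio_address(folio), folio_nr_pages(folio));
        }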

 - "memcg cleanups" tidies up some memcg code (Chen Ridong)

 - "mm/damon: introduce {,max_}nr_snapshots and tracepoint for damos
   stats" improves DAMOS stat's provided information, deterministic
   control, and readability (SeongJae Park)

 - "selftests/mm: hugetlb cgroup charging: robustness fixes" fixes a few
   issues in the hugetlb cgroup charging selftests (Li Wang)

 - "Fix va_high_addr_switch.sh test failure - again" addresses several
   issues in the va_high_addr_switch test (Chunyu Hu)

 - "mm/damon/tests/core-kunit: extend existing test scenarios" improves
   the KUnit test coverage for DAMON (Shu Anzai)

 - "mm/khugepaged: fix dirty page handling for MADV_COLLAPSE" fixes a
   glitch in khugepaged which was causing madvise(MADV_COLLAPSE) to
   transiently return -EAGAIN (Shivank Garg)
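
   Before the fix, userspace hitting the glitch needed a retry loop
   along these lines (a sketch; MADV_COLLAPSE itself is real, though its
   availability depends on your kernel headers):

        #include <errno.h>
        #include <sys/mman.h>

        static int collapse(void *addr, size_t len)
        {
                int ret;

                /* Retry the transient -EAGAIN the bug produced. */
                do {
                        ret = madvise(addr, len, MADV_COLLAPSE);
                } while (ret == -1 && errno == EAGAIN);
                return ret;
        }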

 - "arch, mm: consolidate hugetlb early reservation" reworks and
   consolidates a pile of straggly code related to reservation of
   hugetlb memory from bootmem and creation of CMA areas for hugetlb
   (Mike Rapoport)

 - "mm: clean up anon_vma implementation" cleans up the anon_vma
   implementation in various ways (Lorenzo Stoakes)

 - "tweaks for __alloc_pages_slowpath()" does a little streamlining of
   the page allocator's slowpath code (Vlastimil Babka)

 - "memcg: separate private and public ID namespaces" cleans up the
   memcg ID code and prevents the internal-only private IDs from being
   exposed to userspace (Shakeel Butt)

 - "mm: hugetlb: allocate frozen gigantic folio" cleans up the
   allocation of frozen folios and avoids some atomic refcount
   operations (Kefeng Wang)

 - "mm/damon: advance DAMOS-based LRU sorting" improves DAMOS's movement
   of memory between the active and inactive LRUs and adds auto-tuning
   of the ratio-based quotas and of monitoring intervals (SeongJae Park)

 - "Support page table check on PowerPC" makes
   CONFIG_PAGE_TABLE_CHECK_ENFORCED work on powerpc (Andrew Donnellan)

 - "nodemask: align nodes_and{,not} with underlying bitmap ops" makes
   nodes_and() and nodes_andnot() propagate the return values from the
   underlying bit operations, enabling some cleanup in calling code
   (Yury Norov)
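
   With the return value propagated, a caller can fold the intersection
   and the emptiness check into one step, as in this hypothetical
   snippet ("requested" is made up; nodes_and() and node_states[] are
   real):

        nodemask_t allowed;

        /* nodes_and() now returns true iff the result is non-empty,
         * so a separate nodes_empty() test can go away. */
        if (!nodes_and(allowed, requested, node_states[N_MEMORY]))
                return -EINVAL;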

 - "mm/damon: hide kdamond and kdamond_lock from API callers" cleans up
   some DAMON internal interfaces (SeongJae Park)

 - "mm/khugepaged: cleanups and scan limit fix" does some cleanup work
   in khugepaged and fixes a scan limit accounting issue (Shivank Garg)

 - "mm: balloon infrastructure cleanups" goes to town on the balloon
   infrastructure and its page migration function. Mainly cleanups, also
   some locking simplification (David Hildenbrand)

 - "mm/vmscan: add tracepoint and reason for kswapd_failures reset" adds
   additional tracepoints to the page reclaim code (Jiayuan Chen)

 - "Replace wq users and add WQ_PERCPU to alloc_workqueue() users" is
   part of Marco's kernel-wide migration from the legacy workqueue APIs
   over to the preferred unbound workqueues (Marco Crivellari)
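
   The shape of such a conversion, illustratively (the queue name is
   made up; WQ_PERCPU comes from this effort and marks users that
   genuinely need per-CPU semantics, so the default can later move to
   unbound):

        /* before: implicitly per-CPU */
        wq = alloc_workqueue("my_wq", 0, 0);

        /* after: per-CPU behaviour requested explicitly */
        wq = alloc_workqueue("my_wq", WQ_PERCPU, 0);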

 - "Various mm kselftests improvements/fixes" provides various unrelated
   improvements/fixes for the mm kselftests (Kevin Brodsky)

 - "mm: accelerate gigantic folio allocation" greatly speeds up gigantic
   folio allocation, mainly by avoiding unnecessary work in
   pfn_range_valid_contig() (Kefeng Wang)

 - "selftests/damon: improve leak detection and wss estimation
   reliability" improves the reliability of two of the DAMON selftests
   (SeongJae Park)

 - "mm/damon: cleanup kdamond, damon_call(), damos filter and
   DAMON_MIN_REGION" does some cleanup work in the core DAMON code
   (SeongJae Park)

 - "Docs/mm/damon: update intro, modules, maintainer profile, and misc"
   performs maintenance work on the DAMON documentation (SeongJae Park)

 - "mm: add and use vma_assert_stabilised() helper" refactors and cleans
   up the core VMA code. The main aim here is to be able to use the mmap
   write lock's lockdep state to perform various assertions regarding
   the locking which the VMA code requires (Lorenzo Stoakes)
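
   Conceptually the assertion boils down to something like this sketch
   (the merged implementation may differ in detail):

        static inline void vma_assert_stabilised(struct vm_area_struct *vma)
        {
                /* A VMA is stable if its mm's mmap lock is held or,
                 * failing that, if the VMA itself is locked. */
                if (!rwsem_is_locked(&vma->vm_mm->mmap_lock))
                        vma_assert_locked(vma);
        }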

 - "mm, swap: swap table phase II: unify swapin use" removes some old
   swap code (swap cache bypassing and swap synchronization) which
   wasn't working very well. Various other cleanups and simplifications
   were made. The end result is a 20% speedup in one benchmark (Kairui
   Song)

 - "enable PT_RECLAIM on more 64-bit architectures" makes PT_RECLAIM
   available on 64-bit alpha, loongarch, mips, parisc, and um. Various
   cleanups were performed along the way (Qi Zheng)

* tag 'mm-stable-2026-02-11-19-22' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (325 commits)
  mm/memory: handle non-split locks correctly in zap_empty_pte_table()
  mm: move pte table reclaim code to memory.c
  mm: make PT_RECLAIM depends on MMU_GATHER_RCU_TABLE_FREE
  mm: convert __HAVE_ARCH_TLB_REMOVE_TABLE to CONFIG_HAVE_ARCH_TLB_REMOVE_TABLE config
  um: mm: enable MMU_GATHER_RCU_TABLE_FREE
  parisc: mm: enable MMU_GATHER_RCU_TABLE_FREE
  mips: mm: enable MMU_GATHER_RCU_TABLE_FREE
  LoongArch: mm: enable MMU_GATHER_RCU_TABLE_FREE
  alpha: mm: enable MMU_GATHER_RCU_TABLE_FREE
  mm: change mm/pt_reclaim.c to use asm/tlb.h instead of asm-generic/tlb.h
  mm/damon/stat: remove __read_mostly from memory_idle_ms_percentiles
  zsmalloc: make common caches global
  mm: add SPDX id lines to some mm source files
  mm/zswap: use %pe to print error pointers
  mm/vmscan: use %pe to print error pointers
  mm/readahead: fix typo in comment
  mm: khugepaged: fix NR_FILE_PAGES and NR_SHMEM in collapse_file()
  mm: refactor vma_map_pages to use vm_insert_pages
  mm/damon: unify address range representation with damon_addr_range
  mm/cma: replace snprintf with strscpy in cma_new_area
  ...
Author: Linus Torvalds
Date:   2026-02-12 11:32:37 -08:00

 332 files changed, 6257 insertions(+), 5611 deletions(-)


@@ -8,6 +8,11 @@
#include <string.h>
#include <time.h>
enum access_mode {
ACCESS_MODE_ONCE,
ACCESS_MODE_REPEAT,
};
int main(int argc, char *argv[])
{
char **regions;
@@ -15,10 +20,12 @@ int main(int argc, char *argv[])
int nr_regions;
int sz_region;
int access_time_ms;
enum access_mode mode = ACCESS_MODE_ONCE;
int i;
if (argc != 4) {
printf("Usage: %s <number> <size (bytes)> <time (ms)>\n",
if (argc < 4) {
printf("Usage: %s <number> <size (bytes)> <time (ms)> [mode]\n",
argv[0]);
return -1;
}
@@ -27,15 +34,21 @@ int main(int argc, char *argv[])
sz_region = atoi(argv[2]);
access_time_ms = atoi(argv[3]);
if (argc > 4 && !strcmp(argv[4], "repeat"))
mode = ACCESS_MODE_REPEAT;
regions = malloc(sizeof(*regions) * nr_regions);
for (i = 0; i < nr_regions; i++)
regions[i] = malloc(sz_region);
for (i = 0; i < nr_regions; i++) {
start_clock = clock();
while ((clock() - start_clock) * 1000 / CLOCKS_PER_SEC <
access_time_ms)
memset(regions[i], i, sz_region);
}
do {
for (i = 0; i < nr_regions; i++) {
start_clock = clock();
while ((clock() - start_clock) * 1000 / CLOCKS_PER_SEC
< access_time_ms)
memset(regions[i], i, sz_region);
}
} while (mode == ACCESS_MODE_REPEAT);
return 0;
}


@@ -14,6 +14,13 @@ then
exit $ksft_skip
fi
kmemleak="/sys/kernel/debug/kmemleak"
if [ ! -f "$kmemleak" ]
then
echo "$kmemleak not found"
exit $ksft_skip
fi
# ensure filter directory
echo 1 > "$damon_sysfs/kdamonds/nr_kdamonds"
echo 1 > "$damon_sysfs/kdamonds/0/contexts/nr_contexts"
@@ -22,22 +29,17 @@ echo 1 > "$damon_sysfs/kdamonds/0/contexts/0/schemes/0/filters/nr_filters"
filter_dir="$damon_sysfs/kdamonds/0/contexts/0/schemes/0/filters/0"
before_kb=$(grep Slab /proc/meminfo | awk '{print $2}')
# try to leak 3000 KiB
for i in {1..102400};
# try to leak 128 times
for i in {1..128};
do
echo "012345678901234567890123456789" > "$filter_dir/memcg_path"
done
after_kb=$(grep Slab /proc/meminfo | awk '{print $2}')
# expect up to 1500 KiB free from other tasks memory
expected_after_kb_max=$((before_kb + 1500))
if [ "$after_kb" -gt "$expected_after_kb_max" ]
echo scan > "$kmemleak"
kmemleak_report=$(cat "$kmemleak")
if [ "$kmemleak_report" = "" ]
then
echo "maybe memcg_path are leaking: $before_kb -> $after_kb"
exit 1
else
exit 0
fi
echo "$kmemleak_report"
exit 1


@@ -6,10 +6,10 @@ import time
import _damon_sysfs
def main():
# access two 10 MiB memory regions, 2 second per each
sz_region = 10 * 1024 * 1024
proc = subprocess.Popen(['./access_memory', '2', '%d' % sz_region, '2000'])
def pass_wss_estimation(sz_region):
# access two regions of given size, 2 seconds per each region
proc = subprocess.Popen(
['./access_memory', '2', '%d' % sz_region, '2000', 'repeat'])
kdamonds = _damon_sysfs.Kdamonds([_damon_sysfs.Kdamond(
contexts=[_damon_sysfs.DamonCtx(
ops='vaddr',
@@ -27,7 +27,7 @@ def main():
exit(1)
wss_collected = []
while proc.poll() == None:
while proc.poll() is None and len(wss_collected) < 40:
time.sleep(0.1)
err = kdamonds.kdamonds[0].update_schemes_tried_bytes()
if err != None:
@@ -36,20 +36,43 @@ def main():
wss_collected.append(
kdamonds.kdamonds[0].contexts[0].schemes[0].tried_bytes)
proc.terminate()
err = kdamonds.stop()
if err is not None:
print('kdamond stop failed: %s' % err)
exit(1)
wss_collected.sort()
acceptable_error_rate = 0.2
for percentile in [50, 75]:
sample = wss_collected[int(len(wss_collected) * percentile / 100)]
error_rate = abs(sample - sz_region) / sz_region
print('%d-th percentile (%d) error %f' %
(percentile, sample, error_rate))
print('%d-th percentile error %f (expect %d, result %d)' %
(percentile, error_rate, sz_region, sample))
if error_rate > acceptable_error_rate:
print('the error rate is not acceptable (> %f)' %
acceptable_error_rate)
print('samples are as below')
print('\n'.join(['%d' % wss for wss in wss_collected]))
exit(1)
for idx, wss in enumerate(wss_collected):
if idx < len(wss_collected) - 1 and \
wss_collected[idx + 1] == wss:
continue
print('%d/%d: %d' % (idx, len(wss_collected), wss))
return False
return True
def main():
# DAMON doesn't flush the TLB. If the system has a large TLB that can
# cover the whole test working set, DAMON cannot see the accesses. Test
# working sets of up to 160 MiB.
sz_region_mb = 10
max_sz_region_mb = 160
while sz_region_mb <= max_sz_region_mb:
test_pass = pass_wss_estimation(sz_region_mb * 1024 * 1024)
if test_pass is True:
exit(0)
sz_region_mb *= 2
exit(1)
if __name__ == '__main__':
main()


@@ -32,7 +32,6 @@ uffd-unit-tests
uffd-wp-mremap
mlock-intersect-test
mlock-random-test
virtual_address_range
gup_test
va_128TBswitch
map_fixed_noreplace


@@ -1,6 +1,10 @@
# SPDX-License-Identifier: GPL-2.0
# Makefile for mm selftests
# IMPORTANT: If you add a new test CATEGORY please add a simple wrapper
# script so kunit knows to run it, and add it to the list below.
# If you do not, YOUR TESTS WILL NOT RUN IN THE CI.
LOCAL_HDRS += $(selfdir)/mm/local_config.h $(top_srcdir)/mm/gup_test.h
LOCAL_HDRS += $(selfdir)/mm/mseal_helpers.h
@@ -44,14 +48,10 @@ LDLIBS = -lrt -lpthread -lm
# warnings.
CFLAGS += -U_FORTIFY_SOURCE
KDIR ?= /lib/modules/$(shell uname -r)/build
KDIR ?= $(if $(O),$(O),$(realpath ../../../..))
ifneq (,$(wildcard $(KDIR)/Module.symvers))
ifneq (,$(wildcard $(KDIR)/include/linux/page_frag_cache.h))
TEST_GEN_MODS_DIR := page_frag
else
PAGE_FRAG_WARNING = "missing page_frag_cache.h, please use a newer kernel"
endif
else
PAGE_FRAG_WARNING = "missing Module.symvers, please have the kernel built first"
endif
@@ -140,13 +140,36 @@ endif
ifneq (,$(filter $(ARCH),arm64 mips64 parisc64 powerpc riscv64 s390x sparc64 x86_64 s390))
TEST_GEN_FILES += va_high_addr_switch
ifneq ($(ARCH),riscv64)
TEST_GEN_FILES += virtual_address_range
endif
TEST_GEN_FILES += write_to_hugetlbfs
endif
TEST_PROGS := run_vmtests.sh
TEST_PROGS += ksft_compaction.sh
TEST_PROGS += ksft_cow.sh
TEST_PROGS += ksft_gup_test.sh
TEST_PROGS += ksft_hmm.sh
TEST_PROGS += ksft_hugetlb.sh
TEST_PROGS += ksft_hugevm.sh
TEST_PROGS += ksft_ksm.sh
TEST_PROGS += ksft_ksm_numa.sh
TEST_PROGS += ksft_madv_guard.sh
TEST_PROGS += ksft_madv_populate.sh
TEST_PROGS += ksft_memfd_secret.sh
TEST_PROGS += ksft_migration.sh
TEST_PROGS += ksft_mkdirty.sh
TEST_PROGS += ksft_mlock.sh
TEST_PROGS += ksft_mmap.sh
TEST_PROGS += ksft_mremap.sh
TEST_PROGS += ksft_pagemap.sh
TEST_PROGS += ksft_pfnmap.sh
TEST_PROGS += ksft_pkey.sh
TEST_PROGS += ksft_process_madv.sh
TEST_PROGS += ksft_process_mrelease.sh
TEST_PROGS += ksft_rmap.sh
TEST_PROGS += ksft_soft_dirty.sh
TEST_PROGS += ksft_thp.sh
TEST_PROGS += ksft_userfaultfd.sh
TEST_PROGS += ksft_vma_merge.sh
TEST_PROGS += ksft_vmalloc.sh
TEST_FILES := test_vmalloc.sh
TEST_FILES += test_hmm.sh
@@ -154,6 +177,7 @@ TEST_FILES += va_high_addr_switch.sh
TEST_FILES += charge_reserved_hugetlb.sh
TEST_FILES += hugetlb_reparenting_test.sh
TEST_FILES += test_page_frag.sh
TEST_FILES += run_vmtests.sh
# required by charge_reserved_hugetlb.sh
TEST_FILES += write_hugetlb_memory.sh
@@ -234,7 +258,7 @@ $(OUTPUT)/migration: LDLIBS += -lnuma
$(OUTPUT)/rmap: LDLIBS += -lnuma
local_config.mk local_config.h: check_config.sh
/bin/sh ./check_config.sh $(CC)
CC="$(CC)" CFLAGS="$(CFLAGS)" ./check_config.sh
EXTRA_CLEAN += local_config.mk local_config.h


@@ -100,7 +100,7 @@ function setup_cgroup() {
echo writing cgroup limit: "$cgroup_limit"
echo "$cgroup_limit" >$cgroup_path/$name/hugetlb.${MB}MB.$fault_limit_file
echo writing reseravation limit: "$reservation_limit"
echo writing reservation limit: "$reservation_limit"
echo "$reservation_limit" > \
$cgroup_path/$name/hugetlb.${MB}MB.$reservation_limit_file
@@ -112,41 +112,50 @@ function setup_cgroup() {
fi
}
function wait_for_file_value() {
local path="$1"
local expect="$2"
local max_tries=60
if [[ ! -r "$path" ]]; then
echo "ERROR: cannot read '$path', missing or permission denied"
return 1
fi
for ((i=1; i<=max_tries; i++)); do
local cur="$(cat "$path")"
if [[ "$cur" == "$expect" ]]; then
return 0
fi
echo "Waiting for $path to become '$expect' (current: '$cur') (try $i/$max_tries)"
sleep 1
done
echo "ERROR: timeout waiting for $path to become '$expect'"
return 1
}
function wait_for_hugetlb_memory_to_get_depleted() {
local cgroup="$1"
local path="$cgroup_path/$cgroup/hugetlb.${MB}MB.$reservation_usage_file"
# Wait for hugetlbfs memory to get depleted.
while [ $(cat $path) != 0 ]; do
echo Waiting for hugetlb memory to get depleted.
cat $path
sleep 0.5
done
wait_for_file_value "$path" "0"
}
function wait_for_hugetlb_memory_to_get_reserved() {
local cgroup="$1"
local size="$2"
local path="$cgroup_path/$cgroup/hugetlb.${MB}MB.$reservation_usage_file"
# Wait for hugetlbfs memory to get written.
while [ $(cat $path) != $size ]; do
echo Waiting for hugetlb memory reservation to reach size $size.
cat $path
sleep 0.5
done
wait_for_file_value "$path" "$size"
}
function wait_for_hugetlb_memory_to_get_written() {
local cgroup="$1"
local size="$2"
local path="$cgroup_path/$cgroup/hugetlb.${MB}MB.$fault_usage_file"
# Wait for hugetlbfs memory to get written.
while [ $(cat $path) != $size ]; do
echo Waiting for hugetlb memory to reach size $size.
cat $path
sleep 0.5
done
wait_for_file_value "$path" "$size"
}
function write_hugetlbfs_and_get_usage() {
@@ -290,7 +299,7 @@ function run_test() {
setup_cgroup "hugetlb_cgroup_test" "$cgroup_limit" "$reservation_limit"
mkdir -p /mnt/huge
mount -t hugetlbfs -o pagesize=${MB}M,size=256M none /mnt/huge
mount -t hugetlbfs -o pagesize=${MB}M none /mnt/huge
write_hugetlbfs_and_get_usage "hugetlb_cgroup_test" "$size" "$populate" \
"$write" "/mnt/huge/test" "$method" "$private" "$expect_failure" \
@@ -344,7 +353,7 @@ function run_multiple_cgroup_test() {
setup_cgroup "hugetlb_cgroup_test2" "$cgroup_limit2" "$reservation_limit2"
mkdir -p /mnt/huge
mount -t hugetlbfs -o pagesize=${MB}M,size=256M none /mnt/huge
mount -t hugetlbfs -o pagesize=${MB}M none /mnt/huge
write_hugetlbfs_and_get_usage "hugetlb_cgroup_test1" "$size1" \
"$populate1" "$write1" "/mnt/huge/test1" "$method" "$private" \


@@ -16,8 +16,7 @@ echo "#include <sys/types.h>" > $tmpfile_c
echo "#include <liburing.h>" >> $tmpfile_c
echo "int func(void) { return 0; }" >> $tmpfile_c
CC=${1:?"Usage: $0 <compiler> # example compiler: gcc"}
$CC -c $tmpfile_c -o $tmpfile_o >/dev/null 2>&1
$CC $CFLAGS -c $tmpfile_c -o $tmpfile_o
if [ -f $tmpfile_o ]; then
echo "#define LOCAL_CONFIG_HAVE_LIBURING 1" > $OUTPUT_H_FILE


@@ -75,6 +75,18 @@ static bool range_is_swapped(void *addr, size_t size)
return true;
}
static bool populate_page_checked(char *addr)
{
bool ret;
FORCE_READ(*addr);
ret = pagemap_is_populated(pagemap_fd, addr);
if (!ret)
ksft_print_msg("Failed to populate page\n");
return ret;
}
struct comm_pipes {
int child_ready[2];
int parent_ready[2];
@@ -1549,8 +1561,10 @@ static void run_with_zeropage(non_anon_test_fn fn, const char *desc)
}
/* Read from the page to populate the shared zeropage. */
FORCE_READ(*mem);
FORCE_READ(*smem);
if (!populate_page_checked(mem) || !populate_page_checked(smem)) {
log_test_result(KSFT_FAIL);
goto munmap;
}
fn(mem, smem, pagesize);
munmap:
@@ -1612,8 +1626,11 @@ static void run_with_huge_zeropage(non_anon_test_fn fn, const char *desc)
* the first sub-page and test if we get another sub-page populated
* automatically.
*/
FORCE_READ(mem);
FORCE_READ(smem);
if (!populate_page_checked(mem) || !populate_page_checked(smem)) {
log_test_result(KSFT_FAIL);
goto munmap;
}
if (!pagemap_is_populated(pagemap_fd, mem + pagesize) ||
!pagemap_is_populated(pagemap_fd, smem + pagesize)) {
ksft_test_result_skip("Did not get THPs populated\n");
@@ -1663,8 +1680,10 @@ static void run_with_memfd(non_anon_test_fn fn, const char *desc)
}
/* Fault the page in. */
FORCE_READ(mem);
FORCE_READ(smem);
if (!populate_page_checked(mem) || !populate_page_checked(smem)) {
log_test_result(KSFT_FAIL);
goto munmap;
}
fn(mem, smem, pagesize);
munmap:
@@ -1719,8 +1738,10 @@ static void run_with_tmpfile(non_anon_test_fn fn, const char *desc)
}
/* Fault the page in. */
FORCE_READ(mem);
FORCE_READ(smem);
if (!populate_page_checked(mem) || !populate_page_checked(smem)) {
log_test_result(KSFT_FAIL);
goto munmap;
}
fn(mem, smem, pagesize);
munmap:
@@ -1773,8 +1794,10 @@ static void run_with_memfd_hugetlb(non_anon_test_fn fn, const char *desc,
}
/* Fault the page in. */
FORCE_READ(mem);
FORCE_READ(smem);
if (!populate_page_checked(mem) || !populate_page_checked(smem)) {
log_test_result(KSFT_FAIL);
goto munmap;
}
fn(mem, smem, hugetlbsize);
munmap:


@@ -47,14 +47,7 @@ void write_fault_pages(void *addr, unsigned long nr_pages)
void read_fault_pages(void *addr, unsigned long nr_pages)
{
unsigned long i;
for (i = 0; i < nr_pages; i++) {
unsigned long *addr2 =
((unsigned long *)(addr + (i * huge_page_size)));
/* Prevent the compiler from optimizing out the entire loop: */
FORCE_READ(*addr2);
}
force_read_pages(addr, nr_pages, huge_page_size);
}
int main(int argc, char **argv)


@@ -0,0 +1,4 @@
#!/bin/sh -e
# SPDX-License-Identifier: GPL-2.0
./run_vmtests.sh -t compaction


@@ -0,0 +1,4 @@
#!/bin/sh -e
# SPDX-License-Identifier: GPL-2.0
./run_vmtests.sh -t cow


@@ -0,0 +1,4 @@
#!/bin/sh -e
# SPDX-License-Identifier: GPL-2.0
./run_vmtests.sh -t gup_test


@@ -0,0 +1,4 @@
#!/bin/sh -e
# SPDX-License-Identifier: GPL-2.0
./run_vmtests.sh -t hmm


@@ -0,0 +1,4 @@
#!/bin/sh -e
# SPDX-License-Identifier: GPL-2.0
./run_vmtests.sh -t hugetlb


@@ -0,0 +1,4 @@
#!/bin/sh -e
# SPDX-License-Identifier: GPL-2.0
./run_vmtests.sh -t hugevm


@@ -0,0 +1,4 @@
#!/bin/sh -e
# SPDX-License-Identifier: GPL-2.0
./run_vmtests.sh -t ksm


@@ -0,0 +1,4 @@
#!/bin/sh -e
# SPDX-License-Identifier: GPL-2.0
./run_vmtests.sh -t ksm_numa


@@ -0,0 +1,4 @@
#!/bin/sh -e
# SPDX-License-Identifier: GPL-2.0
./run_vmtests.sh -t madv_guard


@@ -0,0 +1,4 @@
#!/bin/sh -e
# SPDX-License-Identifier: GPL-2.0
./run_vmtests.sh -t madv_populate


@@ -0,0 +1,4 @@
#!/bin/sh -e
# SPDX-License-Identifier: GPL-2.0
./run_vmtests.sh -t mdwe


@@ -0,0 +1,4 @@
#!/bin/sh -e
# SPDX-License-Identifier: GPL-2.0
./run_vmtests.sh -t memfd_secret


@@ -0,0 +1,4 @@
#!/bin/sh -e
# SPDX-License-Identifier: GPL-2.0
./run_vmtests.sh -t migration


@@ -0,0 +1,4 @@
#!/bin/sh -e
# SPDX-License-Identifier: GPL-2.0
./run_vmtests.sh -t mkdirty


@@ -0,0 +1,4 @@
#!/bin/sh -e
# SPDX-License-Identifier: GPL-2.0
./run_vmtests.sh -t mlock


@@ -0,0 +1,4 @@
#!/bin/sh -e
# SPDX-License-Identifier: GPL-2.0
./run_vmtests.sh -t mmap


@@ -0,0 +1,4 @@
#!/bin/sh -e
# SPDX-License-Identifier: GPL-2.0
./run_vmtests.sh -t mremap


@@ -0,0 +1,4 @@
#!/bin/sh -e
# SPDX-License-Identifier: GPL-2.0
./run_vmtests.sh -t page_frag


@@ -0,0 +1,4 @@
#!/bin/sh -e
# SPDX-License-Identifier: GPL-2.0
./run_vmtests.sh -t pagemap


@@ -0,0 +1,4 @@
#!/bin/sh -e
# SPDX-License-Identifier: GPL-2.0
./run_vmtests.sh -t pfnmap


@@ -0,0 +1,4 @@
#!/bin/sh -e
# SPDX-License-Identifier: GPL-2.0
./run_vmtests.sh -t pkey


@@ -0,0 +1,4 @@
#!/bin/sh -e
# SPDX-License-Identifier: GPL-2.0
./run_vmtests.sh -t process_madv


@@ -0,0 +1,4 @@
#!/bin/sh -e
# SPDX-License-Identifier: GPL-2.0
./run_vmtests.sh -t process_mrelease


@@ -0,0 +1,4 @@
#!/bin/sh -e
# SPDX-License-Identifier: GPL-2.0
./run_vmtests.sh -t rmap


@@ -0,0 +1,4 @@
#!/bin/sh -e
# SPDX-License-Identifier: GPL-2.0
./run_vmtests.sh -t soft_dirty


@@ -0,0 +1,4 @@
#!/bin/sh -e
# SPDX-License-Identifier: GPL-2.0
./run_vmtests.sh -t thp


@@ -0,0 +1,4 @@
#!/bin/sh -e
# SPDX-License-Identifier: GPL-2.0
./run_vmtests.sh -t userfaultfd


@@ -0,0 +1,4 @@
#!/bin/sh -e
# SPDX-License-Identifier: GPL-2.0
./run_vmtests.sh -t vma_merge


@@ -0,0 +1,4 @@
#!/bin/sh -e
# SPDX-License-Identifier: GPL-2.0
./run_vmtests.sh -t vmalloc


@@ -1,5 +1,5 @@
PAGE_FRAG_TEST_DIR := $(realpath $(dir $(abspath $(lastword $(MAKEFILE_LIST)))))
KDIR ?= /lib/modules/$(shell uname -r)/build
KDIR ?= $(if $(O),$(O),$(realpath ../../../../..))
ifeq ($(V),1)
Q =


@@ -1052,11 +1052,10 @@ static void test_simple(void)
int sanity_tests(void)
{
unsigned long long mem_size, vec_size;
long ret, fd, i, buf_size;
long ret, fd, i, buf_size, nr_pages;
struct page_region *vec;
char *mem, *fmem;
struct stat sbuf;
char *tmp_buf;
/* 1. wrong operation */
mem_size = 10 * page_size;
@@ -1167,14 +1166,14 @@ int sanity_tests(void)
if (fmem == MAP_FAILED)
ksft_exit_fail_msg("error nomem %d %s\n", errno, strerror(errno));
tmp_buf = malloc(sbuf.st_size);
memcpy(tmp_buf, fmem, sbuf.st_size);
nr_pages = (sbuf.st_size + page_size - 1) / page_size;
force_read_pages(fmem, nr_pages, page_size);
ret = pagemap_ioctl(fmem, sbuf.st_size, vec, vec_size, 0, 0,
0, PAGEMAP_NON_WRITTEN_BITS, 0, PAGEMAP_NON_WRITTEN_BITS);
ksft_test_result(ret >= 0 && vec[0].start == (uintptr_t)fmem &&
LEN(vec[0]) == ceilf((float)sbuf.st_size/page_size) &&
LEN(vec[0]) == nr_pages &&
(vec[0].categories & PAGE_IS_FILE),
"%s Memory mapped file\n", __func__);
@@ -1553,7 +1552,7 @@ int main(int __attribute__((unused)) argc, char *argv[])
ksft_print_header();
if (init_uffd())
ksft_exit_pass();
ksft_exit_skip("Failed to initialize userfaultfd\n");
ksft_set_plan(117);
@@ -1562,7 +1561,7 @@ int main(int __attribute__((unused)) argc, char *argv[])
pagemap_fd = open(PAGEMAP, O_RDONLY);
if (pagemap_fd < 0)
return -EINVAL;
ksft_exit_fail_msg("Failed to open " PAGEMAP "\n");
/* 1. Sanity testing */
sanity_tests_sd();
@@ -1734,5 +1733,5 @@ int main(int __attribute__((unused)) argc, char *argv[])
zeropfn_tests();
close(pagemap_fd);
ksft_exit_pass();
ksft_finished();
}


@@ -25,8 +25,12 @@
#include "kselftest_harness.h"
#include "vm_util.h"
#define DEV_MEM_NPAGES 2
static sigjmp_buf sigjmp_buf_env;
static char *file = "/dev/mem";
static off_t file_offset;
static int fd;
static void signal_handler(int sig)
{
@@ -35,18 +39,15 @@ static void signal_handler(int sig)
static int test_read_access(char *addr, size_t size, size_t pagesize)
{
size_t offs;
int ret;
if (signal(SIGSEGV, signal_handler) == SIG_ERR)
return -EINVAL;
ret = sigsetjmp(sigjmp_buf_env, 1);
if (!ret) {
for (offs = 0; offs < size; offs += pagesize)
/* Force a read that the compiler cannot optimize out. */
*((volatile char *)(addr + offs));
}
if (!ret)
force_read_pages(addr, size/pagesize, pagesize);
if (signal(SIGSEGV, SIG_DFL) == SIG_ERR)
return -EINVAL;
@@ -91,7 +92,7 @@ static int find_ram_target(off_t *offset,
break;
/* We need two pages. */
if (end > start + 2 * pagesize) {
if (end > start + DEV_MEM_NPAGES * pagesize) {
fclose(file);
*offset = start;
return 0;
@@ -100,11 +101,48 @@ static int find_ram_target(off_t *offset,
return -ENOENT;
}
static void pfnmap_init(void)
{
size_t pagesize = getpagesize();
size_t size = DEV_MEM_NPAGES * pagesize;
void *addr;
if (strncmp(file, "/dev/mem", strlen("/dev/mem")) == 0) {
int err = find_ram_target(&file_offset, pagesize);
if (err)
ksft_exit_skip("Cannot find ram target in '/proc/iomem': %s\n",
strerror(-err));
} else {
file_offset = 0;
}
fd = open(file, O_RDONLY);
if (fd < 0)
ksft_exit_skip("Cannot open '%s': %s\n", file, strerror(errno));
/*
* Make sure we can map the file, and perform some basic checks; skip
* the whole suite if anything goes wrong.
* A fresh mapping is then created for every test case by
* FIXTURE_SETUP(pfnmap).
*/
addr = mmap(NULL, size, PROT_READ, MAP_SHARED, fd, file_offset);
if (addr == MAP_FAILED)
ksft_exit_skip("Cannot mmap '%s': %s\n", file, strerror(errno));
if (!check_vmflag_pfnmap(addr))
ksft_exit_skip("Invalid file: '%s'. Not pfnmap'ed\n", file);
if (test_read_access(addr, size, pagesize))
ksft_exit_skip("Cannot read-access mmap'ed '%s'\n", file);
munmap(addr, size);
}
FIXTURE(pfnmap)
{
off_t offset;
size_t pagesize;
int dev_mem_fd;
char *addr1;
size_t size1;
char *addr2;
@@ -115,31 +153,10 @@ FIXTURE_SETUP(pfnmap)
{
self->pagesize = getpagesize();
if (strncmp(file, "/dev/mem", strlen("/dev/mem")) == 0) {
/* We'll require two physical pages throughout our tests ... */
if (find_ram_target(&self->offset, self->pagesize))
SKIP(return,
"Cannot find ram target in '/proc/iomem'\n");
} else {
self->offset = 0;
}
self->dev_mem_fd = open(file, O_RDONLY);
if (self->dev_mem_fd < 0)
SKIP(return, "Cannot open '%s'\n", file);
self->size1 = self->pagesize * 2;
self->size1 = DEV_MEM_NPAGES * self->pagesize;
self->addr1 = mmap(NULL, self->size1, PROT_READ, MAP_SHARED,
self->dev_mem_fd, self->offset);
if (self->addr1 == MAP_FAILED)
SKIP(return, "Cannot mmap '%s'\n", file);
if (!check_vmflag_pfnmap(self->addr1))
SKIP(return, "Invalid file: '%s'. Not pfnmap'ed\n", file);
/* ... and want to be able to read from them. */
if (test_read_access(self->addr1, self->size1, self->pagesize))
SKIP(return, "Cannot read-access mmap'ed '%s'\n", file);
fd, file_offset);
ASSERT_NE(self->addr1, MAP_FAILED);
self->size2 = 0;
self->addr2 = MAP_FAILED;
@@ -151,8 +168,6 @@ FIXTURE_TEARDOWN(pfnmap)
munmap(self->addr2, self->size2);
if (self->addr1 != MAP_FAILED)
munmap(self->addr1, self->size1);
if (self->dev_mem_fd >= 0)
close(self->dev_mem_fd);
}
TEST_F(pfnmap, madvise_disallowed)
@@ -192,7 +207,7 @@ TEST_F(pfnmap, munmap_split)
*/
self->size2 = self->pagesize;
self->addr2 = mmap(NULL, self->pagesize, PROT_READ, MAP_SHARED,
self->dev_mem_fd, self->offset);
fd, file_offset);
ASSERT_NE(self->addr2, MAP_FAILED);
}
@@ -262,8 +277,12 @@ int main(int argc, char **argv)
if (strcmp(argv[i], "--") == 0) {
if (i + 1 < argc && strlen(argv[i + 1]) > 0)
file = argv[i + 1];
return test_harness_run(i, argv);
argc = i;
break;
}
}
pfnmap_init();
return test_harness_run(argc, argv);
}


@@ -2,6 +2,10 @@
# SPDX-License-Identifier: GPL-2.0
# Please run as root
# IMPORTANT: If you add a new test CATEGORY please add a simple wrapper
# script so kunit knows to run it, and add it to the list below.
# If you do not, YOUR TESTS WILL NOT RUN IN THE CI.
# Kselftest framework requirement - SKIP code is 4.
ksft_skip=4
@@ -399,28 +403,8 @@ CATEGORY="hugetlb" run_test ./hugetlb-read-hwpoison
fi
if [ $VADDR64 -ne 0 ]; then
# set overcommit_policy as OVERCOMMIT_ALWAYS so that kernel
# allows high virtual address allocation requests independent
# of platform's physical memory.
if [ -x ./virtual_address_range ]; then
prev_policy=$(cat /proc/sys/vm/overcommit_memory)
echo 1 > /proc/sys/vm/overcommit_memory
CATEGORY="hugevm" run_test ./virtual_address_range
echo $prev_policy > /proc/sys/vm/overcommit_memory
fi
# va high address boundary switch test
ARCH_ARM64="arm64"
prev_nr_hugepages=$(cat /proc/sys/vm/nr_hugepages)
if [ "$ARCH" == "$ARCH_ARM64" ]; then
echo 6 > /proc/sys/vm/nr_hugepages
fi
CATEGORY="hugevm" run_test bash ./va_high_addr_switch.sh
if [ "$ARCH" == "$ARCH_ARM64" ]; then
echo $prev_nr_hugepages > /proc/sys/vm/nr_hugepages
fi
fi # VADDR64
# vmalloc stability smoke test


@@ -652,11 +652,7 @@ static int create_pagecache_thp_and_fd(const char *testfile, size_t fd_size,
}
madvise(*addr, fd_size, MADV_HUGEPAGE);
for (size_t i = 0; i < fd_size; i++) {
char *addr2 = *addr + i;
FORCE_READ(*addr2);
}
force_read_pages(*addr, fd_size / pmd_pagesize, pmd_pagesize);
if (!check_huge_file(*addr, fd_size / pmd_pagesize, pmd_pagesize)) {
ksft_print_msg("No large pagecache folio generated, please provide a filesystem supporting large folio\n");


@@ -13,6 +13,9 @@ TEST_NAME="vmalloc"
DRIVER="test_${TEST_NAME}"
NUM_CPUS=`grep -c ^processor /proc/cpuinfo`
# Default number of percpu objects we allocate:
NR_PCPU_OBJECTS=35000
# 1 if fails
exitcode=1
@@ -27,6 +30,8 @@ PERF_PARAM="sequential_test_order=1 test_repeat_count=3"
SMOKE_PARAM="test_loop_count=10000 test_repeat_count=10"
STRESS_PARAM="nr_threads=$NUM_CPUS test_repeat_count=20"
PCPU_OBJ_PARAM="nr_pcpu_objects=$NR_PCPU_OBJECTS"
check_test_requirements()
{
uid=$(id -u)
@@ -47,12 +52,30 @@ check_test_requirements()
fi
}
check_memory_requirement()
{
# The pcpu_alloc_test allocates nr_pcpu_objects per cpu. If the
# PAGE_SIZE is on the larger side it is easier to set a value
# that can cause oom events during testing. Since we are
# testing the functionality of vmalloc and not the oom-killer,
# calculate what is 90% of available memory and divide it by
# the number of online CPUs.
pages=$(($(getconf _AVPHYS_PAGES) * 90 / 100 / $NUM_CPUS))
if (($pages < $NR_PCPU_OBJECTS)); then
echo "Updated nr_pcpu_objects to 90% of available memory."
echo "nr_pcpu_objects is now set to: $pages."
PCPU_OBJ_PARAM="nr_pcpu_objects=$pages"
fi
}
run_performance_check()
{
echo "Run performance tests to evaluate how fast vmalloc allocation is."
echo "It runs all test cases on one single CPU with sequential order."
modprobe $DRIVER $PERF_PARAM > /dev/null 2>&1
check_memory_requirement
modprobe $DRIVER $PERF_PARAM $PCPU_OBJ_PARAM > /dev/null 2>&1
echo "Done."
echo "Check the kernel message buffer to see the summary."
}
@@ -63,7 +86,8 @@ run_stability_check()
echo "available test cases are run by NUM_CPUS workers simultaneously."
echo "It will take time, so be patient."
modprobe $DRIVER $STRESS_PARAM > /dev/null 2>&1
check_memory_requirement
modprobe $DRIVER $STRESS_PARAM $PCPU_OBJ_PARAM > /dev/null 2>&1
echo "Done."
echo "Check the kernel ring buffer to see the summary."
}
@@ -74,7 +98,8 @@ run_smoke_check()
echo "Please check $0 output how it can be used"
echo "for deep performance analysis as well as stress testing."
modprobe $DRIVER $SMOKE_PARAM > /dev/null 2>&1
check_memory_requirement
modprobe $DRIVER $SMOKE_PARAM $PCPU_OBJ_PARAM > /dev/null 2>&1
echo "Done."
echo "Check the kernel ring buffer to see the summary."
}


@@ -322,7 +322,7 @@ static int supported_arch(void)
int main(int argc, char **argv)
{
int ret;
int ret, hugetlb_ret = KSFT_PASS;
if (!supported_arch())
return KSFT_SKIP;
@@ -331,6 +331,10 @@ int main(int argc, char **argv)
ret = run_test(testcases, sz_testcases);
if (argc == 2 && !strcmp(argv[1], "--run-hugetlb"))
ret = run_test(hugetlb_testcases, sz_hugetlb_testcases);
return ret;
hugetlb_ret = run_test(hugetlb_testcases, sz_hugetlb_testcases);
if (ret == KSFT_PASS && hugetlb_ret == KSFT_PASS)
return KSFT_PASS;
else
return KSFT_FAIL;
}


@@ -61,9 +61,9 @@ check_supported_ppc64()
check_test_requirements()
{
# The test supports x86_64 and powerpc64. We currently have no useful
# eligibility check for powerpc64, and the test itself will reject other
# architectures.
# The test supports x86_64, powerpc64 and arm64. There's a check for arm64
# in va_high_addr_switch.c. The test itself will reject other architectures.
case `uname -m` in
"x86_64")
check_supported_x86_64
@@ -111,7 +111,9 @@ setup_nr_hugepages()
check_test_requirements
save_nr_hugepages
# 4 keep_mapped pages, and one for tmp usage
setup_nr_hugepages 5
# The HugeTLB tests require 6 pages
setup_nr_hugepages 6
./va_high_addr_switch --run-hugetlb
retcode=$?
restore_nr_hugepages
exit $retcode


@@ -1,260 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright 2017, Anshuman Khandual, IBM Corp.
*
* Works on architectures which support 128TB virtual
* address range and beyond.
*/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <errno.h>
#include <sys/prctl.h>
#include <sys/mman.h>
#include <sys/time.h>
#include <fcntl.h>
#include "vm_util.h"
#include "kselftest.h"
/*
* Maximum address range mapped with a single mmap()
* call is little bit more than 1GB. Hence 1GB is
* chosen as the single chunk size for address space
* mapping.
*/
#define SZ_1GB (1024 * 1024 * 1024UL)
#define SZ_1TB (1024 * 1024 * 1024 * 1024UL)
#define MAP_CHUNK_SIZE SZ_1GB
/*
* Address space till 128TB is mapped without any hint
* and is enabled by default. Address space beyond 128TB
* till 512TB is obtained by passing hint address as the
* first argument into mmap() system call.
*
* The process heap address space is divided into two
* different areas one below 128TB and one above 128TB
* till it reaches 512TB. One with size 128TB and the
* other being 384TB.
*
* On Arm64 the address space is 256TB and support for
* high mappings up to 4PB virtual address space has
* been added.
*
* On PowerPC64, the address space up to 128TB can be
* mapped without a hint. Addresses beyond 128TB, up to
* 4PB, can be mapped with a hint.
*
*/
#define NR_CHUNKS_128TB ((128 * SZ_1TB) / MAP_CHUNK_SIZE) /* Number of chunks for 128TB */
#define NR_CHUNKS_256TB (NR_CHUNKS_128TB * 2UL)
#define NR_CHUNKS_384TB (NR_CHUNKS_128TB * 3UL)
#define NR_CHUNKS_3840TB (NR_CHUNKS_128TB * 30UL)
#define NR_CHUNKS_3968TB (NR_CHUNKS_128TB * 31UL)
#define ADDR_MARK_128TB (1UL << 47) /* First address beyond 128TB */
#define ADDR_MARK_256TB (1UL << 48) /* First address beyond 256TB */
#ifdef __aarch64__
#define HIGH_ADDR_MARK ADDR_MARK_256TB
#define HIGH_ADDR_SHIFT 49
#define NR_CHUNKS_LOW NR_CHUNKS_256TB
#define NR_CHUNKS_HIGH NR_CHUNKS_3840TB
#elif defined(__PPC64__)
#define HIGH_ADDR_MARK ADDR_MARK_128TB
#define HIGH_ADDR_SHIFT 48
#define NR_CHUNKS_LOW NR_CHUNKS_128TB
#define NR_CHUNKS_HIGH NR_CHUNKS_3968TB
#else
#define HIGH_ADDR_MARK ADDR_MARK_128TB
#define HIGH_ADDR_SHIFT 48
#define NR_CHUNKS_LOW NR_CHUNKS_128TB
#define NR_CHUNKS_HIGH NR_CHUNKS_384TB
#endif
static char *hint_addr(void)
{
int bits = HIGH_ADDR_SHIFT + rand() % (63 - HIGH_ADDR_SHIFT);
return (char *) (1UL << bits);
}
static void validate_addr(char *ptr, int high_addr)
{
unsigned long addr = (unsigned long) ptr;
if (high_addr) {
if (addr < HIGH_ADDR_MARK)
ksft_exit_fail_msg("Bad address %lx\n", addr);
return;
}
if (addr > HIGH_ADDR_MARK)
ksft_exit_fail_msg("Bad address %lx\n", addr);
}
static void mark_range(char *ptr, size_t size)
{
if (prctl(PR_SET_VMA, PR_SET_VMA_ANON_NAME, ptr, size, "virtual_address_range") == -1) {
if (errno == EINVAL) {
/* Depends on CONFIG_ANON_VMA_NAME */
ksft_test_result_skip("prctl(PR_SET_VMA_ANON_NAME) not supported\n");
ksft_finished();
} else {
ksft_exit_fail_perror("prctl(PR_SET_VMA_ANON_NAME) failed\n");
}
}
}
static int is_marked_vma(const char *vma_name)
{
return vma_name && !strcmp(vma_name, "[anon:virtual_address_range]\n");
}
static int validate_lower_address_hint(void)
{
char *ptr;
ptr = mmap((void *) (1UL << 45), MAP_CHUNK_SIZE, PROT_READ |
PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
if (ptr == MAP_FAILED)
return 0;
return 1;
}
static int validate_complete_va_space(void)
{
unsigned long start_addr, end_addr, prev_end_addr;
char line[400];
char prot[6];
FILE *file;
int fd;
fd = open("va_dump", O_CREAT | O_WRONLY, 0600);
unlink("va_dump");
if (fd < 0) {
ksft_test_result_skip("cannot create or open dump file\n");
ksft_finished();
}
file = fopen("/proc/self/maps", "r");
if (file == NULL)
ksft_exit_fail_msg("cannot open /proc/self/maps\n");
prev_end_addr = 0;
while (fgets(line, sizeof(line), file)) {
const char *vma_name = NULL;
int vma_name_start = 0;
unsigned long hop;
if (sscanf(line, "%lx-%lx %4s %*s %*s %*s %n",
&start_addr, &end_addr, prot, &vma_name_start) != 3)
ksft_exit_fail_msg("cannot parse /proc/self/maps\n");
if (vma_name_start)
vma_name = line + vma_name_start;
/* end of userspace mappings; ignore vsyscall mapping */
if (start_addr & (1UL << 63))
return 0;
/* /proc/self/maps must have gaps less than MAP_CHUNK_SIZE */
if (start_addr - prev_end_addr >= MAP_CHUNK_SIZE)
return 1;
prev_end_addr = end_addr;
if (prot[0] != 'r')
continue;
if (check_vmflag_io((void *)start_addr))
continue;
/*
* Confirm whether MAP_CHUNK_SIZE chunk can be found or not.
* If write succeeds, no need to check MAP_CHUNK_SIZE - 1
* addresses after that. If the address was not held by this
* process, write would fail with errno set to EFAULT.
* Anyways, if write returns anything apart from 1, exit the
* program since that would mean a bug in /proc/self/maps.
*/
hop = 0;
while (start_addr + hop < end_addr) {
if (write(fd, (void *)(start_addr + hop), 1) != 1)
return 1;
lseek(fd, 0, SEEK_SET);
if (is_marked_vma(vma_name))
munmap((char *)(start_addr + hop), MAP_CHUNK_SIZE);
hop += MAP_CHUNK_SIZE;
}
}
return 0;
}
int main(int argc, char *argv[])
{
char *ptr[NR_CHUNKS_LOW];
char **hptr;
char *hint;
unsigned long i, lchunks, hchunks;
ksft_print_header();
ksft_set_plan(1);
for (i = 0; i < NR_CHUNKS_LOW; i++) {
ptr[i] = mmap(NULL, MAP_CHUNK_SIZE, PROT_READ,
MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
if (ptr[i] == MAP_FAILED) {
if (validate_lower_address_hint())
ksft_exit_fail_msg("mmap unexpectedly succeeded with hint\n");
break;
}
mark_range(ptr[i], MAP_CHUNK_SIZE);
validate_addr(ptr[i], 0);
}
lchunks = i;
hptr = (char **) calloc(NR_CHUNKS_HIGH, sizeof(char *));
if (hptr == NULL) {
ksft_test_result_skip("Memory constraint not fulfilled\n");
ksft_finished();
}
for (i = 0; i < NR_CHUNKS_HIGH; i++) {
hint = hint_addr();
hptr[i] = mmap(hint, MAP_CHUNK_SIZE, PROT_READ,
MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
if (hptr[i] == MAP_FAILED)
break;
mark_range(hptr[i], MAP_CHUNK_SIZE);
validate_addr(hptr[i], 1);
}
hchunks = i;
if (validate_complete_va_space()) {
ksft_test_result_fail("BUG in mmap() or /proc/self/maps\n");
ksft_finished();
}
for (i = 0; i < lchunks; i++)
munmap(ptr[i], MAP_CHUNK_SIZE);
for (i = 0; i < hchunks; i++)
munmap(hptr[i], MAP_CHUNK_SIZE);
free(hptr);
ksft_test_result_pass("Test\n");
ksft_finished();
}


@@ -54,6 +54,13 @@ static inline unsigned int pshift(void)
return __page_shift;
}
static inline void force_read_pages(char *addr, unsigned int nr_pages,
size_t pagesize)
{
for (unsigned int i = 0; i < nr_pages; i++)
FORCE_READ(addr[i * pagesize]);
}
bool detect_huge_zeropage(void);
/*


@@ -68,7 +68,7 @@ int main(int argc, char **argv)
int key = 0;
int *ptr = NULL;
int c = 0;
int size = 0;
size_t size = 0;
char path[256] = "";
enum method method = MAX_METHOD;
int want_sleep = 0, private = 0;
@@ -86,7 +86,10 @@ int main(int argc, char **argv)
while ((c = getopt(argc, argv, "s:p:m:owlrn")) != -1) {
switch (c) {
case 's':
size = atoi(optarg);
if (sscanf(optarg, "%zu", &size) != 1) {
perror("Invalid -s.");
exit_usage();
}
break;
case 'p':
strncpy(path, optarg, sizeof(path) - 1);
@@ -131,7 +134,7 @@ int main(int argc, char **argv)
}
if (size != 0) {
printf("Writing this size: %d\n", size);
printf("Writing this size: %zu\n", size);
} else {
errno = EINVAL;
perror("size not found");


@@ -600,6 +600,14 @@ struct mmap_action {
bool hide_from_rmap_until_complete :1;
};
/* Operations which modify VMAs. */
enum vma_operation {
VMA_OP_SPLIT,
VMA_OP_MERGE_UNFAULTED,
VMA_OP_REMAP,
VMA_OP_FORK,
};
/*
* Describes a VMA that is about to be mmap()'ed. Drivers may choose to
* manipulate mutable fields which will cause those fields to be updated in the
@@ -1157,7 +1165,8 @@ static inline int vma_dup_policy(struct vm_area_struct *src, struct vm_area_stru
return 0;
}
static inline int anon_vma_clone(struct vm_area_struct *dst, struct vm_area_struct *src)
static inline int anon_vma_clone(struct vm_area_struct *dst, struct vm_area_struct *src,
enum vma_operation operation)
{
/* For testing purposes. We indicate that an anon_vma has been cloned. */
if (src->anon_vma != NULL) {
@@ -1265,11 +1274,6 @@ static inline void i_mmap_unlock_write(struct address_space *mapping)
{
}
static inline void anon_vma_merge(struct vm_area_struct *vma,
struct vm_area_struct *next)
{
}
static inline int userfaultfd_unmap_prep(struct vm_area_struct *vma,
unsigned long start,
unsigned long end,