mm/page_alloc: batch the accounting updates in the bulk allocator

Now that the zone_statistics are simple counters that do not require
special protection, the bulk allocator's accounting updates can be
batched without the added complexity of protected RMW updates or xchg.
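
The payoff is on the caller side in mm/page_alloc.c (the second changed
file): instead of one statistics update per page, __alloc_pages_bulk()
counts the pages it takes from the per-cpu list and issues a single
batched update per call. A minimal sketch of the pattern, simplified
rather than the verbatim upstream code:

/*
 * Count pages as they are taken from the per-cpu list, then account
 * for all of them at once after the loop.
 */
nr_account = 0;
while (nr_populated < nr_pages) {
	page = __rmqueue_pcplist(zone, ac.migratetype, alloc_flags,
				 pcp, pcp_list);
	if (!page)
		break;

	nr_account++;		/* defer the accounting */
	prep_new_page(page, 0, gfp, 0);
	page_array[nr_populated++] = page;
}

/* One batched update instead of nr_account individual ones. */
__count_zid_vm_events(PGALLOC, zone_idx(zone), nr_account);
zone_statistics(ac.preferred_zoneref->zone, zone, nr_account);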

Link: https://lkml.kernel.org/r/20210512095458.30632-6-mgorman@techsingularity.net
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Chuck Lever <chuck.lever@oracle.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

commit 3e23060b2d (parent 3ac44a346a)
 2 files changed, 21 insertions(+), 17 deletions(-)

diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
--- a/include/linux/vmstat.h
+++ b/include/linux/vmstat.h
@@ -247,6 +247,14 @@ __count_numa_event(struct zone *zone, enum numa_stat_item item)
 	raw_cpu_inc(pzstats->vm_numa_event[item]);
 }
 
+static inline void
+__count_numa_events(struct zone *zone, enum numa_stat_item item, long delta)
+{
+	struct per_cpu_zonestat __percpu *pzstats = zone->per_cpu_zonestats;
+
+	raw_cpu_add(pzstats->vm_numa_event[item], delta);
+}
+
 extern unsigned long sum_zone_node_page_state(int node,
 				enum zone_stat_item item);
 extern unsigned long sum_zone_numa_event_state(int node, enum numa_stat_item item);
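
The consumer of the new helper is zone_statistics() in mm/page_alloc.c,
which gains a "long nr_account" argument in this series and forwards it.
A hedged sketch of that usage (simplified; details such as the static
branch that skips accounting when NUMA stats are disabled are omitted):

/*
 * Batched NUMA accounting: one raw_cpu_add() per counter per bulk
 * call, rather than one increment per allocated page.
 */
static inline void zone_statistics(struct zone *preferred_zone,
				   struct zone *z, long nr_account)
{
#ifdef CONFIG_NUMA
	enum numa_stat_item local_stat = NUMA_LOCAL;

	if (zone_to_nid(z) != numa_node_id())
		local_stat = NUMA_OTHER;

	if (zone_to_nid(z) == zone_to_nid(preferred_zone))
		__count_numa_events(z, NUMA_HIT, nr_account);
	else {
		__count_numa_events(z, NUMA_MISS, nr_account);
		__count_numa_events(preferred_zone, NUMA_FOREIGN,
				    nr_account);
	}
	__count_numa_events(z, local_stat, nr_account);
#endif
}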