sched: use maple tree iterator to walk VMAs

JIRA: https://issues.redhat.com/browse/RHEL-27736

commit 0cd4d02c32123afc25647f1d7123bc13b51ac56b
Author: Matthew Wilcox (Oracle) <willy@infradead.org>
Date:   Tue Sep 6 19:48:59 2022 +0000

    sched: use maple tree iterator to walk VMAs

    The linked list is slower than walking the VMAs using the maple tree.  We
    can't use the VMA iterator here because it doesn't support moving to an
    earlier position.

    Link: https://lkml.kernel.org/r/20220906194824.2110408-49-Liam.Howlett@oracle.com
    Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
    Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
    Acked-by: Vlastimil Babka <vbabka@suse.cz>
    Tested-by: Yu Zhao <yuzhao@google.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: David Hildenbrand <david@redhat.com>
    Cc: David Howells <dhowells@redhat.com>
    Cc: Davidlohr Bueso <dave@stgolabs.net>
    Cc: SeongJae Park <sj@kernel.org>
    Cc: Sven Schnelle <svens@linux.ibm.com>
    Cc: Will Deacon <will@kernel.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Signed-off-by: Chris von Recklinghausen <crecklin@redhat.com>

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2915,6 +2915,7 @@ static void task_numa_work(struct callback_head *work)
 	struct task_struct *p = current;
 	struct mm_struct *mm = p->mm;
 	u64 runtime = p->se.sum_exec_runtime;
+	MA_STATE(mas, &mm->mm_mt, 0, 0);
 	struct vm_area_struct *vma;
 	unsigned long start, end;
 	unsigned long nr_pte_updates = 0;
@@ -2971,13 +2972,16 @@ static void task_numa_work(struct callback_head *work)
 
 	if (!mmap_read_trylock(mm))
 		return;
-	vma = find_vma(mm, start);
+	mas_set(&mas, start);
+	vma = mas_find(&mas, ULONG_MAX);
 	if (!vma) {
 		reset_ptenuma_scan(p);
 		start = 0;
-		vma = mm->mmap;
+		mas_set(&mas, start);
+		vma = mas_find(&mas, ULONG_MAX);
 	}
-	for (; vma; vma = vma->vm_next) {
+
+	for (; vma; vma = mas_find(&mas, ULONG_MAX)) {
 		if (!vma_migratable(vma) || !vma_policy_mof(vma) ||
 			is_vm_hugetlb_page(vma) || (vma->vm_flags & VM_MIXEDMAP)) {
 			continue;
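
The hunks above replace the linked-list VMA walk with the maple tree state API. A minimal sketch of that pattern in isolation, for readers unfamiliar with it: MA_STATE(), mas_set(), and mas_find() are the maple tree interfaces the patch uses; the wrapper function, its name, and the empty loop body are illustrative only and not part of the commit. mas_set() is what allows repositioning the walk to an earlier index (here, rewinding to address 0), which, per the commit message, the VMA iterator does not support.

#include <linux/mm.h>
#include <linux/mm_types.h>
#include <linux/maple_tree.h>

/*
 * Illustrative only: walk every VMA in @mm starting at @start, wrapping
 * around to address 0 if nothing lies at or above @start -- the same
 * rewind task_numa_work() performs. Caller must hold mmap_read_lock(mm).
 */
static void walk_vmas_from(struct mm_struct *mm, unsigned long start)
{
	MA_STATE(mas, &mm->mm_mt, 0, 0);	/* maple state over mm's VMA tree */
	struct vm_area_struct *vma;

	mas_set(&mas, start);			/* position the walk at @start */
	vma = mas_find(&mas, ULONG_MAX);	/* first VMA at or after @start */
	if (!vma) {
		/* Nothing at or above @start: rewind to the start of the tree. */
		mas_set(&mas, 0);
		vma = mas_find(&mas, ULONG_MAX);
	}

	for (; vma; vma = mas_find(&mas, ULONG_MAX)) {
		/* Per-VMA work goes here (the NUMA scan in the hunk above). */
	}
}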