author | Minchan Kim <minchan@kernel.org> | 2016-11-24 08:03:20 +1100 |
---|---|---|
committer | Tao Huang <huangtao@rock-chips.com> | 2019-02-20 14:56:31 +0800 |
commit | 7df0273f2c3335a83bd62093aa938979e1903dd9 (patch) | |
tree | cf572d1d4df6ef27837c575d9606c5855495f197 /mm | |
parent | 79b42a1ce7b7b738d048da00800745653c27a1af (diff) |
BACKPORT: mm: make unreserve highatomic functions reliable
Currently, unreserve_highatomic_pageblock bails out as soon as it finds a
highatomic pageblock, regardless of whether any free pages were actually
moved out of it. This undermines the goal of the unreserve logic, which is
to save a process from OOM.
This patch makes the unreserve function bail out only if it actually moves
some pages into the !highatomic free list, avoiding such false positives.
Another potential problem is a race between page freeing and the reserve
highatomic function: pages can sit on the highatomic free list even though
their pageblock has a !highatomic migratetype. In that case,
unreserve_highatomic_pageblock can be a no-op if the count of the
highatomic reserve is less than pageblock_nr_pages. We solve this simply
by draining all of the reserved pages before the OOM. It acts as a
safeguard, exhausting the reserved pages before converging to OOM.
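The change above can be modeled outside the kernel. The following is a
minimal user-space sketch (not kernel code; toy_zone, toy_unreserve, and
PAGEBLOCK_NR_PAGES are illustrative stand-ins) of the two behavioral
changes: the "preserve one pageblock" guard is skipped when force is set,
and the scan keeps going instead of returning when no pages were moved.

```c
#include <stdbool.h>
#include <stddef.h>

#define PAGEBLOCK_NR_PAGES 512 /* illustrative stand-in for pageblock_nr_pages */

/* Toy model of a zone: a highatomic reserve counter plus the number of
 * pages actually sitting on the highatomic free list. */
struct toy_zone {
	unsigned long nr_reserved_highatomic;
	unsigned long movable_pages;
};

/* Models the patched logic: skip the "preserve at least one pageblock"
 * guard when force is true, and report success only once pages were
 * really moved off a highatomic free list. */
static bool toy_unreserve(struct toy_zone *zones, size_t nzones, bool force)
{
	for (size_t i = 0; i < nzones; i++) {
		struct toy_zone *z = &zones[i];

		/* Preserve at least one pageblock unless memory pressure
		 * is really high (force == true). */
		if (!force && z->nr_reserved_highatomic <= PAGEBLOCK_NR_PAGES)
			continue;

		if (z->movable_pages) {
			/* The old code returned here unconditionally; the
			 * patched code returns only when pages moved. */
			z->movable_pages = 0;
			return true;
		}
		/* Nothing moved from this zone: keep scanning rather than
		 * bailing out (the false-positive fix). */
	}
	return false;
}
```

With force == false a zone holding only one pageblock's worth of reserve is
skipped, so the call can fail even though reserves exist; the force == true
path, used just before OOM, drains that last reserve too.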
BUG=chrome-os-partner:60028
TEST=for i in $(seq 100); do ./launchBalloons.sh 6 700 30 >/dev/null; done
Conflicts:
mm/page_alloc.c
...this conflict resolution is trivial based on the conflict
resolution that was done as part of ("mm: try to exhaust highatomic
reserve before the OOM")
Link: http://lkml.kernel.org/r/1476259429-18279-5-git-send-email-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Michal Hocko <mhocko@suse.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Sangseok Lee <sangseok.lee@lge.com>
Cc: Michal Hocko <mhocko@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Douglas Anderson <dianders@chromium.org>
(cherry picked from akpm via linuxnext
commit df6e3cc2c9168bdbf3abecec2821a6f9ae1a2128)
Reviewed-on: https://chromium-review.googlesource.com/414640
Reviewed-by: Guenter Roeck <groeck@chromium.org>
Change-Id: Ib3e9764c0aaa3b43e3afd05192a8c43e225adb81
Signed-off-by: Jeffy Chen <jeffy.chen@rock-chips.com>
Diffstat (limited to 'mm')
-rw-r--r-- | mm/page_alloc.c | 24 |
1 file changed, 17 insertions(+), 7 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3c7afca291a4..6bd339ca6faf 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1741,8 +1741,12 @@ out_unlock:
  * potentially hurts the reliability of high-order allocations when under
  * intense memory pressure but failed atomic allocations should be easier
  * to recover from than an OOM.
+ *
+ * If @force is true, try to unreserve a pageblock even though highatomic
+ * pageblock is exhausted.
  */
-static bool unreserve_highatomic_pageblock(const struct alloc_context *ac)
+static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
+						bool force)
 {
 	struct zonelist *zonelist = ac->zonelist;
 	unsigned long flags;
@@ -1754,8 +1758,12 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac)
 
 	for_each_zone_zonelist_nodemask(zone, z, zonelist, ac->high_zoneidx,
 								ac->nodemask) {
-		/* Preserve at least one pageblock */
-		if (zone->nr_reserved_highatomic <= pageblock_nr_pages)
+		/*
+		 * Preserve at least one pageblock unless memory pressure
+		 * is really high.
+		 */
+		if (!force && zone->nr_reserved_highatomic <=
+					pageblock_nr_pages)
 			continue;
 
 		spin_lock_irqsave(&zone->lock, flags);
@@ -1800,8 +1808,10 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac)
 			 */
 			set_pageblock_migratetype(page, ac->migratetype);
 			ret = move_freepages_block(zone, page, ac->migratetype);
-			spin_unlock_irqrestore(&zone->lock, flags);
-			return ret;
+			if (ret) {
+				spin_unlock_irqrestore(&zone->lock, flags);
+				return ret;
+			}
 		}
 		spin_unlock_irqrestore(&zone->lock, flags);
 	}
@@ -2945,7 +2955,7 @@ retry:
 	 * Shrink them them and try again
 	 */
 	if (!page && !drained) {
-		unreserve_highatomic_pageblock(ac);
+		unreserve_highatomic_pageblock(ac, false);
 		drain_all_pages(NULL);
 		drained = true;
 		goto retry;
@@ -3215,7 +3225,7 @@ retry:
 	}
 
 	/* Before OOM, exhaust highatomic_reserve */
-	if (unreserve_highatomic_pageblock(ac))
+	if (unreserve_highatomic_pageblock(ac, true))
 		goto retry;
 
 	/* Reclaim has failed us, start killing things */