From e2d17b6af42255590420908077d0cd06b197917c Mon Sep 17 00:00:00 2001
From: CMY
Date: Fri, 11 Jul 2014 15:13:38 +0800
Subject: [PATCH] rk: mm: protect CMA pages in the memory fallback allocation

The "CMA memory busy" error may have a variety of causes. A major one
is: pages currently in use by the system were allocated from CMA, and
now have to be reclaimed for a new CMA allocation. Reclaiming them
requires allocating new pages to hold the data migrated out of the CMA
pages, but those new pages may themselves come from CMA when the
allocation falls back to the MIGRATE_CMA freelist. The contiguous
allocation then fails:

[ 1637.058550] alloc_contig_range test_pages_isolated(431a0, 431c0) failed

Protect the CMA pages in the memory fallback allocation path.
---
 mm/page_alloc.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 4cfdc64482c8..699bd7742877 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1050,6 +1050,15 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype)
 
 			page = list_entry(area->free_list[migratetype].next,
 					struct page, lru);
+
+#ifdef CONFIG_ARCH_ROCKCHIP
+			if (is_migrate_cma(migratetype)) {
+				int mt = get_pageblock_migratetype(page);
+				if (unlikely(is_migrate_isolate(mt)))
+					continue;
+			}
+#endif
+
 			area->nr_free--;
 
 			/*
-- 
2.34.1
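
For illustration only (not part of the patch): below is a minimal userspace
sketch of the idea, not kernel code. The structures, the isolate_range()
helper, and the page counts are simplified stand-ins for the real
mm/page_alloc.c machinery; it only models why skipping MIGRATE_ISOLATE
pageblocks during the MIGRATE_CMA fallback keeps alloc_contig_range()'s
target range reclaimable.

/* Simplified model: one "pageblock" per page, one migratetype per page. */
#include <stdbool.h>
#include <stdio.h>

enum migratetype { MIGRATE_MOVABLE, MIGRATE_CMA, MIGRATE_ISOLATE };

#define NR_PAGES 16

static enum migratetype pageblock_mt[NR_PAGES];
static bool page_free[NR_PAGES];

/* Stand-in for alloc_contig_range() isolating its target range. */
static void isolate_range(int start, int end)
{
	for (int i = start; i < end; i++)
		pageblock_mt[i] = MIGRATE_ISOLATE;
}

/*
 * Stand-in for the __rmqueue_fallback() loop: hand out a free page from
 * the CMA freelist, but refuse pages whose pageblock is currently
 * isolated -- the check the patch adds under CONFIG_ARCH_ROCKCHIP.
 */
static int fallback_alloc_from_cma(void)
{
	for (int i = 0; i < NR_PAGES; i++) {
		if (!page_free[i])
			continue;
		if (pageblock_mt[i] == MIGRATE_ISOLATE)
			continue;	/* skip pages being reclaimed for CMA */
		page_free[i] = false;
		return i;
	}
	return -1;
}

int main(void)
{
	for (int i = 0; i < NR_PAGES; i++) {
		pageblock_mt[i] = MIGRATE_CMA;
		page_free[i] = true;
	}

	/* alloc_contig_range() is trying to grab pages 0..7 */
	isolate_range(0, 8);

	/* a concurrent allocation falls back to the CMA freelist */
	int page = fallback_alloc_from_cma();
	printf("fallback allocation got page %d (outside the isolated range)\n",
	       page);

	return 0;
}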