* [PATCH 1/8] powerpc/mm: Make slice specific to book3s/64
2021-11-22 8:48 [PATCH 0/8] Convert powerpc to default topdown mmap layout Christophe Leroy
@ 2021-11-22 8:48 ` Christophe Leroy
2021-11-22 14:48 ` kernel test robot
2021-11-22 21:10 ` kernel test robot
2021-11-22 8:48 ` [PATCH 2/8] powerpc/mm: Remove CONFIG_PPC_MM_SLICES Christophe Leroy
` (7 subsequent siblings)
8 siblings, 2 replies; 22+ messages in thread
From: Christophe Leroy @ 2021-11-22 8:48 UTC (permalink / raw)
To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, alex
Cc: Christophe Leroy, linux-kernel, linuxppc-dev, linux-mm
Since commit 555904d07eef ("powerpc/8xx: MM_SLICE is not needed
anymore"), only book3s/64 selects CONFIG_PPC_MM_SLICES.
Move slice.c into mm/book3s64/
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
arch/powerpc/mm/Makefile | 1 -
arch/powerpc/mm/book3s64/Makefile | 1 +
arch/powerpc/mm/{ => book3s64}/slice.c | 0
arch/powerpc/mm/nohash/mmu_context.c | 2 --
arch/powerpc/mm/nohash/tlb.c | 4 ----
5 files changed, 1 insertion(+), 7 deletions(-)
rename arch/powerpc/mm/{ => book3s64}/slice.c (100%)
diff --git a/arch/powerpc/mm/Makefile b/arch/powerpc/mm/Makefile
index df8172da2301..d4c20484dad9 100644
--- a/arch/powerpc/mm/Makefile
+++ b/arch/powerpc/mm/Makefile
@@ -14,7 +14,6 @@ obj-$(CONFIG_PPC_MMU_NOHASH) += nohash/
obj-$(CONFIG_PPC_BOOK3S_32) += book3s32/
obj-$(CONFIG_PPC_BOOK3S_64) += book3s64/
obj-$(CONFIG_NUMA) += numa.o
-obj-$(CONFIG_PPC_MM_SLICES) += slice.o
obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o
obj-$(CONFIG_NOT_COHERENT_CACHE) += dma-noncoherent.o
obj-$(CONFIG_PPC_COPRO_BASE) += copro_fault.o
diff --git a/arch/powerpc/mm/book3s64/Makefile b/arch/powerpc/mm/book3s64/Makefile
index 1b56d3af47d4..30951668c684 100644
--- a/arch/powerpc/mm/book3s64/Makefile
+++ b/arch/powerpc/mm/book3s64/Makefile
@@ -18,6 +18,7 @@ obj-$(CONFIG_TRANSPARENT_HUGEPAGE) += hash_hugepage.o
obj-$(CONFIG_PPC_SUBPAGE_PROT) += subpage_prot.o
obj-$(CONFIG_SPAPR_TCE_IOMMU) += iommu_api.o
obj-$(CONFIG_PPC_PKEY) += pkeys.o
+obj-$(CONFIG_PPC_MM_SLICES) += slice.o
# Instrumenting the SLB fault path can lead to duplicate SLB entries
KCOV_INSTRUMENT_slb.o := n
diff --git a/arch/powerpc/mm/slice.c b/arch/powerpc/mm/book3s64/slice.c
similarity index 100%
rename from arch/powerpc/mm/slice.c
rename to arch/powerpc/mm/book3s64/slice.c
diff --git a/arch/powerpc/mm/nohash/mmu_context.c b/arch/powerpc/mm/nohash/mmu_context.c
index 44b2b5e7cabe..b8dfe66bdf18 100644
--- a/arch/powerpc/mm/nohash/mmu_context.c
+++ b/arch/powerpc/mm/nohash/mmu_context.c
@@ -320,8 +320,6 @@ int init_new_context(struct task_struct *t, struct mm_struct *mm)
* have id == 0) and don't alter context slice inherited via fork (which
* will have id != 0).
*/
- if (mm->context.id == 0)
- slice_init_new_context_exec(mm);
mm->context.id = MMU_NO_CONTEXT;
mm->context.active = 0;
pte_frag_set(&mm->context, NULL);
diff --git a/arch/powerpc/mm/nohash/tlb.c b/arch/powerpc/mm/nohash/tlb.c
index 89353d4f5604..4822dfd6c246 100644
--- a/arch/powerpc/mm/nohash/tlb.c
+++ b/arch/powerpc/mm/nohash/tlb.c
@@ -782,9 +782,5 @@ void __init early_init_mmu(void)
#ifdef CONFIG_PPC_47x
early_init_mmu_47x();
#endif
-
-#ifdef CONFIG_PPC_MM_SLICES
- mm_ctx_set_slb_addr_limit(&init_mm.context, SLB_ADDR_LIMIT_DEFAULT);
-#endif
}
#endif /* CONFIG_PPC64 */
--
2.33.1
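A quick way to double-check that nothing outside book3s64 still uses the slice
helpers after such a move is a tree-wide grep from the kernel source root; a
minimal sketch (the pathspec exclusions and the two function names, both taken
from elsewhere in this thread, are only assumptions):

    # look for remaining callers of the slice API outside mm/book3s64
    git grep -n -E "slice_init_new_context_exec|slice_get_unmapped_area" \
        -- arch/powerpc ':!arch/powerpc/mm/book3s64' ':!arch/powerpc/include'
    # see where CONFIG_PPC_MM_SLICES is still selected or tested
    git grep -n "PPC_MM_SLICES" -- arch/powerpc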
* Re: [PATCH 1/8] powerpc/mm: Make slice specific to book3s/64
2021-11-22 8:48 ` [PATCH 1/8] powerpc/mm: Make slice specific to book3s/64 Christophe Leroy
@ 2021-11-22 14:48 ` kernel test robot
2021-11-24 12:10 ` Christophe Leroy
2021-11-22 21:10 ` kernel test robot
1 sibling, 1 reply; 22+ messages in thread
From: kernel test robot @ 2021-11-22 14:48 UTC (permalink / raw)
To: Christophe Leroy, Benjamin Herrenschmidt, Paul Mackerras,
Michael Ellerman, alex
Cc: kbuild-all, linux-mm, linuxppc-dev, linux-kernel
[-- Attachment #1: Type: text/plain, Size: 25236 bytes --]
Hi Christophe,
I love your patch! Perhaps something to improve:
[auto build test WARNING on powerpc/next]
[also build test WARNING on hnaz-mm/master linus/master v5.16-rc2 next-20211118]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]
url: https://github.com/0day-ci/linux/commits/Christophe-Leroy/Convert-powerpc-to-default-topdown-mmap-layout/20211122-165115
base: https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next
config: powerpc64-randconfig-s031-20211122 (attached as .config)
compiler: powerpc64-linux-gcc (GCC) 11.2.0
reproduce:
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# apt-get install sparse
# sparse version: v0.6.4-dirty
# https://github.com/0day-ci/linux/commit/1d0b7cc86d08f25f595b52d8c39ba9ca1d29a30a
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Christophe-Leroy/Convert-powerpc-to-default-topdown-mmap-layout/20211122-165115
git checkout 1d0b7cc86d08f25f595b52d8c39ba9ca1d29a30a
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' ARCH=powerpc64
If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>
All warnings (new ones prefixed by >>):
arch/powerpc/mm/book3s64/slice.c: In function 'slice_get_unmapped_area':
>> arch/powerpc/mm/book3s64/slice.c:639:1: warning: the frame size of 1040 bytes is larger than 1024 bytes [-Wframe-larger-than=]
639 | }
| ^
vim +639 arch/powerpc/mm/book3s64/slice.c
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 428
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 429 unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 430 unsigned long flags, unsigned int psize,
34d07177b802e9 arch/powerpc/mm/slice.c Michel Lespinasse 2013-04-29 431 int topdown)
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 432 {
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 433 struct slice_mask good_mask;
f3207c124e7aa8 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2017-03-22 434 struct slice_mask potential_mask;
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 435 const struct slice_mask *maskp;
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 436 const struct slice_mask *compat_maskp = NULL;
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 437 int fixed = (flags & MAP_FIXED);
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 438 int pshift = max_t(int, mmu_psize_defs[psize].shift, PAGE_SHIFT);
6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin 2017-11-10 439 unsigned long page_size = 1UL << pshift;
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 440 struct mm_struct *mm = current->mm;
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 441 unsigned long newaddr;
f4ea6dcb08ea2c arch/powerpc/mm/slice.c Aneesh Kumar K.V 2017-03-30 442 unsigned long high_limit;
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 443
6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin 2017-11-10 444 high_limit = DEFAULT_MAP_WINDOW;
35602f82d0c765 arch/powerpc/mm/slice.c Nicholas Piggin 2017-11-10 445 if (addr >= high_limit || (fixed && (addr + len > high_limit)))
6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin 2017-11-10 446 high_limit = TASK_SIZE;
6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin 2017-11-10 447
6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin 2017-11-10 448 if (len > high_limit)
6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin 2017-11-10 449 return -ENOMEM;
6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin 2017-11-10 450 if (len & (page_size - 1))
6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin 2017-11-10 451 return -EINVAL;
6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin 2017-11-10 452 if (fixed) {
6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin 2017-11-10 453 if (addr & (page_size - 1))
6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin 2017-11-10 454 return -EINVAL;
6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin 2017-11-10 455 if (addr > high_limit - len)
6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin 2017-11-10 456 return -ENOMEM;
6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin 2017-11-10 457 }
6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin 2017-11-10 458
60458fba469a69 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2019-04-17 459 if (high_limit > mm_ctx_slb_addr_limit(&mm->context)) {
5709f7cfd83052 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 460 /*
5709f7cfd83052 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 461 * Increasing the slb_addr_limit does not require
5709f7cfd83052 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 462 * slice mask cache to be recalculated because it should
5709f7cfd83052 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 463 * be already initialised beyond the old address limit.
5709f7cfd83052 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 464 */
60458fba469a69 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2019-04-17 465 mm_ctx_set_slb_addr_limit(&mm->context, high_limit);
54be0b9c7c9888 arch/powerpc/mm/slice.c Michael Ellerman 2018-10-02 466
54be0b9c7c9888 arch/powerpc/mm/slice.c Michael Ellerman 2018-10-02 467 on_each_cpu(slice_flush_segments, mm, 1);
f4ea6dcb08ea2c arch/powerpc/mm/slice.c Aneesh Kumar K.V 2017-03-30 468 }
6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin 2017-11-10 469
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 470 /* Sanity checks */
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 471 BUG_ON(mm->task_size == 0);
60458fba469a69 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2019-04-17 472 BUG_ON(mm_ctx_slb_addr_limit(&mm->context) == 0);
764041e0f43cc7 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2016-04-29 473 VM_BUG_ON(radix_enabled());
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 474
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 475 slice_dbg("slice_get_unmapped_area(mm=%p, psize=%d...\n", mm, psize);
34d07177b802e9 arch/powerpc/mm/slice.c Michel Lespinasse 2013-04-29 476 slice_dbg(" addr=%lx, len=%lx, flags=%lx, topdown=%d\n",
34d07177b802e9 arch/powerpc/mm/slice.c Michel Lespinasse 2013-04-29 477 addr, len, flags, topdown);
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 478
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 479 /* If hint, make sure it matches our alignment restrictions */
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 480 if (!fixed && addr) {
b711531641038f arch/powerpc/mm/slice.c Christophe Leroy 2020-04-20 481 addr = ALIGN(addr, page_size);
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 482 slice_dbg(" aligned addr=%lx\n", addr);
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 483 /* Ignore hint if it's too large or overlaps a VMA */
3b4d07d2674f6b arch/powerpc/mm/slice.c Aneesh Kumar K.V 2019-02-26 484 if (addr > high_limit - len || addr < mmap_min_addr ||
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 485 !slice_area_is_free(mm, addr, len))
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 486 addr = 0;
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 487 }
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 488
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 489 /* First make up a "good" mask of slices that have the right size
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 490 * already
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 491 */
6f60cc98df2be7 arch/powerpc/mm/slice.c Christophe Leroy 2019-04-25 492 maskp = slice_mask_for_size(&mm->context, psize);
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 493
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 494 /*
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 495 * Here "good" means slices that are already the right page size,
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 496 * "compat" means slices that have a compatible page size (i.e.
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 497 * 4k in a 64k pagesize kernel), and "free" means slices without
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 498 * any VMAs.
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 499 *
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 500 * If MAP_FIXED:
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 501 * check if fits in good | compat => OK
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 502 * check if fits in good | compat | free => convert free
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 503 * else bad
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 504 * If have hint:
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 505 * check if hint fits in good => OK
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 506 * check if hint fits in good | free => convert free
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 507 * Otherwise:
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 508 * search in good, found => OK
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 509 * search in good | free, found => convert free
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 510 * search in good | compat | free, found => convert free.
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 511 */
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 512
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 513 /*
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 514 * If we support combo pages, we can allow 64k pages in 4k slices
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 515 * The mask copies could be avoided in most cases here if we had
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 516 * a pointer to good mask for the next code to use.
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 517 */
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 518 if (IS_ENABLED(CONFIG_PPC_64K_PAGES) && psize == MMU_PAGE_64K) {
6f60cc98df2be7 arch/powerpc/mm/slice.c Christophe Leroy 2019-04-25 519 compat_maskp = slice_mask_for_size(&mm->context, MMU_PAGE_4K);
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 520 if (fixed)
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 521 slice_or_mask(&good_mask, maskp, compat_maskp);
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 522 else
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 523 slice_copy_mask(&good_mask, maskp);
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 524 } else {
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 525 slice_copy_mask(&good_mask, maskp);
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 526 }
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 527
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 528 slice_print_mask(" good_mask", &good_mask);
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 529 if (compat_maskp)
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 530 slice_print_mask(" compat_mask", compat_maskp);
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 531
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 532 /* First check hint if it's valid or if we have MAP_FIXED */
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 533 if (addr != 0 || fixed) {
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 534 /* Check if we fit in the good mask. If we do, we just return,
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 535 * nothing else to do
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 536 */
ae3066bd1cbe58 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 537 if (slice_check_range_fits(mm, &good_mask, addr, len)) {
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 538 slice_dbg(" fits good !\n");
0dea04b288c066 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 539 newaddr = addr;
0dea04b288c066 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 540 goto return_addr;
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 541 }
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 542 } else {
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 543 /* Now let's see if we can find something in the existing
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 544 * slices for that size
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 545 */
830fd2d45aa116 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 546 newaddr = slice_find_area(mm, len, &good_mask,
f4ea6dcb08ea2c arch/powerpc/mm/slice.c Aneesh Kumar K.V 2017-03-30 547 psize, topdown, high_limit);
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 548 if (newaddr != -ENOMEM) {
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 549 /* Found within the good mask, we don't have to setup,
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 550 * we thus return directly
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 551 */
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 552 slice_dbg(" found area at 0x%lx\n", newaddr);
0dea04b288c066 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 553 goto return_addr;
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 554 }
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 555 }
7a06c66835f75f arch/powerpc/mm/slice.c Aneesh Kumar K.V 2017-11-10 556 /*
7a06c66835f75f arch/powerpc/mm/slice.c Aneesh Kumar K.V 2017-11-10 557 * We don't fit in the good mask, check what other slices are
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 558 * empty and thus can be converted
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 559 */
7a06c66835f75f arch/powerpc/mm/slice.c Aneesh Kumar K.V 2017-11-10 560 slice_mask_for_free(mm, &potential_mask, high_limit);
b8c93549142077 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 561 slice_or_mask(&potential_mask, &potential_mask, &good_mask);
830fd2d45aa116 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 562 slice_print_mask(" potential", &potential_mask);
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 563
ae3066bd1cbe58 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 564 if (addr != 0 || fixed) {
ae3066bd1cbe58 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 565 if (slice_check_range_fits(mm, &potential_mask, addr, len)) {
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 566 slice_dbg(" fits potential !\n");
0dea04b288c066 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 567 newaddr = addr;
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 568 goto convert;
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 569 }
ae3066bd1cbe58 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 570 }
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 571
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 572 /* If we have MAP_FIXED and failed the above steps, then error out */
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 573 if (fixed)
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 574 return -EBUSY;
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 575
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 576 slice_dbg(" search...\n");
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 577
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 578 /* If we had a hint that didn't work out, see if we can fit
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 579 * anywhere in the good area.
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 580 */
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 581 if (addr) {
0dea04b288c066 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 582 newaddr = slice_find_area(mm, len, &good_mask,
f4ea6dcb08ea2c arch/powerpc/mm/slice.c Aneesh Kumar K.V 2017-03-30 583 psize, topdown, high_limit);
0dea04b288c066 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 584 if (newaddr != -ENOMEM) {
0dea04b288c066 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 585 slice_dbg(" found area at 0x%lx\n", newaddr);
0dea04b288c066 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 586 goto return_addr;
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 587 }
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 588 }
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 589
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 590 /* Now let's see if we can find something in the existing slices
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 591 * for that size plus free slices
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 592 */
0dea04b288c066 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 593 newaddr = slice_find_area(mm, len, &potential_mask,
f4ea6dcb08ea2c arch/powerpc/mm/slice.c Aneesh Kumar K.V 2017-03-30 594 psize, topdown, high_limit);
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 595
203a1fa6286671 arch/powerpc/mm/slice.c Christophe Leroy 2019-04-25 596 if (IS_ENABLED(CONFIG_PPC_64K_PAGES) && newaddr == -ENOMEM &&
203a1fa6286671 arch/powerpc/mm/slice.c Christophe Leroy 2019-04-25 597 psize == MMU_PAGE_64K) {
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 598 /* retry the search with 4k-page slices included */
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 599 slice_or_mask(&potential_mask, &potential_mask, compat_maskp);
0dea04b288c066 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 600 newaddr = slice_find_area(mm, len, &potential_mask,
f4ea6dcb08ea2c arch/powerpc/mm/slice.c Aneesh Kumar K.V 2017-03-30 601 psize, topdown, high_limit);
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 602 }
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 603
0dea04b288c066 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 604 if (newaddr == -ENOMEM)
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 605 return -ENOMEM;
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 606
0dea04b288c066 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 607 slice_range_to_mask(newaddr, len, &potential_mask);
0dea04b288c066 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 608 slice_dbg(" found potential area at 0x%lx\n", newaddr);
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 609 slice_print_mask(" mask", &potential_mask);
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 610
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 611 convert:
f384796c40dc55 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 612 /*
f384796c40dc55 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 613 * Try to allocate the context before we do slice convert
f384796c40dc55 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 614 * so that we handle the context allocation failure gracefully.
f384796c40dc55 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 615 */
f384796c40dc55 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 616 if (need_extra_context(mm, newaddr)) {
f384796c40dc55 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 617 if (alloc_extended_context(mm, newaddr) < 0)
f384796c40dc55 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 618 return -ENOMEM;
f384796c40dc55 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 619 }
f384796c40dc55 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 620
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 621 slice_andnot_mask(&potential_mask, &potential_mask, &good_mask);
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 622 if (compat_maskp && !fixed)
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 623 slice_andnot_mask(&potential_mask, &potential_mask, compat_maskp);
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 624 if (potential_mask.low_slices ||
db3a528db41caa arch/powerpc/mm/slice.c Christophe Leroy 2018-02-22 625 (SLICE_NUM_HIGH &&
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 626 !bitmap_empty(potential_mask.high_slices, SLICE_NUM_HIGH))) {
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 627 slice_convert(mm, &potential_mask, psize);
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 628 if (psize > MMU_PAGE_BASE)
54be0b9c7c9888 arch/powerpc/mm/slice.c Michael Ellerman 2018-10-02 629 on_each_cpu(slice_flush_segments, mm, 1);
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 630 }
f384796c40dc55 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 631 return newaddr;
0dea04b288c066 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 632
0dea04b288c066 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 633 return_addr:
f384796c40dc55 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 634 if (need_extra_context(mm, newaddr)) {
f384796c40dc55 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 635 if (alloc_extended_context(mm, newaddr) < 0)
f384796c40dc55 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 636 return -ENOMEM;
f384796c40dc55 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 637 }
0dea04b288c066 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 638 return newaddr;
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 @639 }
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 640 EXPORT_SYMBOL_GPL(slice_get_unmapped_area);
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 641
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 33024 bytes --]
* Re: [PATCH 1/8] powerpc/mm: Make slice specific to book3s/64
2021-11-22 14:48 ` kernel test robot
@ 2021-11-24 12:10 ` Christophe Leroy
2021-11-24 13:49 ` Christophe Leroy
0 siblings, 1 reply; 22+ messages in thread
From: Christophe Leroy @ 2021-11-24 12:10 UTC (permalink / raw)
To: kernel test robot, Benjamin Herrenschmidt, Paul Mackerras,
Michael Ellerman, alex
Cc: kbuild-all, linux-mm, linuxppc-dev, linux-kernel
On 22/11/2021 at 15:48, kernel test robot wrote:
> Hi Christophe,
>
> I love your patch! Perhaps something to improve:
>
> [auto build test WARNING on powerpc/next]
> [also build test WARNING on hnaz-mm/master linus/master v5.16-rc2 next-20211118]
> [If your patch is applied to the wrong git tree, kindly drop us a note.
> And when submitting patch, we suggest to use '--base' as documented in
> https://git-scm.com/docs/git-format-patch]
>
> url: https://github.com/0day-ci/linux/commits/Christophe-Leroy/Convert-powerpc-to-default-topdown-mmap-layout/20211122-165115
> base: https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next
> config: powerpc64-randconfig-s031-20211122 (attached as .config)
> compiler: powerpc64-linux-gcc (GCC) 11.2.0
> reproduce:
> wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
> chmod +x ~/bin/make.cross
> # apt-get install sparse
> # sparse version: v0.6.4-dirty
> # https://github.com/0day-ci/linux/commit/1d0b7cc86d08f25f595b52d8c39ba9ca1d29a30a
> git remote add linux-review https://github.com/0day-ci/linux
> git fetch --no-tags linux-review Christophe-Leroy/Convert-powerpc-to-default-topdown-mmap-layout/20211122-165115
> git checkout 1d0b7cc86d08f25f595b52d8c39ba9ca1d29a30a
> # save the attached .config to linux build tree
> COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' ARCH=powerpc64
>
> If you fix the issue, kindly add following tag as appropriate
> Reported-by: kernel test robot <lkp@intel.com>
>
> All warnings (new ones prefixed by >>):
>
> arch/powerpc/mm/book3s64/slice.c: In function 'slice_get_unmapped_area':
>>> arch/powerpc/mm/book3s64/slice.c:639:1: warning: the frame size of 1040 bytes is larger than 1024 bytes [-Wframe-larger-than=]
> 639 | }
> | ^
The problem already existed when slice.c was in arch/powerpc/mm/.
This patch doesn't introduce the problem.
Christophe
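One way to confirm that the large frame predates the move is to ask GCC for
per-function stack usage on the relocated file; a rough sketch, assuming an
in-tree build with the reported .config and cross-compiler:

    # build only the moved file with stack-usage reporting enabled
    make ARCH=powerpc CROSS_COMPILE=powerpc64-linux- KCFLAGS=-fstack-usage \
        arch/powerpc/mm/book3s64/slice.o
    # GCC writes a .su file next to the object, listing stack bytes per function
    grep slice_get_unmapped_area arch/powerpc/mm/book3s64/slice.su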
* Re: [PATCH 1/8] powerpc/mm: Make slice specific to book3s/64
2021-11-24 12:10 ` Christophe Leroy
@ 2021-11-24 13:49 ` Christophe Leroy
2021-11-26 5:15 ` [kbuild-all] " Chen, Rong A
0 siblings, 1 reply; 22+ messages in thread
From: Christophe Leroy @ 2021-11-24 13:49 UTC (permalink / raw)
To: kernel test robot, Benjamin Herrenschmidt, Paul Mackerras,
Michael Ellerman, alex
Cc: linux-mm, kbuild-all, linuxppc-dev, linux-kernel
On 24/11/2021 at 13:10, Christophe Leroy wrote:
>
>
> On 22/11/2021 at 15:48, kernel test robot wrote:
>> Hi Christophe,
>>
>> I love your patch! Perhaps something to improve:
>>
>> [auto build test WARNING on powerpc/next]
>> [also build test WARNING on hnaz-mm/master linus/master v5.16-rc2
>> next-20211118]
>> [If your patch is applied to the wrong git tree, kindly drop us a note.
>> And when submitting patch, we suggest to use '--base' as documented in
>> https://git-scm.com/docs/git-format-patch]
>>
>> url:
>> https://github.com/0day-ci/linux/commits/Christophe-Leroy/Convert-powerpc-to-default-topdown-mmap-layout/20211122-165115
>>
>> base:
>> https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next
>> config: powerpc64-randconfig-s031-20211122 (attached as .config)
>> compiler: powerpc64-linux-gcc (GCC) 11.2.0
>> reproduce:
>> wget
>> https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross
>> -O ~/bin/make.cross
>> chmod +x ~/bin/make.cross
>> # apt-get install sparse
>> # sparse version: v0.6.4-dirty
>> #
>> https://github.com/0day-ci/linux/commit/1d0b7cc86d08f25f595b52d8c39ba9ca1d29a30a
>>
>> git remote add linux-review https://github.com/0day-ci/linux
>> git fetch --no-tags linux-review
>> Christophe-Leroy/Convert-powerpc-to-default-topdown-mmap-layout/20211122-165115
>>
>> git checkout 1d0b7cc86d08f25f595b52d8c39ba9ca1d29a30a
>> # save the attached .config to linux build tree
>> COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0
>> make.cross C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' ARCH=powerpc64
>>
>> If you fix the issue, kindly add following tag as appropriate
>> Reported-by: kernel test robot <lkp@intel.com>
>>
>> All warnings (new ones prefixed by >>):
>>
>> arch/powerpc/mm/book3s64/slice.c: In function
>> 'slice_get_unmapped_area':
>>>> arch/powerpc/mm/book3s64/slice.c:639:1: warning: the frame size of
>>>> 1040 bytes is larger than 1024 bytes [-Wframe-larger-than=]
>> 639 | }
>> | ^
>
>
> The problem already existed when slice.c was in arch/powerpc/mm/.
>
> This patch doesn't introduce the problem.
>
In fact the problem is really added by you, mister 'kernel test robot'.
CONFIG_FRAME_WARN is supposed to be 2048 on 64-bit architectures.
If the robot starts to reduce that value, it is on its own ...
config FRAME_WARN
        int "Warn for stack frames larger than"
        range 0 8192
        default 2048 if GCC_PLUGIN_LATENT_ENTROPY
        default 1536 if (!64BIT && (PARISC || XTENSA))
        default 1024 if (!64BIT && !PARISC)
        default 2048 if 64BIT
        help
          Tell gcc to warn at build time for stack frames larger than this.
          Setting this too low will cause a lot of warnings.
          Setting it to 0 disables the warning.
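For reference, a randconfig build can be pinned back to that 64-bit default with
the kernel's own config tooling; a minimal sketch (fragment name and ARCH value
are only assumptions):

    # force FRAME_WARN through a KCONFIG_ALLCONFIG fragment for randconfig
    echo 'CONFIG_FRAME_WARN=2048' > frame_warn.config
    make ARCH=powerpc KCONFIG_ALLCONFIG=frame_warn.config randconfig
    # or fix up an already-generated .config
    scripts/config --set-val FRAME_WARN 2048
    make ARCH=powerpc olddefconfig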
* Re: [kbuild-all] Re: [PATCH 1/8] powerpc/mm: Make slice specific to book3s/64
2021-11-24 13:49 ` Christophe Leroy
@ 2021-11-26 5:15 ` Chen, Rong A
0 siblings, 0 replies; 22+ messages in thread
From: Chen, Rong A @ 2021-11-26 5:15 UTC (permalink / raw)
To: Christophe Leroy, kernel test robot, Benjamin Herrenschmidt,
Paul Mackerras, Michael Ellerman, alex
Cc: linux-mm, kbuild-all, linuxppc-dev, linux-kernel
On 11/24/2021 9:49 PM, Christophe Leroy wrote:
>
>
> On 24/11/2021 at 13:10, Christophe Leroy wrote:
>>
>>
>> On 22/11/2021 at 15:48, kernel test robot wrote:
>>> Hi Christophe,
>>>
>>> I love your patch! Perhaps something to improve:
>>>
>>> [auto build test WARNING on powerpc/next]
>>> [also build test WARNING on hnaz-mm/master linus/master v5.16-rc2
>>> next-20211118]
>>> [If your patch is applied to the wrong git tree, kindly drop us a note.
>>> And when submitting patch, we suggest to use '--base' as documented in
>>> https://git-scm.com/docs/git-format-patch]
>>>
>>> url:
>>> https://github.com/0day-ci/linux/commits/Christophe-Leroy/Convert-powerpc-to-default-topdown-mmap-layout/20211122-165115
>>>
>>> base:
>>> https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next
>>> config: powerpc64-randconfig-s031-20211122 (attached as .config)
>>> compiler: powerpc64-linux-gcc (GCC) 11.2.0
>>> reproduce:
>>> wget
>>> https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross
>>> -O ~/bin/make.cross
>>> chmod +x ~/bin/make.cross
>>> # apt-get install sparse
>>> # sparse version: v0.6.4-dirty
>>> #
>>> https://github.com/0day-ci/linux/commit/1d0b7cc86d08f25f595b52d8c39ba9ca1d29a30a
>>>
>>> git remote add linux-review https://github.com/0day-ci/linux
>>> git fetch --no-tags linux-review
>>> Christophe-Leroy/Convert-powerpc-to-default-topdown-mmap-layout/20211122-165115
>>>
>>> git checkout 1d0b7cc86d08f25f595b52d8c39ba9ca1d29a30a
>>> # save the attached .config to linux build tree
>>> COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0
>>> make.cross C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__'
>>> ARCH=powerpc64
>>>
>>> If you fix the issue, kindly add following tag as appropriate
>>> Reported-by: kernel test robot <lkp@intel.com>
>>>
>>> All warnings (new ones prefixed by >>):
>>>
>>> arch/powerpc/mm/book3s64/slice.c: In function
>>> 'slice_get_unmapped_area':
>>>>> arch/powerpc/mm/book3s64/slice.c:639:1: warning: the frame size of
>>>>> 1040 bytes is larger than 1024 bytes [-Wframe-larger-than=]
>>> 639 | }
>>> | ^
>>
>>
>> The problem already existed when slice.c was in arch/powerpc/mm/.
>>
>> This patch doesn't introduce the problem.
>>
>
> In fact the problem is really added by you, mister 'kernel test robot'.
>
> CONFIG_FRAME_WARN is supposed to be 2048 on 64-bit architectures.
>
> If the robot starts to reduce that value, it is on its own ...
Hi Christophe,
Thanks for the information; we'll set the default value for FRAME_WARN
in randconfig tests.
Best Regards,
Rong Chen
>
>
> config FRAME_WARN
> int "Warn for stack frames larger than"
> range 0 8192
> default 2048 if GCC_PLUGIN_LATENT_ENTROPY
> default 1536 if (!64BIT && (PARISC || XTENSA))
> default 1024 if (!64BIT && !PARISC)
> default 2048 if 64BIT
> help
> Tell gcc to warn at build time for stack frames larger than this.
> Setting this too low will cause a lot of warnings.
> Setting it to 0 disables the warning.
> _______________________________________________
> kbuild-all mailing list -- kbuild-all@lists.01.org
> To unsubscribe send an email to kbuild-all-leave@lists.01.org
* Re: [PATCH 1/8] powerpc/mm: Make slice specific to book3s/64
2021-11-22 8:48 ` [PATCH 1/8] powerpc/mm: Make slice specific to book3s/64 Christophe Leroy
2021-11-22 14:48 ` kernel test robot
@ 2021-11-22 21:10 ` kernel test robot
2021-11-24 12:10 ` Christophe Leroy
1 sibling, 1 reply; 22+ messages in thread
From: kernel test robot @ 2021-11-22 21:10 UTC (permalink / raw)
To: Christophe Leroy, Benjamin Herrenschmidt, Paul Mackerras,
Michael Ellerman, alex
Cc: kbuild-all, linux-mm, linuxppc-dev, linux-kernel
[-- Attachment #1: Type: text/plain, Size: 25175 bytes --]
Hi Christophe,
I love your patch! Yet something to improve:
[auto build test ERROR on powerpc/next]
[also build test ERROR on hnaz-mm/master linus/master v5.16-rc2 next-20211118]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]
url: https://github.com/0day-ci/linux/commits/Christophe-Leroy/Convert-powerpc-to-default-topdown-mmap-layout/20211122-165115
base: https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next
config: powerpc64-randconfig-r021-20211122 (attached as .config)
compiler: powerpc64-linux-gcc (GCC) 11.2.0
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# https://github.com/0day-ci/linux/commit/1d0b7cc86d08f25f595b52d8c39ba9ca1d29a30a
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Christophe-Leroy/Convert-powerpc-to-default-topdown-mmap-layout/20211122-165115
git checkout 1d0b7cc86d08f25f595b52d8c39ba9ca1d29a30a
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross ARCH=powerpc
If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>
All errors (new ones prefixed by >>):
arch/powerpc/mm/book3s64/slice.c: In function 'slice_get_unmapped_area':
>> arch/powerpc/mm/book3s64/slice.c:639:1: error: the frame size of 1056 bytes is larger than 1024 bytes [-Werror=frame-larger-than=]
639 | }
| ^
cc1: all warnings being treated as errors
vim +639 arch/powerpc/mm/book3s64/slice.c
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 428
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 429 unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 430 unsigned long flags, unsigned int psize,
34d07177b802e9 arch/powerpc/mm/slice.c Michel Lespinasse 2013-04-29 431 int topdown)
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 432 {
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 433 struct slice_mask good_mask;
f3207c124e7aa8 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2017-03-22 434 struct slice_mask potential_mask;
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 435 const struct slice_mask *maskp;
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 436 const struct slice_mask *compat_maskp = NULL;
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 437 int fixed = (flags & MAP_FIXED);
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 438 int pshift = max_t(int, mmu_psize_defs[psize].shift, PAGE_SHIFT);
6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin 2017-11-10 439 unsigned long page_size = 1UL << pshift;
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 440 struct mm_struct *mm = current->mm;
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 441 unsigned long newaddr;
f4ea6dcb08ea2c arch/powerpc/mm/slice.c Aneesh Kumar K.V 2017-03-30 442 unsigned long high_limit;
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 443
6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin 2017-11-10 444 high_limit = DEFAULT_MAP_WINDOW;
35602f82d0c765 arch/powerpc/mm/slice.c Nicholas Piggin 2017-11-10 445 if (addr >= high_limit || (fixed && (addr + len > high_limit)))
6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin 2017-11-10 446 high_limit = TASK_SIZE;
6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin 2017-11-10 447
6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin 2017-11-10 448 if (len > high_limit)
6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin 2017-11-10 449 return -ENOMEM;
6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin 2017-11-10 450 if (len & (page_size - 1))
6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin 2017-11-10 451 return -EINVAL;
6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin 2017-11-10 452 if (fixed) {
6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin 2017-11-10 453 if (addr & (page_size - 1))
6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin 2017-11-10 454 return -EINVAL;
6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin 2017-11-10 455 if (addr > high_limit - len)
6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin 2017-11-10 456 return -ENOMEM;
6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin 2017-11-10 457 }
6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin 2017-11-10 458
60458fba469a69 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2019-04-17 459 if (high_limit > mm_ctx_slb_addr_limit(&mm->context)) {
5709f7cfd83052 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 460 /*
5709f7cfd83052 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 461 * Increasing the slb_addr_limit does not require
5709f7cfd83052 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 462 * slice mask cache to be recalculated because it should
5709f7cfd83052 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 463 * be already initialised beyond the old address limit.
5709f7cfd83052 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 464 */
60458fba469a69 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2019-04-17 465 mm_ctx_set_slb_addr_limit(&mm->context, high_limit);
54be0b9c7c9888 arch/powerpc/mm/slice.c Michael Ellerman 2018-10-02 466
54be0b9c7c9888 arch/powerpc/mm/slice.c Michael Ellerman 2018-10-02 467 on_each_cpu(slice_flush_segments, mm, 1);
f4ea6dcb08ea2c arch/powerpc/mm/slice.c Aneesh Kumar K.V 2017-03-30 468 }
6a72dc038b6152 arch/powerpc/mm/slice.c Nicholas Piggin 2017-11-10 469
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 470 /* Sanity checks */
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 471 BUG_ON(mm->task_size == 0);
60458fba469a69 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2019-04-17 472 BUG_ON(mm_ctx_slb_addr_limit(&mm->context) == 0);
764041e0f43cc7 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2016-04-29 473 VM_BUG_ON(radix_enabled());
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 474
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 475 slice_dbg("slice_get_unmapped_area(mm=%p, psize=%d...\n", mm, psize);
34d07177b802e9 arch/powerpc/mm/slice.c Michel Lespinasse 2013-04-29 476 slice_dbg(" addr=%lx, len=%lx, flags=%lx, topdown=%d\n",
34d07177b802e9 arch/powerpc/mm/slice.c Michel Lespinasse 2013-04-29 477 addr, len, flags, topdown);
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 478
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 479 /* If hint, make sure it matches our alignment restrictions */
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 480 if (!fixed && addr) {
b711531641038f arch/powerpc/mm/slice.c Christophe Leroy 2020-04-20 481 addr = ALIGN(addr, page_size);
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 482 slice_dbg(" aligned addr=%lx\n", addr);
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 483 /* Ignore hint if it's too large or overlaps a VMA */
3b4d07d2674f6b arch/powerpc/mm/slice.c Aneesh Kumar K.V 2019-02-26 484 if (addr > high_limit - len || addr < mmap_min_addr ||
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 485 !slice_area_is_free(mm, addr, len))
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 486 addr = 0;
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 487 }
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 488
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 489 /* First make up a "good" mask of slices that have the right size
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 490 * already
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 491 */
6f60cc98df2be7 arch/powerpc/mm/slice.c Christophe Leroy 2019-04-25 492 maskp = slice_mask_for_size(&mm->context, psize);
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 493
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 494 /*
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 495 * Here "good" means slices that are already the right page size,
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 496 * "compat" means slices that have a compatible page size (i.e.
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 497 * 4k in a 64k pagesize kernel), and "free" means slices without
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 498 * any VMAs.
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 499 *
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 500 * If MAP_FIXED:
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 501 * check if fits in good | compat => OK
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 502 * check if fits in good | compat | free => convert free
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 503 * else bad
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 504 * If have hint:
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 505 * check if hint fits in good => OK
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 506 * check if hint fits in good | free => convert free
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 507 * Otherwise:
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 508 * search in good, found => OK
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 509 * search in good | free, found => convert free
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 510 * search in good | compat | free, found => convert free.
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 511 */
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 512
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 513 /*
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 514 * If we support combo pages, we can allow 64k pages in 4k slices
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 515 * The mask copies could be avoided in most cases here if we had
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 516 * a pointer to good mask for the next code to use.
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 517 */
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 518 if (IS_ENABLED(CONFIG_PPC_64K_PAGES) && psize == MMU_PAGE_64K) {
6f60cc98df2be7 arch/powerpc/mm/slice.c Christophe Leroy 2019-04-25 519 compat_maskp = slice_mask_for_size(&mm->context, MMU_PAGE_4K);
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 520 if (fixed)
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 521 slice_or_mask(&good_mask, maskp, compat_maskp);
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 522 else
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 523 slice_copy_mask(&good_mask, maskp);
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 524 } else {
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 525 slice_copy_mask(&good_mask, maskp);
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 526 }
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 527
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 528 slice_print_mask(" good_mask", &good_mask);
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 529 if (compat_maskp)
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 530 slice_print_mask(" compat_mask", compat_maskp);
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 531
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 532 /* First check hint if it's valid or if we have MAP_FIXED */
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 533 if (addr != 0 || fixed) {
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 534 /* Check if we fit in the good mask. If we do, we just return,
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 535 * nothing else to do
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 536 */
ae3066bd1cbe58 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 537 if (slice_check_range_fits(mm, &good_mask, addr, len)) {
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 538 slice_dbg(" fits good !\n");
0dea04b288c066 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 539 newaddr = addr;
0dea04b288c066 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 540 goto return_addr;
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 541 }
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 542 } else {
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 543 /* Now let's see if we can find something in the existing
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 544 * slices for that size
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 545 */
830fd2d45aa116 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 546 newaddr = slice_find_area(mm, len, &good_mask,
f4ea6dcb08ea2c arch/powerpc/mm/slice.c Aneesh Kumar K.V 2017-03-30 547 psize, topdown, high_limit);
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 548 if (newaddr != -ENOMEM) {
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 549 /* Found within the good mask, we don't have to setup,
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 550 * we thus return directly
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 551 */
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 552 slice_dbg(" found area at 0x%lx\n", newaddr);
0dea04b288c066 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 553 goto return_addr;
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 554 }
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 555 }
7a06c66835f75f arch/powerpc/mm/slice.c Aneesh Kumar K.V 2017-11-10 556 /*
7a06c66835f75f arch/powerpc/mm/slice.c Aneesh Kumar K.V 2017-11-10 557 * We don't fit in the good mask, check what other slices are
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 558 * empty and thus can be converted
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 559 */
7a06c66835f75f arch/powerpc/mm/slice.c Aneesh Kumar K.V 2017-11-10 560 slice_mask_for_free(mm, &potential_mask, high_limit);
b8c93549142077 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 561 slice_or_mask(&potential_mask, &potential_mask, &good_mask);
830fd2d45aa116 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 562 slice_print_mask(" potential", &potential_mask);
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 563
ae3066bd1cbe58 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 564 if (addr != 0 || fixed) {
ae3066bd1cbe58 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 565 if (slice_check_range_fits(mm, &potential_mask, addr, len)) {
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 566 slice_dbg(" fits potential !\n");
0dea04b288c066 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 567 newaddr = addr;
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 568 goto convert;
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 569 }
ae3066bd1cbe58 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 570 }
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 571
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 572 /* If we have MAP_FIXED and failed the above steps, then error out */
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 573 if (fixed)
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 574 return -EBUSY;
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 575
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 576 slice_dbg(" search...\n");
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 577
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 578 /* If we had a hint that didn't work out, see if we can fit
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 579 * anywhere in the good area.
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 580 */
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 581 if (addr) {
0dea04b288c066 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 582 newaddr = slice_find_area(mm, len, &good_mask,
f4ea6dcb08ea2c arch/powerpc/mm/slice.c Aneesh Kumar K.V 2017-03-30 583 psize, topdown, high_limit);
0dea04b288c066 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 584 if (newaddr != -ENOMEM) {
0dea04b288c066 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 585 slice_dbg(" found area at 0x%lx\n", newaddr);
0dea04b288c066 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 586 goto return_addr;
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 587 }
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 588 }
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 589
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 590 /* Now let's see if we can find something in the existing slices
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 591 * for that size plus free slices
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 592 */
0dea04b288c066 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 593 newaddr = slice_find_area(mm, len, &potential_mask,
f4ea6dcb08ea2c arch/powerpc/mm/slice.c Aneesh Kumar K.V 2017-03-30 594 psize, topdown, high_limit);
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 595
203a1fa6286671 arch/powerpc/mm/slice.c Christophe Leroy 2019-04-25 596 if (IS_ENABLED(CONFIG_PPC_64K_PAGES) && newaddr == -ENOMEM &&
203a1fa6286671 arch/powerpc/mm/slice.c Christophe Leroy 2019-04-25 597 psize == MMU_PAGE_64K) {
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 598 /* retry the search with 4k-page slices included */
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 599 slice_or_mask(&potential_mask, &potential_mask, compat_maskp);
0dea04b288c066 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 600 newaddr = slice_find_area(mm, len, &potential_mask,
f4ea6dcb08ea2c arch/powerpc/mm/slice.c Aneesh Kumar K.V 2017-03-30 601 psize, topdown, high_limit);
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 602 }
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 603
0dea04b288c066 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 604 if (newaddr == -ENOMEM)
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 605 return -ENOMEM;
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 606
0dea04b288c066 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 607 slice_range_to_mask(newaddr, len, &potential_mask);
0dea04b288c066 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 608 slice_dbg(" found potential area at 0x%lx\n", newaddr);
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 609 slice_print_mask(" mask", &potential_mask);
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 610
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 611 convert:
f384796c40dc55 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 612 /*
f384796c40dc55 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 613 * Try to allocate the context before we do slice convert
f384796c40dc55 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 614 * so that we handle the context allocation failure gracefully.
f384796c40dc55 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 615 */
f384796c40dc55 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 616 if (need_extra_context(mm, newaddr)) {
f384796c40dc55 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 617 if (alloc_extended_context(mm, newaddr) < 0)
f384796c40dc55 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 618 return -ENOMEM;
f384796c40dc55 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 619 }
f384796c40dc55 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 620
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 621 slice_andnot_mask(&potential_mask, &potential_mask, &good_mask);
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 622 if (compat_maskp && !fixed)
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 623 slice_andnot_mask(&potential_mask, &potential_mask, compat_maskp);
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 624 if (potential_mask.low_slices ||
db3a528db41caa arch/powerpc/mm/slice.c Christophe Leroy 2018-02-22 625 (SLICE_NUM_HIGH &&
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 626 !bitmap_empty(potential_mask.high_slices, SLICE_NUM_HIGH))) {
d262bd5a739982 arch/powerpc/mm/slice.c Nicholas Piggin 2018-03-07 627 slice_convert(mm, &potential_mask, psize);
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 628 if (psize > MMU_PAGE_BASE)
54be0b9c7c9888 arch/powerpc/mm/slice.c Michael Ellerman 2018-10-02 629 on_each_cpu(slice_flush_segments, mm, 1);
3a8247cc2c8569 arch/powerpc/mm/slice.c Paul Mackerras 2008-06-18 630 }
f384796c40dc55 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 631 return newaddr;
0dea04b288c066 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 632
0dea04b288c066 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 633 return_addr:
f384796c40dc55 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 634 if (need_extra_context(mm, newaddr)) {
f384796c40dc55 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 635 if (alloc_extended_context(mm, newaddr) < 0)
f384796c40dc55 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 636 return -ENOMEM;
f384796c40dc55 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 637 }
0dea04b288c066 arch/powerpc/mm/slice.c Aneesh Kumar K.V 2018-03-26 638 return newaddr;
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 @639 }
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 640 EXPORT_SYMBOL_GPL(slice_get_unmapped_area);
d0f13e3c20b6fb arch/powerpc/mm/slice.c Benjamin Herrenschmidt 2007-05-08 641
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 43404 bytes --]
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH 1/8] powerpc/mm: Make slice specific to book3s/64
2021-11-22 21:10 ` kernel test robot
@ 2021-11-24 12:10 ` Christophe Leroy
0 siblings, 0 replies; 22+ messages in thread
From: Christophe Leroy @ 2021-11-24 12:10 UTC (permalink / raw)
To: kernel test robot, Benjamin Herrenschmidt, Paul Mackerras,
Michael Ellerman, alex
Cc: kbuild-all, linux-mm, linuxppc-dev, linux-kernel
On 22/11/2021 at 22:10, kernel test robot wrote:
> Hi Christophe,
>
> I love your patch! Yet something to improve:
>
> [auto build test ERROR on powerpc/next]
> [also build test ERROR on hnaz-mm/master linus/master v5.16-rc2 next-20211118]
> [If your patch is applied to the wrong git tree, kindly drop us a note.
> And when submitting patch, we suggest to use '--base' as documented in
> https://git-scm.com/docs/git-format-patch]
>
> url: https://github.com/0day-ci/linux/commits/Christophe-Leroy/Convert-powerpc-to-default-topdown-mmap-layout/20211122-165115
> base: https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next
> config: powerpc64-randconfig-r021-20211122 (attached as .config)
> compiler: powerpc64-linux-gcc (GCC) 11.2.0
> reproduce (this is a W=1 build):
> wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
> chmod +x ~/bin/make.cross
> # https://github.com/0day-ci/linux/commit/1d0b7cc86d08f25f595b52d8c39ba9ca1d29a30a
> git remote add linux-review https://github.com/0day-ci/linux
> git fetch --no-tags linux-review Christophe-Leroy/Convert-powerpc-to-default-topdown-mmap-layout/20211122-165115
> git checkout 1d0b7cc86d08f25f595b52d8c39ba9ca1d29a30a
> # save the attached .config to linux build tree
> COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross ARCH=powerpc
>
> If you fix the issue, kindly add following tag as appropriate
> Reported-by: kernel test robot <lkp@intel.com>
>
> All errors (new ones prefixed by >>):
>
> arch/powerpc/mm/book3s64/slice.c: In function 'slice_get_unmapped_area':
>>> arch/powerpc/mm/book3s64/slice.c:639:1: error: the frame size of 1056 bytes is larger than 1024 bytes [-Werror=frame-larger-than=]
> 639 | }
> | ^
> cc1: all warnings being treated as errors
>
>
The problem already existed when slice.c was in arch/powerpc/mm/
This patch doesn't introduce the problem.
Christophe
^ permalink raw reply [flat|nested] 22+ messages in thread
* [PATCH 2/8] powerpc/mm: Remove CONFIG_PPC_MM_SLICES
2021-11-22 8:48 [PATCH 0/8] Convert powerpc to default topdown mmap layout Christophe Leroy
2021-11-22 8:48 ` [PATCH 1/8] powerpc/mm: Make slice specific to book3s/64 Christophe Leroy
@ 2021-11-22 8:48 ` Christophe Leroy
2021-11-22 8:48 ` [PATCH 3/8] powerpc/mm: Remove asm/slice.h Christophe Leroy
` (6 subsequent siblings)
8 siblings, 0 replies; 22+ messages in thread
From: Christophe Leroy @ 2021-11-22 8:48 UTC (permalink / raw)
To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, alex
Cc: Christophe Leroy, linux-kernel, linuxppc-dev, linux-mm
CONFIG_PPC_MM_SLICES is always selected by book3s/64.
CONFIG_PPC_MM_SLICES is never selected by other platforms.
Remove it.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
arch/powerpc/include/asm/book3s/64/hash.h | 2 --
arch/powerpc/include/asm/hugetlb.h | 2 +-
arch/powerpc/include/asm/paca.h | 5 -----
arch/powerpc/include/asm/slice.h | 13 ++-----------
arch/powerpc/kernel/paca.c | 5 -----
arch/powerpc/mm/book3s64/Makefile | 3 +--
arch/powerpc/mm/book3s64/hash_utils.c | 14 --------------
arch/powerpc/mm/hugetlbpage.c | 4 ++--
arch/powerpc/platforms/Kconfig.cputype | 4 ----
9 files changed, 6 insertions(+), 46 deletions(-)
diff --git a/arch/powerpc/include/asm/book3s/64/hash.h b/arch/powerpc/include/asm/book3s/64/hash.h
index 674fe0e890dc..25f8e90985eb 100644
--- a/arch/powerpc/include/asm/book3s/64/hash.h
+++ b/arch/powerpc/include/asm/book3s/64/hash.h
@@ -99,10 +99,8 @@
* Defines the address of the vmemap area, in its own region on
* hash table CPUs.
*/
-#ifdef CONFIG_PPC_MM_SLICES
#define HAVE_ARCH_UNMAPPED_AREA
#define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN
-#endif /* CONFIG_PPC_MM_SLICES */
/* PTEIDX nibble */
#define _PTEIDX_SECONDARY 0x8
diff --git a/arch/powerpc/include/asm/hugetlb.h b/arch/powerpc/include/asm/hugetlb.h
index f18c543bc01d..83f067d4d2f3 100644
--- a/arch/powerpc/include/asm/hugetlb.h
+++ b/arch/powerpc/include/asm/hugetlb.h
@@ -24,7 +24,7 @@ static inline int is_hugepage_only_range(struct mm_struct *mm,
unsigned long addr,
unsigned long len)
{
- if (IS_ENABLED(CONFIG_PPC_MM_SLICES) && !radix_enabled())
+ if (IS_ENABLED(CONFIG_PPC_BOOK3S_64) && !radix_enabled())
return slice_is_hugepage_only_range(mm, addr, len);
return 0;
}
diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h
index dc05a862e72a..20bef2e8533b 100644
--- a/arch/powerpc/include/asm/paca.h
+++ b/arch/powerpc/include/asm/paca.h
@@ -149,13 +149,8 @@ struct paca_struct {
#endif /* CONFIG_PPC_BOOK3E */
#ifdef CONFIG_PPC_BOOK3S
-#ifdef CONFIG_PPC_MM_SLICES
unsigned char mm_ctx_low_slices_psize[BITS_PER_LONG / BITS_PER_BYTE];
unsigned char mm_ctx_high_slices_psize[SLICE_ARRAY_SIZE];
-#else
- u16 mm_ctx_user_psize;
- u16 mm_ctx_sllp;
-#endif
#endif
/*
diff --git a/arch/powerpc/include/asm/slice.h b/arch/powerpc/include/asm/slice.h
index 0bdd9c62eca0..be4acc52e8ec 100644
--- a/arch/powerpc/include/asm/slice.h
+++ b/arch/powerpc/include/asm/slice.h
@@ -10,7 +10,7 @@
struct mm_struct;
-#ifdef CONFIG_PPC_MM_SLICES
+#ifdef CONFIG_PPC_BOOK3S_64
#ifdef CONFIG_HUGETLB_PAGE
#define HAVE_ARCH_HUGETLB_UNMAPPED_AREA
@@ -30,16 +30,7 @@ void slice_set_range_psize(struct mm_struct *mm, unsigned long start,
void slice_init_new_context_exec(struct mm_struct *mm);
void slice_setup_new_exec(void);
-#else /* CONFIG_PPC_MM_SLICES */
-
-static inline void slice_init_new_context_exec(struct mm_struct *mm) {}
-
-static inline unsigned int get_slice_psize(struct mm_struct *mm, unsigned long addr)
-{
- return 0;
-}
-
-#endif /* CONFIG_PPC_MM_SLICES */
+#endif /* CONFIG_PPC_BOOK3S_64 */
#endif /* __ASSEMBLY__ */
diff --git a/arch/powerpc/kernel/paca.c b/arch/powerpc/kernel/paca.c
index 4208b4044d12..a61f6fdcfb00 100644
--- a/arch/powerpc/kernel/paca.c
+++ b/arch/powerpc/kernel/paca.c
@@ -346,16 +346,11 @@ void copy_mm_to_paca(struct mm_struct *mm)
#ifdef CONFIG_PPC_BOOK3S
mm_context_t *context = &mm->context;
-#ifdef CONFIG_PPC_MM_SLICES
VM_BUG_ON(!mm_ctx_slb_addr_limit(context));
memcpy(&get_paca()->mm_ctx_low_slices_psize, mm_ctx_low_slices(context),
LOW_SLICE_ARRAY_SZ);
memcpy(&get_paca()->mm_ctx_high_slices_psize, mm_ctx_high_slices(context),
TASK_SLICE_ARRAY_SZ(context));
-#else /* CONFIG_PPC_MM_SLICES */
- get_paca()->mm_ctx_user_psize = context->user_psize;
- get_paca()->mm_ctx_sllp = context->sllp;
-#endif
#else /* !CONFIG_PPC_BOOK3S */
return;
#endif
diff --git a/arch/powerpc/mm/book3s64/Makefile b/arch/powerpc/mm/book3s64/Makefile
index 30951668c684..f8562c79c59f 100644
--- a/arch/powerpc/mm/book3s64/Makefile
+++ b/arch/powerpc/mm/book3s64/Makefile
@@ -4,7 +4,7 @@ ccflags-y := $(NO_MINIMAL_TOC)
CFLAGS_REMOVE_slb.o = $(CC_FLAGS_FTRACE)
-obj-y += hash_pgtable.o hash_utils.o slb.o \
+obj-y += hash_pgtable.o hash_utils.o slb.o slice.o \
mmu_context.o pgtable.o hash_tlb.o
obj-$(CONFIG_PPC_NATIVE) += hash_native.o
obj-$(CONFIG_PPC_RADIX_MMU) += radix_pgtable.o radix_tlb.o
@@ -18,7 +18,6 @@ obj-$(CONFIG_TRANSPARENT_HUGEPAGE) += hash_hugepage.o
obj-$(CONFIG_PPC_SUBPAGE_PROT) += subpage_prot.o
obj-$(CONFIG_SPAPR_TCE_IOMMU) += iommu_api.o
obj-$(CONFIG_PPC_PKEY) += pkeys.o
-obj-$(CONFIG_PPC_MM_SLICES) += slice.o
# Instrumenting the SLB fault path can lead to duplicate SLB entries
KCOV_INSTRUMENT_slb.o := n
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
index cfd45245d009..1d09d4aeddbf 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -1165,7 +1165,6 @@ unsigned int hash_page_do_lazy_icache(unsigned int pp, pte_t pte, int trap)
return pp;
}
-#ifdef CONFIG_PPC_MM_SLICES
static unsigned int get_paca_psize(unsigned long addr)
{
unsigned char *psizes;
@@ -1182,12 +1181,6 @@ static unsigned int get_paca_psize(unsigned long addr)
return (psizes[index >> 1] >> (mask_index * 4)) & 0xF;
}
-#else
-unsigned int get_paca_psize(unsigned long addr)
-{
- return get_paca()->mm_ctx_user_psize;
-}
-#endif
/*
* Demote a segment to using 4k pages.
@@ -1611,7 +1604,6 @@ DEFINE_INTERRUPT_HANDLER_RAW(do_hash_fault)
return 0;
}
-#ifdef CONFIG_PPC_MM_SLICES
static bool should_hash_preload(struct mm_struct *mm, unsigned long ea)
{
int psize = get_slice_psize(mm, ea);
@@ -1628,12 +1620,6 @@ static bool should_hash_preload(struct mm_struct *mm, unsigned long ea)
return true;
}
-#else
-static bool should_hash_preload(struct mm_struct *mm, unsigned long ea)
-{
- return true;
-}
-#endif
static void hash_preload(struct mm_struct *mm, pte_t *ptep, unsigned long ea,
bool is_exec, unsigned long trap)
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index 82d8b368ca6d..10c3b2b8e9d8 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -542,7 +542,7 @@ struct page *follow_huge_pd(struct vm_area_struct *vma,
return page;
}
-#ifdef CONFIG_PPC_MM_SLICES
+#ifdef CONFIG_PPC_BOOK3S_64
unsigned long hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
unsigned long len, unsigned long pgoff,
unsigned long flags)
@@ -562,7 +562,7 @@ unsigned long hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
unsigned long vma_mmu_pagesize(struct vm_area_struct *vma)
{
/* With radix we don't use slice, so derive it from vma*/
- if (IS_ENABLED(CONFIG_PPC_MM_SLICES) && !radix_enabled()) {
+ if (IS_ENABLED(CONFIG_PPC_BOOK3S_64) && !radix_enabled()) {
unsigned int psize = get_slice_psize(vma->vm_mm, vma->vm_start);
return 1UL << mmu_psize_to_shift(psize);
diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
index a208997ade88..580339c0c5bc 100644
--- a/arch/powerpc/platforms/Kconfig.cputype
+++ b/arch/powerpc/platforms/Kconfig.cputype
@@ -105,7 +105,6 @@ config PPC_BOOK3S_64
select HAVE_MOVE_PMD
select HAVE_MOVE_PUD
select IRQ_WORK
- select PPC_MM_SLICES
select PPC_HAVE_KUEP
select PPC_HAVE_KUAP
@@ -432,9 +431,6 @@ config PPC_BOOK3E_MMU
def_bool y
depends on FSL_BOOKE || PPC_BOOK3E
-config PPC_MM_SLICES
- bool
-
config PPC_HAVE_PMU_SUPPORT
bool
--
2.33.1
^ permalink raw reply [flat|nested] 22+ messages in thread
* [PATCH 3/8] powerpc/mm: Remove asm/slice.h
2021-11-22 8:48 [PATCH 0/8] Convert powerpc to default topdown mmap layout Christophe Leroy
2021-11-22 8:48 ` [PATCH 1/8] powerpc/mm: Make slice specific to book3s/64 Christophe Leroy
2021-11-22 8:48 ` [PATCH 2/8] powerpc/mm: Remove CONFIG_PPC_MM_SLICES Christophe Leroy
@ 2021-11-22 8:48 ` Christophe Leroy
2021-11-22 8:48 ` [PATCH 4/8] powerpc/mm: Move vma_mmu_pagesize() and hugetlb_get_unmapped_area() to slice.c Christophe Leroy
` (5 subsequent siblings)
8 siblings, 0 replies; 22+ messages in thread
From: Christophe Leroy @ 2021-11-22 8:48 UTC (permalink / raw)
To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, alex
Cc: Christophe Leroy, linux-kernel, linuxppc-dev, linux-mm
Move the necessary declarations into asm/book3s/64/slice.h and
remove asm/slice.h.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
arch/powerpc/include/asm/book3s/64/hash.h | 3 ++
arch/powerpc/include/asm/book3s/64/mmu-hash.h | 1 +
arch/powerpc/include/asm/book3s/64/slice.h | 18 +++++++++
arch/powerpc/include/asm/page.h | 1 -
arch/powerpc/include/asm/slice.h | 37 -------------------
5 files changed, 22 insertions(+), 38 deletions(-)
delete mode 100644 arch/powerpc/include/asm/slice.h
diff --git a/arch/powerpc/include/asm/book3s/64/hash.h b/arch/powerpc/include/asm/book3s/64/hash.h
index 25f8e90985eb..27be22e6f848 100644
--- a/arch/powerpc/include/asm/book3s/64/hash.h
+++ b/arch/powerpc/include/asm/book3s/64/hash.h
@@ -99,6 +99,9 @@
* Defines the address of the vmemap area, in its own region on
* hash table CPUs.
*/
+#ifdef CONFIG_HUGETLB_PAGE
+#define HAVE_ARCH_HUGETLB_UNMAPPED_AREA
+#endif
#define HAVE_ARCH_UNMAPPED_AREA
#define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN
diff --git a/arch/powerpc/include/asm/book3s/64/mmu-hash.h b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
index 3004f3323144..b4b2ca111f75 100644
--- a/arch/powerpc/include/asm/book3s/64/mmu-hash.h
+++ b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
@@ -18,6 +18,7 @@
* complete pgtable.h but only a portion of it.
*/
#include <asm/book3s/64/pgtable.h>
+#include <asm/book3s/64/slice.h>
#include <asm/task_size_64.h>
#include <asm/cpu_has_feature.h>
diff --git a/arch/powerpc/include/asm/book3s/64/slice.h b/arch/powerpc/include/asm/book3s/64/slice.h
index f0d3194ba41b..5b0f7105bc8b 100644
--- a/arch/powerpc/include/asm/book3s/64/slice.h
+++ b/arch/powerpc/include/asm/book3s/64/slice.h
@@ -2,6 +2,8 @@
#ifndef _ASM_POWERPC_BOOK3S_64_SLICE_H
#define _ASM_POWERPC_BOOK3S_64_SLICE_H
+#ifndef __ASSEMBLY__
+
#define SLICE_LOW_SHIFT 28
#define SLICE_LOW_TOP (0x100000000ul)
#define SLICE_NUM_LOW (SLICE_LOW_TOP >> SLICE_LOW_SHIFT)
@@ -13,4 +15,20 @@
#define SLB_ADDR_LIMIT_DEFAULT DEFAULT_MAP_WINDOW_USER64
+struct mm_struct;
+
+unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
+ unsigned long flags, unsigned int psize,
+ int topdown);
+
+unsigned int get_slice_psize(struct mm_struct *mm, unsigned long addr);
+
+void slice_set_range_psize(struct mm_struct *mm, unsigned long start,
+ unsigned long len, unsigned int psize);
+
+void slice_init_new_context_exec(struct mm_struct *mm);
+void slice_setup_new_exec(void);
+
+#endif /* __ASSEMBLY__ */
+
#endif /* _ASM_POWERPC_BOOK3S_64_SLICE_H */
diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
index 254687258f42..62e0c6f12869 100644
--- a/arch/powerpc/include/asm/page.h
+++ b/arch/powerpc/include/asm/page.h
@@ -329,6 +329,5 @@ static inline unsigned long kaslr_offset(void)
#include <asm-generic/memory_model.h>
#endif /* __ASSEMBLY__ */
-#include <asm/slice.h>
#endif /* _ASM_POWERPC_PAGE_H */
diff --git a/arch/powerpc/include/asm/slice.h b/arch/powerpc/include/asm/slice.h
deleted file mode 100644
index be4acc52e8ec..000000000000
--- a/arch/powerpc/include/asm/slice.h
+++ /dev/null
@@ -1,37 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _ASM_POWERPC_SLICE_H
-#define _ASM_POWERPC_SLICE_H
-
-#ifdef CONFIG_PPC_BOOK3S_64
-#include <asm/book3s/64/slice.h>
-#endif
-
-#ifndef __ASSEMBLY__
-
-struct mm_struct;
-
-#ifdef CONFIG_PPC_BOOK3S_64
-
-#ifdef CONFIG_HUGETLB_PAGE
-#define HAVE_ARCH_HUGETLB_UNMAPPED_AREA
-#endif
-#define HAVE_ARCH_UNMAPPED_AREA
-#define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN
-
-unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
- unsigned long flags, unsigned int psize,
- int topdown);
-
-unsigned int get_slice_psize(struct mm_struct *mm, unsigned long addr);
-
-void slice_set_range_psize(struct mm_struct *mm, unsigned long start,
- unsigned long len, unsigned int psize);
-
-void slice_init_new_context_exec(struct mm_struct *mm);
-void slice_setup_new_exec(void);
-
-#endif /* CONFIG_PPC_BOOK3S_64 */
-
-#endif /* __ASSEMBLY__ */
-
-#endif /* _ASM_POWERPC_SLICE_H */
--
2.33.1
^ permalink raw reply [flat|nested] 22+ messages in thread
* [PATCH 4/8] powerpc/mm: Move vma_mmu_pagesize() and hugetlb_get_unmapped_area() to slice.c
2021-11-22 8:48 [PATCH 0/8] Convert powerpc to default topdown mmap layout Christophe Leroy
` (2 preceding siblings ...)
2021-11-22 8:48 ` [PATCH 3/8] powerpc/mm: Remove asm/slice.h Christophe Leroy
@ 2021-11-22 8:48 ` Christophe Leroy
2021-11-22 8:48 ` [PATCH 5/8] powerpc/mm: Call radix__arch_get_unmapped_area() from arch_get_unmapped_area() Christophe Leroy
` (4 subsequent siblings)
8 siblings, 0 replies; 22+ messages in thread
From: Christophe Leroy @ 2021-11-22 8:48 UTC (permalink / raw)
To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, alex
Cc: Christophe Leroy, linux-kernel, linuxppc-dev, linux-mm
vma_mmu_pagesize() is only required for slices,
otherwise there is a generic weak version.
hugetlb_get_unmapped_area() is dedicated to slices.
Move them to slice.c
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
arch/powerpc/mm/book3s64/slice.c | 22 ++++++++++++++++++++++
arch/powerpc/mm/hugetlbpage.c | 28 ----------------------------
2 files changed, 22 insertions(+), 28 deletions(-)
diff --git a/arch/powerpc/mm/book3s64/slice.c b/arch/powerpc/mm/book3s64/slice.c
index 82b45b1cb973..62848c5fa2d6 100644
--- a/arch/powerpc/mm/book3s64/slice.c
+++ b/arch/powerpc/mm/book3s64/slice.c
@@ -779,4 +779,26 @@ int slice_is_hugepage_only_range(struct mm_struct *mm, unsigned long addr,
return !slice_check_range_fits(mm, maskp, addr, len);
}
+
+unsigned long vma_mmu_pagesize(struct vm_area_struct *vma)
+{
+ /* With radix we don't use slice, so derive it from vma*/
+ if (radix_enabled())
+ return vma_kernel_pagesize(vma);
+
+ return 1UL << mmu_psize_to_shift(get_slice_psize(vma->vm_mm, vma->vm_start));
+}
+
+unsigned long hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
+ unsigned long len, unsigned long pgoff,
+ unsigned long flags)
+{
+ struct hstate *hstate = hstate_file(file);
+ int mmu_psize = shift_to_mmu_psize(huge_page_shift(hstate));
+
+ if (radix_enabled())
+ return radix__hugetlb_get_unmapped_area(file, addr, len, pgoff, flags);
+
+ return slice_get_unmapped_area(addr, len, flags, mmu_psize, 1);
+}
#endif
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index 10c3b2b8e9d8..eb9de09e49a3 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -542,34 +542,6 @@ struct page *follow_huge_pd(struct vm_area_struct *vma,
return page;
}
-#ifdef CONFIG_PPC_BOOK3S_64
-unsigned long hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
- unsigned long len, unsigned long pgoff,
- unsigned long flags)
-{
- struct hstate *hstate = hstate_file(file);
- int mmu_psize = shift_to_mmu_psize(huge_page_shift(hstate));
-
-#ifdef CONFIG_PPC_RADIX_MMU
- if (radix_enabled())
- return radix__hugetlb_get_unmapped_area(file, addr, len,
- pgoff, flags);
-#endif
- return slice_get_unmapped_area(addr, len, flags, mmu_psize, 1);
-}
-#endif
-
-unsigned long vma_mmu_pagesize(struct vm_area_struct *vma)
-{
- /* With radix we don't use slice, so derive it from vma*/
- if (IS_ENABLED(CONFIG_PPC_BOOK3S_64) && !radix_enabled()) {
- unsigned int psize = get_slice_psize(vma->vm_mm, vma->vm_start);
-
- return 1UL << mmu_psize_to_shift(psize);
- }
- return vma_kernel_pagesize(vma);
-}
-
bool __init arch_hugetlb_valid_size(unsigned long size)
{
int shift = __ffs(size);
--
2.33.1
^ permalink raw reply [flat|nested] 22+ messages in thread
* [PATCH 5/8] powerpc/mm: Call radix__arch_get_unmapped_area() from arch_get_unmapped_area()
2021-11-22 8:48 [PATCH 0/8] Convert powerpc to default topdown mmap layout Christophe Leroy
` (3 preceding siblings ...)
2021-11-22 8:48 ` [PATCH 4/8] powerpc/mm: Move vma_mmu_pagesize() and hugetlb_get_unmapped_area() to slice.c Christophe Leroy
@ 2021-11-22 8:48 ` Christophe Leroy
2021-11-22 8:48 ` [PATCH 6/8] mm: Allow arch specific arch_randomize_brk() with CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT Christophe Leroy
` (3 subsequent siblings)
8 siblings, 0 replies; 22+ messages in thread
From: Christophe Leroy @ 2021-11-22 8:48 UTC (permalink / raw)
To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, alex
Cc: Christophe Leroy, linux-kernel, linuxppc-dev, linux-mm
Instead of setting mm->get_unmapped_area() to either
arch_get_unmapped_area() or radix__arch_get_unmapped_area(),
always set it to arch_get_unmapped_area() and call
radix__arch_get_unmapped_area() from there when radix is enabled.
To keep radix__arch_get_unmapped_area() static, move it to slice.c
Do the same with radix__arch_get_unmapped_area_topdown()
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
arch/powerpc/mm/book3s64/slice.c | 104 ++++++++++++++++++++++++++
arch/powerpc/mm/mmap.c | 123 -------------------------------
2 files changed, 104 insertions(+), 123 deletions(-)
diff --git a/arch/powerpc/mm/book3s64/slice.c b/arch/powerpc/mm/book3s64/slice.c
index 62848c5fa2d6..8327a43d29cb 100644
--- a/arch/powerpc/mm/book3s64/slice.c
+++ b/arch/powerpc/mm/book3s64/slice.c
@@ -639,12 +639,113 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
}
EXPORT_SYMBOL_GPL(slice_get_unmapped_area);
+/*
+ * Same function as generic code used only for radix, because we don't need to overload
+ * the generic one. But we will have to duplicate, because hash select
+ * HAVE_ARCH_UNMAPPED_AREA
+ */
+static unsigned long
+radix__arch_get_unmapped_area(struct file *filp, unsigned long addr, unsigned long len,
+ unsigned long pgoff, unsigned long flags)
+{
+ struct mm_struct *mm = current->mm;
+ struct vm_area_struct *vma;
+ int fixed = (flags & MAP_FIXED);
+ unsigned long high_limit;
+ struct vm_unmapped_area_info info;
+
+ high_limit = DEFAULT_MAP_WINDOW;
+ if (addr >= high_limit || (fixed && (addr + len > high_limit)))
+ high_limit = TASK_SIZE;
+
+ if (len > high_limit)
+ return -ENOMEM;
+
+ if (fixed) {
+ if (addr > high_limit - len)
+ return -ENOMEM;
+ return addr;
+ }
+
+ if (addr) {
+ addr = PAGE_ALIGN(addr);
+ vma = find_vma(mm, addr);
+ if (high_limit - len >= addr && addr >= mmap_min_addr &&
+ (!vma || addr + len <= vm_start_gap(vma)))
+ return addr;
+ }
+
+ info.flags = 0;
+ info.length = len;
+ info.low_limit = mm->mmap_base;
+ info.high_limit = high_limit;
+ info.align_mask = 0;
+
+ return vm_unmapped_area(&info);
+}
+
+static unsigned long
+radix__arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
+ const unsigned long len, const unsigned long pgoff,
+ const unsigned long flags)
+{
+ struct vm_area_struct *vma;
+ struct mm_struct *mm = current->mm;
+ unsigned long addr = addr0;
+ int fixed = (flags & MAP_FIXED);
+ unsigned long high_limit;
+ struct vm_unmapped_area_info info;
+
+ high_limit = DEFAULT_MAP_WINDOW;
+ if (addr >= high_limit || (fixed && (addr + len > high_limit)))
+ high_limit = TASK_SIZE;
+
+ if (len > high_limit)
+ return -ENOMEM;
+
+ if (fixed) {
+ if (addr > high_limit - len)
+ return -ENOMEM;
+ return addr;
+ }
+
+ if (addr) {
+ addr = PAGE_ALIGN(addr);
+ vma = find_vma(mm, addr);
+ if (high_limit - len >= addr && addr >= mmap_min_addr &&
+ (!vma || addr + len <= vm_start_gap(vma)))
+ return addr;
+ }
+
+ info.flags = VM_UNMAPPED_AREA_TOPDOWN;
+ info.length = len;
+ info.low_limit = max(PAGE_SIZE, mmap_min_addr);
+ info.high_limit = mm->mmap_base + (high_limit - DEFAULT_MAP_WINDOW);
+ info.align_mask = 0;
+
+ addr = vm_unmapped_area(&info);
+ if (!(addr & ~PAGE_MASK))
+ return addr;
+ VM_BUG_ON(addr != -ENOMEM);
+
+ /*
+ * A failed mmap() very likely causes application failure,
+ * so fall back to the bottom-up function here. This scenario
+ * can happen with large stack limits and large mmap()
+ * allocations.
+ */
+ return radix__arch_get_unmapped_area(filp, addr0, len, pgoff, flags);
+}
+
unsigned long arch_get_unmapped_area(struct file *filp,
unsigned long addr,
unsigned long len,
unsigned long pgoff,
unsigned long flags)
{
+ if (radix_enabled())
+ return radix__arch_get_unmapped_area(filp, addr, len, pgoff, flags);
+
return slice_get_unmapped_area(addr, len, flags,
mm_ctx_user_psize(¤t->mm->context), 0);
}
@@ -655,6 +756,9 @@ unsigned long arch_get_unmapped_area_topdown(struct file *filp,
const unsigned long pgoff,
const unsigned long flags)
{
+ if (radix_enabled())
+ return radix__arch_get_unmapped_area_topdown(filp, addr0, len, pgoff, flags);
+
return slice_get_unmapped_area(addr0, len, flags,
mm_ctx_user_psize(¤t->mm->context), 1);
}
diff --git a/arch/powerpc/mm/mmap.c b/arch/powerpc/mm/mmap.c
index ae683fdc716c..5972d619d274 100644
--- a/arch/powerpc/mm/mmap.c
+++ b/arch/powerpc/mm/mmap.c
@@ -80,126 +80,6 @@ static inline unsigned long mmap_base(unsigned long rnd,
return PAGE_ALIGN(DEFAULT_MAP_WINDOW - gap - rnd);
}
-#ifdef CONFIG_PPC_RADIX_MMU
-/*
- * Same function as generic code used only for radix, because we don't need to overload
- * the generic one. But we will have to duplicate, because hash select
- * HAVE_ARCH_UNMAPPED_AREA
- */
-static unsigned long
-radix__arch_get_unmapped_area(struct file *filp, unsigned long addr,
- unsigned long len, unsigned long pgoff,
- unsigned long flags)
-{
- struct mm_struct *mm = current->mm;
- struct vm_area_struct *vma;
- int fixed = (flags & MAP_FIXED);
- unsigned long high_limit;
- struct vm_unmapped_area_info info;
-
- high_limit = DEFAULT_MAP_WINDOW;
- if (addr >= high_limit || (fixed && (addr + len > high_limit)))
- high_limit = TASK_SIZE;
-
- if (len > high_limit)
- return -ENOMEM;
-
- if (fixed) {
- if (addr > high_limit - len)
- return -ENOMEM;
- return addr;
- }
-
- if (addr) {
- addr = PAGE_ALIGN(addr);
- vma = find_vma(mm, addr);
- if (high_limit - len >= addr && addr >= mmap_min_addr &&
- (!vma || addr + len <= vm_start_gap(vma)))
- return addr;
- }
-
- info.flags = 0;
- info.length = len;
- info.low_limit = mm->mmap_base;
- info.high_limit = high_limit;
- info.align_mask = 0;
-
- return vm_unmapped_area(&info);
-}
-
-static unsigned long
-radix__arch_get_unmapped_area_topdown(struct file *filp,
- const unsigned long addr0,
- const unsigned long len,
- const unsigned long pgoff,
- const unsigned long flags)
-{
- struct vm_area_struct *vma;
- struct mm_struct *mm = current->mm;
- unsigned long addr = addr0;
- int fixed = (flags & MAP_FIXED);
- unsigned long high_limit;
- struct vm_unmapped_area_info info;
-
- high_limit = DEFAULT_MAP_WINDOW;
- if (addr >= high_limit || (fixed && (addr + len > high_limit)))
- high_limit = TASK_SIZE;
-
- if (len > high_limit)
- return -ENOMEM;
-
- if (fixed) {
- if (addr > high_limit - len)
- return -ENOMEM;
- return addr;
- }
-
- if (addr) {
- addr = PAGE_ALIGN(addr);
- vma = find_vma(mm, addr);
- if (high_limit - len >= addr && addr >= mmap_min_addr &&
- (!vma || addr + len <= vm_start_gap(vma)))
- return addr;
- }
-
- info.flags = VM_UNMAPPED_AREA_TOPDOWN;
- info.length = len;
- info.low_limit = max(PAGE_SIZE, mmap_min_addr);
- info.high_limit = mm->mmap_base + (high_limit - DEFAULT_MAP_WINDOW);
- info.align_mask = 0;
-
- addr = vm_unmapped_area(&info);
- if (!(addr & ~PAGE_MASK))
- return addr;
- VM_BUG_ON(addr != -ENOMEM);
-
- /*
- * A failed mmap() very likely causes application failure,
- * so fall back to the bottom-up function here. This scenario
- * can happen with large stack limits and large mmap()
- * allocations.
- */
- return radix__arch_get_unmapped_area(filp, addr0, len, pgoff, flags);
-}
-
-static void radix__arch_pick_mmap_layout(struct mm_struct *mm,
- unsigned long random_factor,
- struct rlimit *rlim_stack)
-{
- if (mmap_is_legacy(rlim_stack)) {
- mm->mmap_base = TASK_UNMAPPED_BASE;
- mm->get_unmapped_area = radix__arch_get_unmapped_area;
- } else {
- mm->mmap_base = mmap_base(random_factor, rlim_stack);
- mm->get_unmapped_area = radix__arch_get_unmapped_area_topdown;
- }
-}
-#else
-/* dummy */
-extern void radix__arch_pick_mmap_layout(struct mm_struct *mm,
- unsigned long random_factor,
- struct rlimit *rlim_stack);
-#endif
/*
* This function, called very early during the creation of a new
* process VM image, sets up which VM layout function to use:
@@ -211,9 +91,6 @@ void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack)
if (current->flags & PF_RANDOMIZE)
random_factor = arch_mmap_rnd();
- if (radix_enabled())
- return radix__arch_pick_mmap_layout(mm, random_factor,
- rlim_stack);
/*
* Fall back to the standard layout if the personality
* bit is set, or if the expected stack growth is unlimited:
--
2.33.1
^ permalink raw reply [flat|nested] 22+ messages in thread
* [PATCH 6/8] mm: Allow arch specific arch_randomize_brk() with CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
2021-11-22 8:48 [PATCH 0/8] Convert powerpc to default topdown mmap layout Christophe Leroy
` (4 preceding siblings ...)
2021-11-22 8:48 ` [PATCH 5/8] powerpc/mm: Call radix__arch_get_unmapped_area() from arch_get_unmapped_area() Christophe Leroy
@ 2021-11-22 8:48 ` Christophe Leroy
2021-11-22 11:22 ` Alex Ghiti
2021-11-23 0:22 ` kernel test robot
2021-11-22 8:48 ` [PATCH 7/8] powerpc/mm: Convert to default topdown mmap layout Christophe Leroy
` (2 subsequent siblings)
8 siblings, 2 replies; 22+ messages in thread
From: Christophe Leroy @ 2021-11-22 8:48 UTC (permalink / raw)
To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, alex
Cc: Christophe Leroy, linux-kernel, linuxppc-dev, linux-mm
Commit e7142bf5d231 ("arm64, mm: make randomization selected by
generic topdown mmap layout") introduced a default version of
arch_randomize_brk() provided when
CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT is selected.
powerpc could select CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
but needs to provide its own arch_randomize_brk().
In order to allow that, don't make
CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT select
CONFIG_ARCH_HAS_ELF_RANDOMIZE. Instead, ensure that
selecting CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT and
selecting CONFIG_ARCH_HAS_ELF_RANDOMIZE has the same effect.
Then only provide the default arch_randomize_brk() when the
architecture has not selected CONFIG_ARCH_HAS_ELF_RANDOMIZE.
Cc: Alexandre Ghiti <alex@ghiti.fr>
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
arch/Kconfig | 1 -
fs/binfmt_elf.c | 3 ++-
include/linux/elf-randomize.h | 3 ++-
mm/util.c | 2 ++
4 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/arch/Kconfig b/arch/Kconfig
index 26b8ed11639d..ef3ce947b7a1 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -1000,7 +1000,6 @@ config HAVE_ARCH_COMPAT_MMAP_BASES
config ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
bool
depends on MMU
- select ARCH_HAS_ELF_RANDOMIZE
config HAVE_STACK_VALIDATION
bool
diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
index f8c7f26f1fbb..28968a189a91 100644
--- a/fs/binfmt_elf.c
+++ b/fs/binfmt_elf.c
@@ -1287,7 +1287,8 @@ static int load_elf_binary(struct linux_binprm *bprm)
* (since it grows up, and may collide early with the stack
* growing down), and into the unused ELF_ET_DYN_BASE region.
*/
- if (IS_ENABLED(CONFIG_ARCH_HAS_ELF_RANDOMIZE) &&
+ if ((IS_ENABLED(CONFIG_ARCH_HAS_ELF_RANDOMIZE) ||
+ IS_ENABLED(CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT)) &&
elf_ex->e_type == ET_DYN && !interpreter) {
mm->brk = mm->start_brk = ELF_ET_DYN_BASE;
}
diff --git a/include/linux/elf-randomize.h b/include/linux/elf-randomize.h
index da0dbb7b6be3..1e471ca7caaf 100644
--- a/include/linux/elf-randomize.h
+++ b/include/linux/elf-randomize.h
@@ -4,7 +4,8 @@
struct mm_struct;
-#ifndef CONFIG_ARCH_HAS_ELF_RANDOMIZE
+#if !defined(CONFIG_ARCH_HAS_ELF_RANDOMIZE) && \
+ !defined(CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT)
static inline unsigned long arch_mmap_rnd(void) { return 0; }
# if defined(arch_randomize_brk) && defined(CONFIG_COMPAT_BRK)
# define compat_brk_randomized
diff --git a/mm/util.c b/mm/util.c
index e58151a61255..edb9e94cceb5 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -344,6 +344,7 @@ unsigned long randomize_stack_top(unsigned long stack_top)
}
#ifdef CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
+#ifndef CONFIG_ARCH_HAS_ELF_RANDOMIZE
unsigned long arch_randomize_brk(struct mm_struct *mm)
{
/* Is the current task 32bit ? */
@@ -352,6 +353,7 @@ unsigned long arch_randomize_brk(struct mm_struct *mm)
return randomize_page(mm->brk, SZ_1G);
}
+#endif
unsigned long arch_mmap_rnd(void)
{
--
2.33.1
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH 6/8] mm: Allow arch specific arch_randomize_brk() with CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
2021-11-22 8:48 ` [PATCH 6/8] mm: Allow arch specific arch_randomize_brk() with CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT Christophe Leroy
@ 2021-11-22 11:22 ` Alex Ghiti
2021-11-22 11:47 ` Christophe Leroy
2021-11-23 0:22 ` kernel test robot
1 sibling, 1 reply; 22+ messages in thread
From: Alex Ghiti @ 2021-11-22 11:22 UTC (permalink / raw)
To: Christophe Leroy, Benjamin Herrenschmidt, Paul Mackerras,
Michael Ellerman
Cc: linux-kernel, linuxppc-dev, linux-mm
Hi Christophe,
On 22/11/2021 at 09:48, Christophe Leroy wrote:
> Commit e7142bf5d231 ("arm64, mm: make randomization selected by
> generic topdown mmap layout") introduced a default version of
> arch_randomize_brk() provided when
> CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT is selected.
>
> powerpc could select CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
> but needs to provide its own arch_randomize_brk().
>
> In order to allow that, don't make
> CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT select
> CONFIG_ARCH_HAS_ELF_RANDOMIZE. Instead, ensure that
> selecting CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT and
> selecting CONFIG_ARCH_HAS_ELF_RANDOMIZE has the same effect.
This feels weird to me since if CONFIG_ARCH_HAS_ELF_RANDOMIZE is used
somewhere else at some point, it is not natural to add
CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT: can't we use a __weak
function or a new CONFIG_ARCH_HAS_RANDOMIZE_BRK?
Thanks,
Alex
>
> Then only provide the default arch_randomize_brk() when the
> architecture has not selected CONFIG_ARCH_HAS_ELF_RANDOMIZE.
>
> Cc: Alexandre Ghiti <alex@ghiti.fr>
> Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
> ---
> arch/Kconfig | 1 -
> fs/binfmt_elf.c | 3 ++-
> include/linux/elf-randomize.h | 3 ++-
> mm/util.c | 2 ++
> 4 files changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/arch/Kconfig b/arch/Kconfig
> index 26b8ed11639d..ef3ce947b7a1 100644
> --- a/arch/Kconfig
> +++ b/arch/Kconfig
> @@ -1000,7 +1000,6 @@ config HAVE_ARCH_COMPAT_MMAP_BASES
> config ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
> bool
> depends on MMU
> - select ARCH_HAS_ELF_RANDOMIZE
>
> config HAVE_STACK_VALIDATION
> bool
> diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
> index f8c7f26f1fbb..28968a189a91 100644
> --- a/fs/binfmt_elf.c
> +++ b/fs/binfmt_elf.c
> @@ -1287,7 +1287,8 @@ static int load_elf_binary(struct linux_binprm *bprm)
> * (since it grows up, and may collide early with the stack
> * growing down), and into the unused ELF_ET_DYN_BASE region.
> */
> - if (IS_ENABLED(CONFIG_ARCH_HAS_ELF_RANDOMIZE) &&
> + if ((IS_ENABLED(CONFIG_ARCH_HAS_ELF_RANDOMIZE) ||
> + IS_ENABLED(CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT)) &&
> elf_ex->e_type == ET_DYN && !interpreter) {
> mm->brk = mm->start_brk = ELF_ET_DYN_BASE;
> }
> diff --git a/include/linux/elf-randomize.h b/include/linux/elf-randomize.h
> index da0dbb7b6be3..1e471ca7caaf 100644
> --- a/include/linux/elf-randomize.h
> +++ b/include/linux/elf-randomize.h
> @@ -4,7 +4,8 @@
>
> struct mm_struct;
>
> -#ifndef CONFIG_ARCH_HAS_ELF_RANDOMIZE
> +#if !defined(CONFIG_ARCH_HAS_ELF_RANDOMIZE) && \
> + !defined(CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT)
> static inline unsigned long arch_mmap_rnd(void) { return 0; }
> # if defined(arch_randomize_brk) && defined(CONFIG_COMPAT_BRK)
> # define compat_brk_randomized
> diff --git a/mm/util.c b/mm/util.c
> index e58151a61255..edb9e94cceb5 100644
> --- a/mm/util.c
> +++ b/mm/util.c
> @@ -344,6 +344,7 @@ unsigned long randomize_stack_top(unsigned long stack_top)
> }
>
> #ifdef CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
> +#ifndef CONFIG_ARCH_HAS_ELF_RANDOMIZE
> unsigned long arch_randomize_brk(struct mm_struct *mm)
> {
> /* Is the current task 32bit ? */
> @@ -352,6 +353,7 @@ unsigned long arch_randomize_brk(struct mm_struct *mm)
>
> return randomize_page(mm->brk, SZ_1G);
> }
> +#endif
>
> unsigned long arch_mmap_rnd(void)
> {
>
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH 6/8] mm: Allow arch specific arch_randomize_brk() with CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
2021-11-22 11:22 ` Alex Ghiti
@ 2021-11-22 11:47 ` Christophe Leroy
2021-11-22 12:57 ` Alexandre ghiti
0 siblings, 1 reply; 22+ messages in thread
From: Christophe Leroy @ 2021-11-22 11:47 UTC (permalink / raw)
To: Alex Ghiti, Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman
Cc: linux-kernel, linuxppc-dev, linux-mm
On 22/11/2021 at 12:22, Alex Ghiti wrote:
> Hi Christophe,
>
> On 22/11/2021 at 09:48, Christophe Leroy wrote:
>> Commit e7142bf5d231 ("arm64, mm: make randomization selected by
>> generic topdown mmap layout") introduced a default version of
>> arch_randomize_brk() provided when
>> CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT is selected.
>>
>> powerpc could select CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
>> but needs to provide its own arch_randomize_brk().
>>
>> In order to allow that, don't make
>> CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT select
>> CONFIG_ARCH_HAS_ELF_RANDOMIZE. Instead, ensure that
>> selecting CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT and
>> selecting CONFIG_ARCH_HAS_ELF_RANDOMIZE has the same effect.
>
> This feels weird to me since if CONFIG_ARCH_HAS_ELF_RANDOMIZE is used
> somewhere else at some point, it is not natural to add
> CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT: can't we use a __weak
> function or a new CONFIG_ARCH_HAS_RANDOMIZE_BRK?
Yes I also found things a bit weird.
CONFIG_ARCH_HAS_RANDOMIZE_BRK could be an idea but how different would
it be from CONFIG_ARCH_HAS_ELF_RANDOMIZE ? In fact I find it weird that
CONFIG_ARCH_HAS_ELF_RANDOMIZE is selected by
CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT and not by the arch itself.
On the other hand CONFIG_ARCH_HAS_ELF_RANDOMIZE also handles
arch_mmap_rnd() and here we are talking about arch_randomize_brk() only.
In the beginning I was thinking about adding a
CONFIG_ARCH_WANT_DEFAULT_RANDOMIZE_BRK, but that would have meant adding it to
the few other arches selecting CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT.
So I think I will go for the __weak function option.
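Something like this (rough sketch only, not tested; the body is the current
generic helper from mm/util.c, with the 32-bit branch reconstructed from
memory, so to be double-checked):

	/* mm/util.c: keep the generic default but let an arch override it */
	unsigned long __weak arch_randomize_brk(struct mm_struct *mm)
	{
		/* Is the current task 32bit ? */
		if (!IS_ENABLED(CONFIG_64BIT) || is_compat_task())
			return randomize_page(mm->brk, SZ_32M);

		return randomize_page(mm->brk, SZ_1G);
	}

powerpc would then provide its own (non-weak) arch_randomize_brk() which
overrides the __weak one at link time, and the binfmt_elf.c and
elf-randomize.h changes in this patch should not be needed anymore.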
Thanks
Christophe
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH 6/8] mm: Allow arch specific arch_randomize_brk() with CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
2021-11-22 11:47 ` Christophe Leroy
@ 2021-11-22 12:57 ` Alexandre ghiti
0 siblings, 0 replies; 22+ messages in thread
From: Alexandre ghiti @ 2021-11-22 12:57 UTC (permalink / raw)
To: Christophe Leroy, Benjamin Herrenschmidt, Paul Mackerras,
Michael Ellerman
Cc: linux-kernel, linuxppc-dev, linux-mm
On 11/22/21 12:47, Christophe Leroy wrote:
>
>
> On 22/11/2021 at 12:22, Alex Ghiti wrote:
>> Hi Christophe,
>>
>> On 22/11/2021 at 09:48, Christophe Leroy wrote:
>>> Commit e7142bf5d231 ("arm64, mm: make randomization selected by
>>> generic topdown mmap layout") introduced a default version of
>>> arch_randomize_brk() provided when
>>> CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT is selected.
>>>
>>> powerpc could select CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
>>> but needs to provide its own arch_randomize_brk().
>>>
>>> In order to allow that, don't make
>>> CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT select
>>> CONFIG_ARCH_HAS_ELF_RANDOMIZE. Instead, ensure that
>>> selecting CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT and
>>> selecting CONFIG_ARCH_HAS_ELF_RANDOMIZE has the same effect.
>>
>> This feels weird to me since if CONFIG_ARCH_HAS_ELF_RANDOMIZE is used
>> somewhere else at some point, it is not natural to add
>> CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT: can't we use a __weak
>> function or a new CONFIG_ARCH_HAS_RANDOMIZE_BRK?
>
>
> Yes I also found things a bit weird.
>
> CONFIG_ARCH_HAS_RANDOMIZE_BRK could be an idea but how different would
> it be from CONFIG_ARCH_HAS_ELF_RANDOMIZE? In fact I find it weird
> that CONFIG_ARCH_HAS_ELF_RANDOMIZE is selected by
> CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT and not by the arch itself.
IIRC, this was a request from Kees Cook who wanted to enforce this
security measure.
>
> On the other hand CONFIG_ARCH_HAS_ELF_RANDOMIZE also handles
> arch_mmap_rnd() and here we are talking about arch_randomize_brk() only.
>
> In the beginning I was thinking about adding a
> CONFIG_ARCH_WANT_DEFAULT_RANDOMIZE_BRK, but that would have meant adding it
> to the few other arches selecting
> CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT.
>
> So I think I will go for the __weak function option.
Ok, thanks.
Alex
>
> Thanks
> Christophe
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH 6/8] mm: Allow arch specific arch_randomize_brk() with CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
2021-11-22 8:48 ` [PATCH 6/8] mm: Allow arch specific arch_randomize_brk() with CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT Christophe Leroy
2021-11-22 11:22 ` Alex Ghiti
@ 2021-11-23 0:22 ` kernel test robot
1 sibling, 0 replies; 22+ messages in thread
From: kernel test robot @ 2021-11-23 0:22 UTC (permalink / raw)
To: Christophe Leroy, Benjamin Herrenschmidt, Paul Mackerras,
Michael Ellerman, alex
Cc: kbuild-all, Christophe Leroy, linux-kernel, linuxppc-dev, linux-mm
[-- Attachment #1: Type: text/plain, Size: 1798 bytes --]
Hi Christophe,
I love your patch! Yet something to improve:
[auto build test ERROR on powerpc/next]
[also build test ERROR on hnaz-mm/master linus/master v5.16-rc2 next-20211118]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]
url: https://github.com/0day-ci/linux/commits/Christophe-Leroy/Convert-powerpc-to-default-topdown-mmap-layout/20211122-165115
base: https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next
config: arm-randconfig-r005-20211122 (attached as .config)
compiler: arm-linux-gnueabi-gcc (GCC) 11.2.0
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# https://github.com/0day-ci/linux/commit/e5949ff1a8e5cae8e9ac2ec3a39849bf2e73eb34
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Christophe-Leroy/Convert-powerpc-to-default-topdown-mmap-layout/20211122-165115
git checkout e5949ff1a8e5cae8e9ac2ec3a39849bf2e73eb34
# save the attached .config to linux build tree
mkdir build_dir
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross O=build_dir ARCH=arm SHELL=/bin/bash
If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>
All errors (new ones prefixed by >>):
arm-linux-gnueabi-ld: fs/binfmt_elf.o: in function `load_elf_binary':
>> binfmt_elf.c:(.text+0x16d8): undefined reference to `arch_randomize_brk'
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 35957 bytes --]
^ permalink raw reply [flat|nested] 22+ messages in thread
* [PATCH 7/8] powerpc/mm: Convert to default topdown mmap layout
2021-11-22 8:48 [PATCH 0/8] Convert powerpc to default topdown mmap layout Christophe Leroy
` (5 preceding siblings ...)
2021-11-22 8:48 ` [PATCH 6/8] mm: Allow arch specific arch_randomize_brk() with CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT Christophe Leroy
@ 2021-11-22 8:48 ` Christophe Leroy
2021-11-22 8:48 ` [PATCH 8/8] powerpc/mm: Properly randomise mmap with slices Christophe Leroy
2021-11-24 13:21 ` [PATCH 0/8] Convert powerpc to default topdown mmap layout Nicholas Piggin
8 siblings, 0 replies; 22+ messages in thread
From: Christophe Leroy @ 2021-11-22 8:48 UTC (permalink / raw)
To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, alex
Cc: Christophe Leroy, linux-kernel, linuxppc-dev, linux-mm
Select CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT and
remove arch/powerpc/mm/mmap.c
This change provides standard randomisation of mmaps.
See commit 8b8addf891de ("x86/mm/32: Enable full randomization on i386
and X86_32") for all the benefits of mmap randomisation.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
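Note, for reference only: the generic arch_pick_mmap_layout() from mm/util.c
that powerpc now relies on looks roughly like the sketch below (quoted from
memory, so check the tree). Unlike the removed powerpc version, it also adds
the random factor to the base in the legacy bottom-up case.

	void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack)
	{
		unsigned long random_factor = 0UL;

		if (current->flags & PF_RANDOMIZE)
			random_factor = arch_mmap_rnd();

		if (mmap_is_legacy(rlim_stack)) {
			/* bottom-up layout, base randomised as well */
			mm->mmap_base = TASK_UNMAPPED_BASE + random_factor;
			mm->get_unmapped_area = arch_get_unmapped_area;
		} else {
			/* top-down layout below the randomised mmap_base */
			mm->mmap_base = mmap_base(random_factor, rlim_stack);
			mm->get_unmapped_area = arch_get_unmapped_area_topdown;
		}
	}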
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/processor.h | 2 -
arch/powerpc/mm/Makefile | 2 +-
arch/powerpc/mm/mmap.c | 105 ---------------------------
4 files changed, 2 insertions(+), 108 deletions(-)
delete mode 100644 arch/powerpc/mm/mmap.c
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index dea74d7717c0..05ddcf99cb34 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -158,6 +158,7 @@ config PPC
select ARCH_USE_MEMTEST
select ARCH_USE_QUEUED_RWLOCKS if PPC_QUEUED_SPINLOCKS
select ARCH_USE_QUEUED_SPINLOCKS if PPC_QUEUED_SPINLOCKS
+ select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
select ARCH_WANT_IPC_PARSE_VERSION
select ARCH_WANT_IRQS_OFF_ACTIVATE_MM
select ARCH_WANT_LD_ORPHAN_WARN
diff --git a/arch/powerpc/include/asm/processor.h b/arch/powerpc/include/asm/processor.h
index e39bd0ff69f3..d906b14dd599 100644
--- a/arch/powerpc/include/asm/processor.h
+++ b/arch/powerpc/include/asm/processor.h
@@ -378,8 +378,6 @@ static inline void prefetchw(const void *x)
#define spin_lock_prefetch(x) prefetchw(x)
-#define HAVE_ARCH_PICK_MMAP_LAYOUT
-
/* asm stubs */
extern unsigned long isa300_idle_stop_noloss(unsigned long psscr_val);
extern unsigned long isa300_idle_stop_mayloss(unsigned long psscr_val);
diff --git a/arch/powerpc/mm/Makefile b/arch/powerpc/mm/Makefile
index d4c20484dad9..503a6e249940 100644
--- a/arch/powerpc/mm/Makefile
+++ b/arch/powerpc/mm/Makefile
@@ -5,7 +5,7 @@
ccflags-$(CONFIG_PPC64) := $(NO_MINIMAL_TOC)
-obj-y := fault.o mem.o pgtable.o mmap.o maccess.o pageattr.o \
+obj-y := fault.o mem.o pgtable.o maccess.o pageattr.o \
init_$(BITS).o pgtable_$(BITS).o \
pgtable-frag.o ioremap.o ioremap_$(BITS).o \
init-common.o mmu_context.o drmem.o \
diff --git a/arch/powerpc/mm/mmap.c b/arch/powerpc/mm/mmap.c
deleted file mode 100644
index 5972d619d274..000000000000
--- a/arch/powerpc/mm/mmap.c
+++ /dev/null
@@ -1,105 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-or-later
-/*
- * flexible mmap layout support
- *
- * Copyright 2003-2004 Red Hat Inc., Durham, North Carolina.
- * All Rights Reserved.
- *
- * Started by Ingo Molnar <mingo@elte.hu>
- */
-
-#include <linux/personality.h>
-#include <linux/mm.h>
-#include <linux/random.h>
-#include <linux/sched/signal.h>
-#include <linux/sched/mm.h>
-#include <linux/elf-randomize.h>
-#include <linux/security.h>
-#include <linux/mman.h>
-
-/*
- * Top of mmap area (just below the process stack).
- *
- * Leave at least a ~128 MB hole.
- */
-#define MIN_GAP (128*1024*1024)
-#define MAX_GAP (TASK_SIZE/6*5)
-
-static inline int mmap_is_legacy(struct rlimit *rlim_stack)
-{
- if (current->personality & ADDR_COMPAT_LAYOUT)
- return 1;
-
- if (rlim_stack->rlim_cur == RLIM_INFINITY)
- return 1;
-
- return sysctl_legacy_va_layout;
-}
-
-unsigned long arch_mmap_rnd(void)
-{
- unsigned long shift, rnd;
-
- shift = mmap_rnd_bits;
-#ifdef CONFIG_COMPAT
- if (is_32bit_task())
- shift = mmap_rnd_compat_bits;
-#endif
- rnd = get_random_long() % (1ul << shift);
-
- return rnd << PAGE_SHIFT;
-}
-
-static inline unsigned long stack_maxrandom_size(void)
-{
- if (!(current->flags & PF_RANDOMIZE))
- return 0;
-
- /* 8MB for 32bit, 1GB for 64bit */
- if (is_32bit_task())
- return (1<<23);
- else
- return (1<<30);
-}
-
-static inline unsigned long mmap_base(unsigned long rnd,
- struct rlimit *rlim_stack)
-{
- unsigned long gap = rlim_stack->rlim_cur;
- unsigned long pad = stack_maxrandom_size() + stack_guard_gap;
-
- /* Values close to RLIM_INFINITY can overflow. */
- if (gap + pad > gap)
- gap += pad;
-
- if (gap < MIN_GAP)
- gap = MIN_GAP;
- else if (gap > MAX_GAP)
- gap = MAX_GAP;
-
- return PAGE_ALIGN(DEFAULT_MAP_WINDOW - gap - rnd);
-}
-
-/*
- * This function, called very early during the creation of a new
- * process VM image, sets up which VM layout function to use:
- */
-void arch_pick_mmap_layout(struct mm_struct *mm, struct rlimit *rlim_stack)
-{
- unsigned long random_factor = 0UL;
-
- if (current->flags & PF_RANDOMIZE)
- random_factor = arch_mmap_rnd();
-
- /*
- * Fall back to the standard layout if the personality
- * bit is set, or if the expected stack growth is unlimited:
- */
- if (mmap_is_legacy(rlim_stack)) {
- mm->mmap_base = TASK_UNMAPPED_BASE;
- mm->get_unmapped_area = arch_get_unmapped_area;
- } else {
- mm->mmap_base = mmap_base(random_factor, rlim_stack);
- mm->get_unmapped_area = arch_get_unmapped_area_topdown;
- }
-}
--
2.33.1
^ permalink raw reply [flat|nested] 22+ messages in thread
* [PATCH 8/8] powerpc/mm: Properly randomise mmap with slices
2021-11-22 8:48 [PATCH 0/8] Convert powerpc to default topdown mmap layout Christophe Leroy
` (6 preceding siblings ...)
2021-11-22 8:48 ` [PATCH 7/8] powerpc/mm: Convert to default topdown mmap layout Christophe Leroy
@ 2021-11-22 8:48 ` Christophe Leroy
2021-11-24 13:21 ` [PATCH 0/8] Convert powerpc to default topdown mmap layout Nicholas Piggin
8 siblings, 0 replies; 22+ messages in thread
From: Christophe Leroy @ 2021-11-22 8:48 UTC (permalink / raw)
To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, alex
Cc: Christophe Leroy, linux-kernel, linuxppc-dev, linux-mm
Now that powerpc switched to default topdown mmap layout,
mm->mmap_base is properly randomised. However
slice_find_area_bottomup() doesn't use mm->mmap_base but
uses the fixed TASK_UNMAPPED_BASE instead.
Since slice_find_area_bottomup() is used as a fallback by
slice_find_area_topdown(), it can't use mm->mmap_base
directly.
Instead of always using TASK_UNMAPPED_BASE as base address, leave
it to the caller. When called from slice_find_area_topdown()
TASK_UNMAPPED_BASE is used. Otherwise mm->mmap_base is used.
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
---
arch/powerpc/mm/book3s64/slice.c | 18 +++++++-----------
1 file changed, 7 insertions(+), 11 deletions(-)
diff --git a/arch/powerpc/mm/book3s64/slice.c b/arch/powerpc/mm/book3s64/slice.c
index 8327a43d29cb..0fef63763e6d 100644
--- a/arch/powerpc/mm/book3s64/slice.c
+++ b/arch/powerpc/mm/book3s64/slice.c
@@ -276,20 +276,18 @@ static bool slice_scan_available(unsigned long addr,
}
static unsigned long slice_find_area_bottomup(struct mm_struct *mm,
- unsigned long len,
+ unsigned long addr, unsigned long len,
const struct slice_mask *available,
int psize, unsigned long high_limit)
{
int pshift = max_t(int, mmu_psize_defs[psize].shift, PAGE_SHIFT);
- unsigned long addr, found, next_end;
+ unsigned long found, next_end;
struct vm_unmapped_area_info info;
info.flags = 0;
info.length = len;
info.align_mask = PAGE_MASK & ((1ul << pshift) - 1);
info.align_offset = 0;
-
- addr = TASK_UNMAPPED_BASE;
/*
* Check till the allow max value for this mmap request
*/
@@ -322,12 +320,12 @@ static unsigned long slice_find_area_bottomup(struct mm_struct *mm,
}
static unsigned long slice_find_area_topdown(struct mm_struct *mm,
- unsigned long len,
+ unsigned long addr, unsigned long len,
const struct slice_mask *available,
int psize, unsigned long high_limit)
{
int pshift = max_t(int, mmu_psize_defs[psize].shift, PAGE_SHIFT);
- unsigned long addr, found, prev;
+ unsigned long found, prev;
struct vm_unmapped_area_info info;
unsigned long min_addr = max(PAGE_SIZE, mmap_min_addr);
@@ -335,8 +333,6 @@ static unsigned long slice_find_area_topdown(struct mm_struct *mm,
info.length = len;
info.align_mask = PAGE_MASK & ((1ul << pshift) - 1);
info.align_offset = 0;
-
- addr = mm->mmap_base;
/*
* If we are trying to allocate above DEFAULT_MAP_WINDOW
* Add the different to the mmap_base.
@@ -377,7 +373,7 @@ static unsigned long slice_find_area_topdown(struct mm_struct *mm,
* can happen with large stack limits and large mmap()
* allocations.
*/
- return slice_find_area_bottomup(mm, len, available, psize, high_limit);
+ return slice_find_area_bottomup(mm, TASK_UNMAPPED_BASE, len, available, psize, high_limit);
}
@@ -386,9 +382,9 @@ static unsigned long slice_find_area(struct mm_struct *mm, unsigned long len,
int topdown, unsigned long high_limit)
{
if (topdown)
- return slice_find_area_topdown(mm, len, mask, psize, high_limit);
+ return slice_find_area_topdown(mm, mm->mmap_base, len, mask, psize, high_limit);
else
- return slice_find_area_bottomup(mm, len, mask, psize, high_limit);
+ return slice_find_area_bottomup(mm, mm->mmap_base, len, mask, psize, high_limit);
}
static inline void slice_copy_mask(struct slice_mask *dst,
--
2.33.1
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH 0/8] Convert powerpc to default topdown mmap layout
2021-11-22 8:48 [PATCH 0/8] Convert powerpc to default topdown mmap layout Christophe Leroy
` (7 preceding siblings ...)
2021-11-22 8:48 ` [PATCH 8/8] powerpc/mm: Properly randomise mmap with slices Christophe Leroy
@ 2021-11-24 13:21 ` Nicholas Piggin
2021-11-24 13:40 ` Christophe Leroy
8 siblings, 1 reply; 22+ messages in thread
From: Nicholas Piggin @ 2021-11-24 13:21 UTC (permalink / raw)
To: alex, Benjamin Herrenschmidt, Christophe Leroy, Michael Ellerman,
Paul Mackerras
Cc: linux-kernel, linux-mm, linuxppc-dev
Excerpts from Christophe Leroy's message of November 22, 2021 6:48 pm:
> This series converts powerpc to default topdown mmap layout.
>
> powerpc provides its own arch_get_unmapped_area() only when
> slices are needed, which is only for book3s/64. First part of
> the series moves slices into book3s/64 specific directories
> and cleans up other subarchitectures.
>
> Then a small modification is done to core mm to allow
> powerpc to still provide its own arch_randomize_brk()
>
> Last part converts to default topdown mmap layout.
A nice series but will clash badly with the CONFIG_HASH_MMU
series of course. One will have to be rebased if they are
both to be merged.
Thanks,
Nick
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH 0/8] Convert powerpc to default topdown mmap layout
2021-11-24 13:21 ` [PATCH 0/8] Convert powerpc to default topdown mmap layout Nicholas Piggin
@ 2021-11-24 13:40 ` Christophe Leroy
2021-11-24 18:00 ` Christophe Leroy
0 siblings, 1 reply; 22+ messages in thread
From: Christophe Leroy @ 2021-11-24 13:40 UTC (permalink / raw)
To: Nicholas Piggin, alex, Benjamin Herrenschmidt, Michael Ellerman,
Paul Mackerras
Cc: linux-kernel, linux-mm, linuxppc-dev
On 24/11/2021 at 14:21, Nicholas Piggin wrote:
> Excerpts from Christophe Leroy's message of November 22, 2021 6:48 pm:
>> This series converts powerpc to default topdown mmap layout.
>>
>> powerpc provides its own arch_get_unmapped_area() only when
>> slices are needed, which is only for book3s/64. First part of
>> the series moves slices into book3s/64 specific directories
>> and cleans up other subarchitectures.
>>
>> Then a small modification is done to core mm to allow
>> powerpc to still provide its own arch_randomize_brk()
>>
>> Last part converts to default topdown mmap layout.
>
> A nice series but will clash badly with the CONFIG_HASH_MMU
> series of course. One will have to be rebased if they are
> both to be merged.
>
No worry, it should not be an issue.
If you already foresee that series being merged soon, I can rebase my
series on top of it right away.
Christophe
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH 0/8] Convert powerpc to default topdown mmap layout
2021-11-24 13:40 ` Christophe Leroy
@ 2021-11-24 18:00 ` Christophe Leroy
0 siblings, 0 replies; 22+ messages in thread
From: Christophe Leroy @ 2021-11-24 18:00 UTC (permalink / raw)
To: Nicholas Piggin, alex, Benjamin Herrenschmidt, Michael Ellerman,
Paul Mackerras
Cc: linux-mm, linuxppc-dev, linux-kernel
On 24/11/2021 at 14:40, Christophe Leroy wrote:
>
>
> On 24/11/2021 at 14:21, Nicholas Piggin wrote:
>> Excerpts from Christophe Leroy's message of November 22, 2021 6:48 pm:
>>> This series converts powerpc to default topdown mmap layout.
>>>
>>> powerpc provides its own arch_get_unmapped_area() only when
>>> slices are needed, which is only for book3s/64. First part of
>>> the series moves slices into book3s/64 specific directories
>>> and cleans up other subarchitectures.
>>>
>>> Then a small modification is done to core mm to allow
>>> powerpc to still provide its own arch_randomize_brk()
>>>
>>> Last part converts to default topdown mmap layout.
>>
>> A nice series but will clash badly with the CONFIG_HASH_MMU
>> series of course. One will have to be rebased if they are
>> both to be merged.
>>
>
> No worry, it should not be an issue.
>
> If you already foresee that series being merged soon, I can rebase my
> series on top of it right away.
>
In patchwork, v3 is flagged as superseded and I can't find a v4. Do you
have it somewhere?
Christophe
^ permalink raw reply [flat|nested] 22+ messages in thread