LKML Archive on lore.kernel.org
* [PATCHv2] arm64:mm: free the useless initial page table
@ 2014-12-09  7:26 zhichang.yuan
From: zhichang.yuan @ 2014-12-09  7:26 UTC
  To: Catalin.Marinas, will.deacon
  Cc: linux-arm-kernel, linaro-kernel, linux-kernel, liguozhu, dsaxena,
	zhichang.yuan

From: "zhichang.yuan" <zhichang.yuan@linaro.org>

For 64K page systems, after a PMD section mapping is created, the corresponding
initial page table page is no longer needed. That page can be freed.

Changes since v1:

* make the code consistent between alloc_init_pmd and alloc_init_pud;
* flush the TLB before the unused page table page is freed;

Signed-off-by: Zhichang Yuan <zhichang.yuan@linaro.org>
---
 arch/arm64/include/asm/pgtable.h |    3 +++
 arch/arm64/mm/mmu.c              |   15 ++++++++++++---
 2 files changed, 15 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 41a43bf..8a135b6 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -337,9 +337,12 @@ extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
 
 #ifdef CONFIG_ARM64_64K_PAGES
 #define pud_sect(pud)		(0)
+#define pud_table(pud)		(1)
 #else
 #define pud_sect(pud)		((pud_val(pud) & PUD_TYPE_MASK) == \
 				 PUD_TYPE_SECT)
+#define pud_table(pud)		((pud_val(pud) & PUD_TYPE_MASK) == \
+					PUD_TYPE_TABLE)
 #endif
 
 static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index f4f8b50..515f75b 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -191,8 +191,14 @@ static void __init alloc_init_pmd(pud_t *pud, unsigned long addr,
 			 * Check for previous table entries created during
 			 * boot (__create_page_tables) and flush them.
 			 */
-			if (!pmd_none(old_pmd))
+			if (!pmd_none(old_pmd)) {
 				flush_tlb_all();
+				if (pmd_table(old_pmd)) {
+					phys_addr_t table = __pa(pte_offset_map(&old_pmd, 0));
+
+					memblock_free(table, PAGE_SIZE);
+				}
+			}
 		} else {
 			alloc_init_pte(pmd, addr, next, __phys_to_pfn(phys),
 				       prot_pte);
@@ -234,9 +240,12 @@ static void __init alloc_init_pud(pgd_t *pgd, unsigned long addr,
 			 * Look up the old pmd table and free it.
 			 */
 			if (!pud_none(old_pud)) {
-				phys_addr_t table = __pa(pmd_offset(&old_pud, 0));
-				memblock_free(table, PAGE_SIZE);
 				flush_tlb_all();
+				if (pud_table(old_pud)) {
+					phys_addr_t table = __pa(pmd_offset(&old_pud, 0));
+
+					memblock_free(table, PAGE_SIZE);
+				}
 			}
 		} else {
 			alloc_init_pmd(pud, addr, next, phys, map_io);
-- 
1.7.9.5
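
As background for the pud_table()/pmd_table() checks above: in the ARMv8
translation tables, bits [1:0] of a valid descriptor distinguish a block
(section) entry, 0b01, from a table entry, 0b11; PUD_TYPE_MASK and
PUD_TYPE_TABLE encode exactly that test. A minimal stand-alone sketch of the
check, with the constants written out by hand rather than taken from the
kernel headers (names here are illustrative, not the kernel's):

#include <stdint.h>
#include <stdio.h>

/* Descriptor bits [1:0] per the ARMv8 ARM; these mirror the kernel's
 * PUD_TYPE_MASK / PUD_TYPE_TABLE / PUD_TYPE_SECT definitions. */
#define TYPE_MASK	0x3UL
#define TYPE_TABLE	0x3UL	/* valid + table */
#define TYPE_SECT	0x1UL	/* valid + block (section) */

static int desc_is_table(uint64_t desc)
{
	return (desc & TYPE_MASK) == TYPE_TABLE;
}

int main(void)
{
	uint64_t table_desc = 0x40000000UL | TYPE_TABLE;
	uint64_t sect_desc  = 0x40000000UL | TYPE_SECT;

	printf("table descriptor   -> %d\n", desc_is_table(table_desc)); /* 1 */
	printf("section descriptor -> %d\n", desc_is_table(sect_desc));  /* 0 */
	return 0;
}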



* Re: [PATCHv2] arm64:mm: free the useless initial page table
@ 2015-01-23 16:21 ` Catalin Marinas
From: Catalin Marinas @ 2015-01-23 16:21 UTC
  To: zhichang.yuan
  Cc: Will Deacon, linux-arm-kernel, linaro-kernel, linux-kernel,
	liguozhu, dsaxena, Laura Abbott, Ard Biesheuvel

On Tue, Dec 09, 2014 at 07:26:47AM +0000, zhichang.yuan@linaro.org wrote:
> From: "zhichang.yuan" <zhichang.yuan@linaro.org>
> 
> For 64K page systems, after a PMD section mapping is created, the corresponding
> initial page table page is no longer needed. That page can be freed.
> 
> Changes since v1:
> 
> * make the code consistent between alloc_init_pmd and alloc_init_pud;
> * flush the TLB before the unused page table page is freed;
> 
> Signed-off-by: Zhichang Yuan <zhichang.yuan@linaro.org>

I thought about queuing this patch but realised that alloc_init_pmd/pud
may be called in a late context where memblock_free() no longer makes
sense. In that case we may just end up with a few old page table pages
sitting around (not many, though). In general we don't go from smaller
to larger mappings (which is what this patch targets), but given the
API, I'm not sure we have any guarantee.

One solution would be to check for alloc == early_alloc or something
similar. Cc'ing Laura and Ard for ideas, as they added the
create_mapping_late() code.
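
For illustration, a rough sketch of that guard against the
alloc_init_pmd() shape shown in the patch above; the alloc and
early_alloc names are assumed from the create_mapping_late() series,
not taken from any merged code:

	/* Hypothetical: hand the superseded pte table back to memblock
	 * only when it came from the early (memblock) allocator; a late
	 * caller's page would simply be left allocated. */
	if (!pmd_none(old_pmd)) {
		flush_tlb_all();
		if (pmd_table(old_pmd) && alloc == early_alloc) {
			phys_addr_t table = __pa(pte_offset_map(&old_pmd, 0));

			memblock_free(table, PAGE_SIZE);
		}
	}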



* Re: [PATCHv2] arm64:mm: free the useless initial page table
@ 2015-01-23 17:40 ` Ard Biesheuvel
From: Ard Biesheuvel @ 2015-01-23 17:40 UTC
  To: Catalin Marinas
  Cc: zhichang.yuan, Will Deacon, linux-arm-kernel, linaro-kernel,
	linux-kernel, liguozhu, dsaxena, Laura Abbott

On 23 January 2015 at 16:21, Catalin Marinas <catalin.marinas@arm.com> wrote:
> On Tue, Dec 09, 2014 at 07:26:47AM +0000, zhichang.yuan@linaro.org wrote:
>> From: "zhichang.yuan" <zhichang.yuan@linaro.org>
>>
>> For 64K page systems, after a PMD section mapping is created, the corresponding
>> initial page table page is no longer needed. That page can be freed.
>>
>> Changes since v1:
>>
>> * make the code consistent between alloc_init_pmd and alloc_init_pud;
>> * flush the TLB before the unused page table page is freed;
>>
>> Signed-off-by: Zhichang Yuan <zhichang.yuan@linaro.org>
>
> I thought about queuing this patch but realised that alloc_init_pmd/pud
> may be called in a late context where memblock_free() no longer makes
> sense. In that case we may just end up with a few old page table pages
> sitting around (not many, though). In general we don't go from smaller
> to larger mappings (which is what this patch targets), but given the
> API, I'm not sure we have any guarantee.
>
> One solution would be to check for alloc == early_alloc or something
> similar. Cc'ing Laura and Ard for ideas, as they added the
> create_mapping_late() code.
>

The UEFI page tables are only built up once, based on a series of
disjoint memory regions, so they will never hit either of the
memblock_free() branches. And AFAICT, the DEBUG_RODATA code does its
splitting early, so the create_mapping_late() code only changes
permissions and never changes the granularity of any region.

Perhaps it's sufficient to add a comment and a BUG_ON(alloc !=
early_alloc) to the memblock_free() branches?

-- 
Ard.
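
Concretely, Ard's suggestion might look like the following sketch,
applied at the memblock_free() sites shown in the patch; the alloc and
early_alloc names are again assumptions, and the code that was
eventually merged may differ:

	if (!pmd_none(old_pmd)) {
		flush_tlb_all();
		if (pmd_table(old_pmd)) {
			phys_addr_t table = __pa(pte_offset_map(&old_pmd, 0));

			/*
			 * Hypothetical: only pages from the early
			 * memblock allocator may be handed back this
			 * way; anything else is a bug in the caller.
			 */
			BUG_ON(alloc != early_alloc);
			memblock_free(table, PAGE_SIZE);
		}
	}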




* Re: [PATCHv2] arm64:mm: free the useless initial page table
@ 2015-01-28 12:10 ` Catalin Marinas
From: Catalin Marinas @ 2015-01-28 12:10 UTC
  To: Ard Biesheuvel
  Cc: zhichang.yuan, Will Deacon, linux-arm-kernel, linaro-kernel,
	linux-kernel, liguozhu, dsaxena, Laura Abbott

On Fri, Jan 23, 2015 at 05:40:40PM +0000, Ard Biesheuvel wrote:
> On 23 January 2015 at 16:21, Catalin Marinas <catalin.marinas@arm.com> wrote:
> > On Tue, Dec 09, 2014 at 07:26:47AM +0000, zhichang.yuan@linaro.org wrote:
> >> From: "zhichang.yuan" <zhichang.yuan@linaro.org>
> >>
> >> For 64K page systems, after a PMD section mapping is created, the corresponding
> >> initial page table page is no longer needed. That page can be freed.
> >>
> >> Changes since v1:
> >>
> >> * make the code consistent between alloc_init_pmd and alloc_init_pud;
> >> * flush the TLB before the unused page table page is freed;
> >>
> >> Signed-off-by: Zhichang Yuan <zhichang.yuan@linaro.org>
> >
> > I thought about queuing this patch but realised that alloc_init_pmd/pud
> > may be called in a late context where memblock_free() no longer makes
> > sense. In that case we may just end up with a few old page table pages
> > sitting around (not many, though). In general we don't go from smaller
> > to larger mappings (which is what this patch targets), but given the
> > API, I'm not sure we have any guarantee.
> >
> > One solution would be to check for alloc == early_alloc or something
> > similar. Cc'ing Laura and Ard for ideas, as they added the
> > create_mapping_late() code.
> >
> 
> The UEFI page tables are only built up once, based on a series of
> disjoint memory regions, so they will never hit either of the
> memblock_free() branches. And AFAICT, the DEBUG_RODATA code does its
> splitting early, so the create_mapping_late() code only changes
> permissions and never changes the granularity of any region.
>
> Perhaps it's sufficient to add a comment and a BUG_ON(alloc !=
> early_alloc) to the memblock_free() branches?

Thanks for confirming. I merged this patch together with the BUG_ON(),
just in case.

-- 
Catalin


