Linux-Fsdevel Archive on lore.kernel.org
From: ira.weiny@intel.com
To: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>, Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>, x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Vishal Verma <vishal.l.verma@intel.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, linux-kselftest@vger.kernel.org
Subject: [RFC PATCH 12/15] kmap: Add stray write protection for device pages
Date: Tue, 14 Jul 2020 00:02:17 -0700
Message-ID: <20200714070220.3500839-13-ira.weiny@intel.com>
In-Reply-To: <20200714070220.3500839-1-ira.weiny@intel.com>

From: Ira Weiny <ira.weiny@intel.com>

Device managed pages may have additional protections.  These
protections need to be removed prior to valid use by kernel users.

Check for special treatment of device managed pages in kmap and take
action if needed.  We use kmap as an interface for generic kernel code
because under normal circumstances it would be a bug for general kernel
code to not use kmap prior to accessing kernel memory.  Therefore, this
should allow any valid kernel users to seamlessly use these pages
without issues.

Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 include/linux/highmem.h | 32 +++++++++++++++++++++++++++++++-
 1 file changed, 31 insertions(+), 1 deletion(-)

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index d6e82e3de027..7f809d8d5a94 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -8,6 +8,7 @@
 #include <linux/mm.h>
 #include <linux/uaccess.h>
 #include <linux/hardirq.h>
+#include <linux/memremap.h>
 
 #include <asm/cacheflush.h>
 
@@ -31,6 +32,20 @@ static inline void invalidate_kernel_vmap_range(void *vaddr, int size)
 
 #include <asm/kmap_types.h>
 
+static inline void enable_access(struct page *page)
+{
+	if (!page_is_access_protected(page))
+		return;
+	dev_access_enable();
+}
+
+static inline void disable_access(struct page *page)
+{
+	if (!page_is_access_protected(page))
+		return;
+	dev_access_disable();
+}
+
 #ifdef CONFIG_HIGHMEM
 extern void *kmap_atomic_high_prot(struct page *page, pgprot_t prot);
 extern void kunmap_atomic_high(void *kvaddr);
@@ -55,6 +70,11 @@ static inline void *kmap(struct page *page)
 	else
 		addr = kmap_high(page);
 	kmap_flush_tlb((unsigned long)addr);
+	/*
+	 * Even non-highmem pages may have additional access protections which
+	 * need to be checked and potentially enabled.
+	 */
+	enable_access(page);
 	return addr;
 }
 
@@ -63,6 +83,11 @@ void kunmap_high(struct page *page);
 static inline void kunmap(struct page *page)
 {
 	might_sleep();
+	/*
+	 * Even non-highmem pages may have additional access protections which
+	 * need to be checked and potentially disabled.
+	 */
+	disable_access(page);
 	if (!PageHighMem(page))
 		return;
 	kunmap_high(page);
@@ -85,6 +110,7 @@ static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
 {
 	preempt_disable();
 	pagefault_disable();
+	enable_access(page);
 	if (!PageHighMem(page))
 		return page_address(page);
 	return kmap_atomic_high_prot(page, prot);
@@ -137,6 +163,7 @@ static inline unsigned long totalhigh_pages(void) { return 0UL; }
 static inline void *kmap(struct page *page)
 {
 	might_sleep();
+	enable_access(page);
 	return page_address(page);
 }
 
@@ -146,6 +173,7 @@ static inline void kunmap_high(struct page *page)
 
 static inline void kunmap(struct page *page)
 {
+	disable_access(page);
 #ifdef ARCH_HAS_FLUSH_ON_KUNMAP
 	kunmap_flush_on_unmap(page_address(page));
 #endif
@@ -155,6 +183,7 @@ static inline void *kmap_atomic(struct page *page)
 {
 	preempt_disable();
 	pagefault_disable();
+	enable_access(page);
 	return page_address(page);
 }
 #define kmap_atomic_prot(page, prot)	kmap_atomic(page)
@@ -216,7 +245,8 @@ static inline void kmap_atomic_idx_pop(void)
 #define kunmap_atomic(addr)                                     \
 do {                                                            \
 	BUILD_BUG_ON(__same_type((addr), struct page *));       \
-	kunmap_atomic_high(addr);                                \
+	disable_access(kmap_to_page(addr));                     \
+	kunmap_atomic_high(addr);                               \
 	pagefault_enable();                                     \
 	preempt_enable();                                       \
 } while (0)
-- 
2.25.1
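For illustration, a minimal sketch of how a kernel user of a device managed
page would pick up the protection toggling through this interface.  The
copy_to_device_page() helper below is hypothetical and not part of the patch;
page_is_access_protected(), dev_access_enable() and dev_access_disable() are
the interfaces added earlier in this series:

	/*
	 * Hypothetical example only: write data into a device managed
	 * (e.g. ZONE_DEVICE/pmem) page.  The kmap()/kunmap() pair
	 * transparently calls enable_access()/disable_access(), so the
	 * stray write protection is lowered only for the duration of
	 * the access and restored immediately afterwards.
	 */
	static int copy_to_device_page(struct page *page, unsigned int off,
				       const void *src, unsigned int len)
	{
		void *dst;

		if (off + len > PAGE_SIZE)
			return -EINVAL;

		dst = kmap(page);	/* enable_access(page) runs here */
		memcpy(dst + off, src, len);
		kunmap(page);		/* disable_access(page) runs here */

		return 0;
	}

Atomic users get the same behaviour: kmap_atomic() calls enable_access()
directly, and kunmap_atomic() recovers the struct page from the kernel
virtual address via kmap_to_page() before calling disable_access().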
Thread overview: 30+ messages

2020-07-14  7:02 [RFC PATCH 00/15] PKS: Add Protection Keys Supervisor (PKS) support ira.weiny
2020-07-14  7:02 ` [RFC PATCH 01/15] x86/pkeys: Create pkeys_internal.h ira.weiny
2020-07-14  7:02 ` [RFC PATCH 02/15] x86/fpu: Refactor arch_set_user_pkey_access() for PKS support ira.weiny
2020-07-14  7:02 ` [RFC PATCH 03/15] x86/pks: Enable Protection Keys Supervisor (PKS) ira.weiny
2020-07-14  7:02 ` [RFC PATCH 04/15] x86/pks: Preserve the PKRS MSR on context switch ira.weiny
2020-07-14  8:27 ` Peter Zijlstra
2020-07-14 18:53 ` Ira Weiny
2020-07-14 18:56 ` Dave Hansen
2020-07-14 19:05 ` Peter Zijlstra
2020-07-14 19:09 ` Ira Weiny
2020-07-14  7:02 ` [RFC PATCH 05/15] x86/pks: Add PKS kernel API ira.weiny
2020-07-14  7:02 ` [RFC PATCH 06/15] x86/pks: Add a debugfs file for allocated PKS keys ira.weiny
2020-07-14  7:02 ` [RFC PATCH 07/15] Documentation/pkeys: Update documentation for kernel pkeys ira.weiny
2020-07-14  7:02 ` [RFC PATCH 08/15] x86/pks: Add PKS Test code ira.weiny
2020-07-14  7:02 ` [RFC PATCH 09/15] fs/dax: Remove unused size parameter ira.weiny
2020-07-14  7:02 ` [RFC PATCH 10/15] drivers/dax: Expand lock scope to cover the use of addresses ira.weiny
2020-07-14  7:02 ` [RFC PATCH 11/15] memremap: Add zone device access protection ira.weiny
2020-07-14  8:40 ` Peter Zijlstra
2020-07-14 19:10 ` Ira Weiny
2020-07-14 19:40 ` Peter Zijlstra
2020-07-14  7:02 ` ira.weiny [this message]
2020-07-14  8:44 ` [RFC PATCH 12/15] kmap: Add stray write protection for device pages  Peter Zijlstra
2020-07-14 19:06 ` Ira Weiny
2020-07-14 19:29 ` Peter Zijlstra
2020-07-14 19:42 ` Dave Hansen
2020-07-14 19:49 ` Peter Zijlstra
2020-07-14 20:00 ` Ira Weiny
2020-07-14  7:02 ` [RFC PATCH 13/15] dax: Stray write protection for dax_direct_access() ira.weiny
2020-07-14  7:02 ` [RFC PATCH 14/15] nvdimm/pmem: Stray write protection for pmem->virt_addr ira.weiny
2020-07-14  7:02 ` [RFC PATCH 15/15] [dax|pmem]: Enable stray write protection ira.weiny