LKML Archive on lore.kernel.org
From: Andrey Konovalov <andreyknvl@google.com>
To: "Andrey Ryabinin" <aryabinin@virtuozzo.com>,
	"Alexander Potapenko" <glider@google.com>,
	"Dmitry Vyukov" <dvyukov@google.com>,
	"Jonathan Corbet" <corbet@lwn.net>,
	"Catalin Marinas" <catalin.marinas@arm.com>,
	"Will Deacon" <will.deacon@arm.com>,
	"Christopher Li" <sparse@chrisli.org>,
	"Christoph Lameter" <cl@linux.com>,
	"Pekka Enberg" <penberg@kernel.org>,
	"David Rientjes" <rientjes@google.com>,
	"Joonsoo Kim" <iamjoonsoo.kim@lge.com>,
	"Andrew Morton" <akpm@linux-foundation.org>,
	"Masahiro Yamada" <yamada.masahiro@socionext.com>,
	"Michal Marek" <michal.lkml@markovi.net>,
	"Andrey Konovalov" <andreyknvl@google.com>,
	"Mark Rutland" <mark.rutland@arm.com>,
	"Nick Desaulniers" <ndesaulniers@google.com>,
	"Yury Norov" <ynorov@caviumnetworks.com>,
	"Marc Zyngier" <marc.zyngier@arm.com>,
	"Kristina Martsenko" <kristina.martsenko@arm.com>,
	"Suzuki K Poulose" <suzuki.poulose@arm.com>,
	"Punit Agrawal" <punit.agrawal@arm.com>,
	"Dave Martin" <dave.martin@arm.com>,
	"Ard Biesheuvel" <ard.biesheuvel@linaro.org>,
	"James Morse" <james.morse@arm.com>,
	"Michael Weiser" <michael.weiser@gmx.de>,
	"Julien Thierry" <julien.thierry@arm.com>,
	"Tyler Baicar" <tbaicar@codeaurora.org>,
	"Eric W . Biederman" <ebiederm@xmission.com>,
	"Thomas Gleixner" <tglx@linutronix.de>,
	"Ingo Molnar" <mingo@kernel.org>,
	"Kees Cook" <keescook@chromium.org>,
	"Sandipan Das" <sandipan@linux.vnet.ibm.com>,
	"David Woodhouse" <dwmw@amazon.co.uk>,
	"Paul Lawrence" <paullawrence@google.com>,
	"Herbert Xu" <herbert@gondor.apana.org.au>,
	"Josh Poimboeuf" <jpoimboe@redhat.com>,
	"Geert Uytterhoeven" <geert@linux-m68k.org>,
	"Tom Lendacky" <thomas.lendacky@amd.com>,
	"Arnd Bergmann" <arnd@arndb.de>,
	"Dan Williams" <dan.j.williams@intel.com>,
	"Michal Hocko" <mhocko@suse.com>, "Jan Kara" <jack@suse.cz>,
	"Ross Zwisler" <ross.zwisler@linux.intel.com>,
	"Jérôme Glisse" <jglisse@redhat.com>,
	"Matthew Wilcox" <mawilcox@microsoft.com>,
	"Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>,
	"Souptick Joarder" <jrdr.linux@gmail.com>,
	"Hugh Dickins" <hughd@google.com>,
	"Davidlohr Bueso" <dave@stgolabs.net>,
	"Greg Kroah-Hartman" <gregkh@linuxfoundation.org>,
	"Philippe Ombredanne" <pombredanne@nexb.com>,
	"Kate Stewart" <kstewart@linuxfoundation.org>,
	"Laura Abbott" <labbott@redhat.com>,
	"Boris Brezillon" <boris.brezillon@bootlin.com>,
	"Vlastimil Babka" <vbabka@suse.cz>,
	"Pintu Agarwal" <pintu.ping@gmail.com>,
	"Doug Berger" <opendmb@gmail.com>,
	"Anshuman Khandual" <khandual@linux.vnet.ibm.com>,
	"Mike Rapoport" <rppt@linux.vnet.ibm.com>,
	"Mel Gorman" <mgorman@suse.de>,
	"Pavel Tatashin" <pasha.tatashin@oracle.com>,
	"Tetsuo Handa" <penguin-kernel@I-love.SAKURA.ne.jp>,
	kasan-dev@googlegroups.com, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-sparse@vger.kernel.org, linux-mm@kvack.org,
	linux-kbuild@vger.kernel.org
Cc: Kostya Serebryany <kcc@google.com>,
	Evgeniy Stepanov <eugenis@google.com>,
	Lee Smith <Lee.Smith@arm.com>,
	Ramana Radhakrishnan <Ramana.Radhakrishnan@arm.com>,
	Jacob Bramley <Jacob.Bramley@arm.com>,
	Ruben Ayrapetyan <Ruben.Ayrapetyan@arm.com>,
	Kees Cook <keescook@google.com>, Jann Horn <jannh@google.com>,
	Mark Brand <markbrand@google.com>,
	Chintan Pandya <cpandya@codeaurora.org>
Subject: [PATCH v1 15/16] khwasan, mm, arm64: tag non slab memory allocated via pagealloc
Date: Tue,  8 May 2018 19:21:01 +0200	[thread overview]
Message-ID: <52d2542323262ede3510754bb07cbc1ed8c347b0.1525798754.git.andreyknvl@google.com> (raw)
In-Reply-To: <cover.1525798753.git.andreyknvl@google.com>

KHWASAN doesn't check memory accesses through pointers tagged with 0xff.
When page_address is used to get a pointer to the memory that corresponds
to some page, the tag of the resulting pointer gets set to 0xff, even
though the allocated memory might have been tagged differently.

For slab pages it's impossible to recover the correct tag to return from
page_address, since the page might contain multiple slab objects tagged
with different values, and we can't know in advance which one of them is
going to get accessed. For non-slab pages, however, we can recover the tag
in page_address, since the whole page was marked with the same tag.

This patch adds tagging of non-slab memory allocated via pagealloc. To
set the tag of the pointer returned from page_address, the tag is stored
in page->flags when the memory is allocated.

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 arch/arm64/include/asm/memory.h   | 11 +++++++++++
 include/linux/mm.h                | 29 +++++++++++++++++++++++++++++
 include/linux/page-flags-layout.h | 10 ++++++++++
 mm/cma.c                          |  1 +
 mm/kasan/common.c                 | 15 +++++++++++++--
 mm/page_alloc.c                   |  1 +
 6 files changed, 65 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index f206273469b5..9ec78a44c5ff 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -304,7 +304,18 @@ static inline void *phys_to_virt(phys_addr_t x)
 #define __virt_to_pgoff(kaddr)	(((u64)(kaddr) & ~PAGE_OFFSET) / PAGE_SIZE * sizeof(struct page))
 #define __page_to_voff(kaddr)	(((u64)(kaddr) & ~VMEMMAP_START) * PAGE_SIZE / sizeof(struct page))
 
+#ifndef CONFIG_KASAN_HW
 #define page_to_virt(page)	((void *)((__page_to_voff(page)) | PAGE_OFFSET))
+#else
+#define page_to_virt(page)	({					\
+	unsigned long __addr =						\
+		((__page_to_voff(page)) | PAGE_OFFSET);			\
+	if (!PageSlab((struct page *)page))				\
+		__addr = KASAN_SET_TAG(__addr, page_kasan_tag(page));	\
+	((void *)__addr);						\
+})
+#endif
+
 #define virt_to_page(vaddr)	((struct page *)((__virt_to_pgoff(vaddr)) | VMEMMAP_START))
 
 #define _virt_addr_valid(kaddr)	pfn_valid((((u64)(kaddr) & ~PAGE_OFFSET) \
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 1ac1f06a4be6..d6d596824803 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -770,6 +770,7 @@ int finish_mkwrite_fault(struct vm_fault *vmf);
 #define NODES_PGOFF		(SECTIONS_PGOFF - NODES_WIDTH)
 #define ZONES_PGOFF		(NODES_PGOFF - ZONES_WIDTH)
 #define LAST_CPUPID_PGOFF	(ZONES_PGOFF - LAST_CPUPID_WIDTH)
+#define KASAN_TAG_PGOFF		(LAST_CPUPID_PGOFF - KASAN_TAG_WIDTH)
 
 /*
  * Define the bit shifts to access each section.  For non-existent
@@ -780,6 +781,7 @@ int finish_mkwrite_fault(struct vm_fault *vmf);
 #define NODES_PGSHIFT		(NODES_PGOFF * (NODES_WIDTH != 0))
 #define ZONES_PGSHIFT		(ZONES_PGOFF * (ZONES_WIDTH != 0))
 #define LAST_CPUPID_PGSHIFT	(LAST_CPUPID_PGOFF * (LAST_CPUPID_WIDTH != 0))
+#define KASAN_TAG_PGSHIFT	(KASAN_TAG_PGOFF * (KASAN_TAG_WIDTH != 0))
 
 /* NODE:ZONE or SECTION:ZONE is used to ID a zone for the buddy allocator */
 #ifdef NODE_NOT_IN_PAGE_FLAGS
@@ -802,6 +804,7 @@ int finish_mkwrite_fault(struct vm_fault *vmf);
 #define NODES_MASK		((1UL << NODES_WIDTH) - 1)
 #define SECTIONS_MASK		((1UL << SECTIONS_WIDTH) - 1)
 #define LAST_CPUPID_MASK	((1UL << LAST_CPUPID_SHIFT) - 1)
+#define KASAN_TAG_MASK		((1UL << KASAN_TAG_WIDTH) - 1)
 #define ZONEID_MASK		((1UL << ZONEID_SHIFT) - 1)
 
 static inline enum zone_type page_zonenum(const struct page *page)
@@ -1021,6 +1024,32 @@ static inline bool cpupid_match_pid(struct task_struct *task, int cpupid)
 }
 #endif /* CONFIG_NUMA_BALANCING */
 
+#ifdef CONFIG_KASAN_HW
+static inline u8 page_kasan_tag(const struct page *page)
+{
+	return (page->flags >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK;
+}
+
+static inline void page_kasan_tag_set(struct page *page, u8 tag)
+{
+	page->flags &= ~(KASAN_TAG_MASK << KASAN_TAG_PGSHIFT);
+	page->flags |= (tag & KASAN_TAG_MASK) << KASAN_TAG_PGSHIFT;
+}
+
+static inline void page_kasan_tag_reset(struct page *page)
+{
+	page_kasan_tag_set(page, 0xff);
+}
+#else
+static inline u8 page_kasan_tag(const struct page *page)
+{
+	return 0xff;
+}
+
+static inline void page_kasan_tag_set(struct page *page, u8 tag) { }
+static inline void page_kasan_tag_reset(struct page *page) { }
+#endif
+
 static inline struct zone *page_zone(const struct page *page)
 {
 	return &NODE_DATA(page_to_nid(page))->node_zones[page_zonenum(page)];
diff --git a/include/linux/page-flags-layout.h b/include/linux/page-flags-layout.h
index 7ec86bf31ce4..8dbad17664c2 100644
--- a/include/linux/page-flags-layout.h
+++ b/include/linux/page-flags-layout.h
@@ -82,6 +82,16 @@
 #define LAST_CPUPID_WIDTH 0
 #endif
 
+#ifdef CONFIG_KASAN_HW
+#define KASAN_TAG_WIDTH 8
+#if SECTIONS_WIDTH+NODES_WIDTH+ZONES_WIDTH+LAST_CPUPID_WIDTH+KASAN_TAG_WIDTH \
+	> BITS_PER_LONG - NR_PAGEFLAGS
+#error "KASAN: not enough bits in page flags for tag"
+#endif
+#else
+#define KASAN_TAG_WIDTH 0
+#endif
+
 /*
  * We are going to use the flags for the page to node mapping if its in
  * there.  This includes the case where there is no node, so it is implicit.
diff --git a/mm/cma.c b/mm/cma.c
index aa40e6c7b042..f657db289bba 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -526,6 +526,7 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align,
 	}
 
 	trace_cma_alloc(pfn, page, count, align);
+	page_kasan_tag_reset(page);
 
 	if (ret && !(gfp_mask & __GFP_NOWARN)) {
 		pr_err("%s: alloc failed, req-size: %zu pages, ret: %d\n",
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 0654bf97257b..7cd4a4e8c3be 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -207,8 +207,18 @@ void kasan_unpoison_stack_above_sp_to(const void *watermark)
 
 void kasan_alloc_pages(struct page *page, unsigned int order)
 {
+#ifdef CONFIG_KASAN_GENERIC
 	if (likely(!PageHighMem(page)))
 		kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order);
+#else
+	if (!PageSlab(page)) {
+		u8 tag = random_tag();
+
+		kasan_poison_shadow(page_address(page), PAGE_SIZE << order,
+					tag);
+		page_kasan_tag_set(page, tag);
+	}
+#endif
 }
 
 void kasan_free_pages(struct page *page, unsigned int order)
@@ -433,6 +443,7 @@ void *kasan_kmalloc_large(const void *ptr, size_t size, gfp_t flags)
 #else
 	tag = random_tag();
 	kasan_poison_shadow(ptr, redzone_start - (unsigned long)ptr, tag);
+	page_kasan_tag_set(page, tag);
 #endif
 	kasan_poison_shadow((void *)redzone_start, redzone_end - redzone_start,
 		KASAN_PAGE_REDZONE);
@@ -462,7 +473,7 @@ void kasan_poison_kfree(void *ptr, unsigned long ip)
 	page = virt_to_head_page(ptr);
 
 	if (unlikely(!PageSlab(page))) {
-		if (reset_tag(ptr) != page_address(page)) {
+		if (ptr != page_address(page)) {
 			kasan_report_invalid_free(ptr, ip);
 			return;
 		}
@@ -475,7 +486,7 @@ void kasan_poison_kfree(void *ptr, unsigned long ip)
 
 void kasan_kfree_large(void *ptr, unsigned long ip)
 {
-	if (reset_tag(ptr) != page_address(virt_to_head_page(ptr)))
+	if (ptr != page_address(virt_to_head_page(ptr)))
 		kasan_report_invalid_free(ptr, ip);
 	/* The object will be poisoned by page_alloc. */
 }
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 905db9d7962f..54df9c852c6e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1179,6 +1179,7 @@ static void __meminit __init_single_page(struct page *page, unsigned long pfn,
 	init_page_count(page);
 	page_mapcount_reset(page);
 	page_cpupid_reset_last(page);
+	page_kasan_tag_reset(page);
 
 	INIT_LIST_HEAD(&page->lru);
 #ifdef WANT_PAGE_VIRTUAL
-- 
2.17.0.441.gb46fe60e1d-goog


Thread overview: 25+ messages
2018-05-08 17:20 [PATCH v1 00/16] khwasan: kernel hardware assisted address sanitizer Andrey Konovalov
2018-05-08 17:20 ` [PATCH v1 01/16] khwasan, mm: change kasan hooks signatures Andrey Konovalov
2018-05-14 16:56   ` Andrey Ryabinin
2018-05-08 17:20 ` [PATCH v1 02/16] khwasan: move common kasan and khwasan code to common.c Andrey Konovalov
2018-05-15 13:28   ` Andrey Ryabinin
2018-05-08 17:20 ` [PATCH v1 03/16] khwasan: add CONFIG_KASAN_GENERIC and CONFIG_KASAN_HW Andrey Konovalov
2018-05-14 16:57   ` Andrey Ryabinin
2018-05-08 17:20 ` [PATCH v1 04/16] khwasan, arm64: adjust shadow size for CONFIG_KASAN_HW Andrey Konovalov
2018-05-08 17:20 ` [PATCH v1 05/16] khwasan: initialize shadow to 0xff Andrey Konovalov
2018-05-08 17:20 ` [PATCH v1 06/16] khwasan, arm64: untag virt address in __kimg_to_phys Andrey Konovalov
2018-05-08 17:20 ` [PATCH v1 07/16] khwasan, arm64: fix up fault handling logic Andrey Konovalov
2018-05-08 17:20 ` [PATCH v1 08/16] khwasan: add tag related helper functions Andrey Konovalov
2018-05-14 16:58   ` Andrey Ryabinin
2018-05-08 17:20 ` [PATCH v1 09/16] khwasan, arm64: enable top byte ignore for the kernel Andrey Konovalov
2018-05-08 17:20 ` [PATCH v1 10/16] khwasan, mm: perform untagged pointers comparison in krealloc Andrey Konovalov
2018-05-08 17:20 ` [PATCH v1 11/16] khwasan: split out kasan_report.c from report.c Andrey Konovalov
2018-05-08 17:20 ` [PATCH v1 12/16] khwasan: add bug reporting routines Andrey Konovalov
2018-05-08 17:20 ` [PATCH v1 13/16] khwasan: add hooks implementation Andrey Konovalov
2018-05-15 13:13   ` Andrey Ryabinin
2018-05-08 17:21 ` [PATCH v1 14/16] khwasan, arm64: add brk handler for inline instrumentation Andrey Konovalov
2018-05-08 17:21 ` Andrey Konovalov [this message]
2018-05-15 14:06   ` [PATCH v1 15/16] khwasan, mm, arm64: tag non slab memory allocated via pagealloc Andrey Ryabinin
2018-05-08 17:21 ` [PATCH v1 16/16] khwasan: update kasan documentation Andrey Konovalov
2018-05-09  5:26   ` Mike Rapoport
2018-05-09 12:13     ` Andrey Konovalov
