From: Andrey Konovalov <andreyknvl@google.com>
To: Catalin Marinas, Will Deacon, Jonathan Corbet, Mark Rutland, Robin Murphy, Al Viro, Andrey Konovalov, James Morse, Kees Cook, Bart Van Assche, Kate Stewart, Greg Kroah-Hartman, Thomas Gleixner, Philippe Ombredanne, Andrew Morton, Ingo Molnar, "Kirill A . Shutemov", Dan Williams, "Aneesh Kumar K . V", Zi Yan, linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Dmitry Vyukov, Kostya Serebryany, Evgeniy Stepanov, Lee Smith, Ramana Radhakrishnan, Jacob Bramley, Ruben Ayrapetyan, Chintan Pandya
Subject: [PATCH v2 3/6] arm64: untag user addresses in access_ok and __uaccess_mask_ptr
Date: Thu, 3 May 2018 16:15:41 +0200
Message-Id: <20bddb7a15984ba05eb1d248162a845af246449b.1525356769.git.andreyknvl@google.com>
X-Mailer: git-send-email 2.17.0.441.gb46fe60e1d-goog
X-Mailing-List: linux-kernel@vger.kernel.org

copy_from_user (and a few other similar functions) are used to copy data
from user memory into kernel memory or vice versa. Since a user can
provide a tagged pointer to one of the syscalls that use copy_from_user,
we need to correctly handle such pointers.

Do this by untagging user pointers in access_ok and in
__uaccess_mask_ptr, before performing access validity checks.
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
 arch/arm64/include/asm/uaccess.h | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 2d6451cbaa86..fa7318d3d7d5 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -105,7 +105,8 @@ static inline unsigned long __range_ok(const void __user *addr, unsigned long si
 #define untagged_addr(addr)		\
 	((__typeof__(addr))sign_extend64((__u64)(addr), 55))
 
-#define access_ok(type, addr, size)	__range_ok(addr, size)
+#define access_ok(type, addr, size)	\
+	__range_ok(untagged_addr(addr), size)
 #define user_addr_max			get_fs
 
 #define _ASM_EXTABLE(from, to)					\
@@ -237,7 +238,8 @@ static inline void uaccess_enable_not_uao(void)
 
 /*
  * Sanitise a uaccess pointer such that it becomes NULL if above the
- * current addr_limit.
+ * current addr_limit. In case the pointer is tagged (has the top byte set),
+ * untag the pointer before checking.
  */
 #define uaccess_mask_ptr(ptr) (__typeof__(ptr))__uaccess_mask_ptr(ptr)
 static inline void __user *__uaccess_mask_ptr(const void __user *ptr)
@@ -245,10 +247,11 @@ static inline void __user *__uaccess_mask_ptr(const void __user *ptr)
 	void __user *safe_ptr;
 
 	asm volatile(
-	"	bics	xzr, %1, %2\n"
+	"	bics	xzr, %3, %2\n"
 	"	csel	%0, %1, xzr, eq\n"
 	: "=&r" (safe_ptr)
-	: "r" (ptr), "r" (current_thread_info()->addr_limit)
+	: "r" (ptr), "r" (current_thread_info()->addr_limit),
+	  "r" (untagged_addr(ptr))
 	: "cc");
 
 	csdb();
-- 
2.17.0.441.gb46fe60e1d-goog