LKML Archive on lore.kernel.org
* [Patch v3 0/7] randomize kernel physical address and virtual address separately
@ 2015-03-08 5:38 Baoquan He
2015-03-08 5:38 ` [Patch v3 1/7] x86, kaslr: Fix a bug that relocation can not be handled when kernel is loaded above 2G Baoquan He
` (6 more replies)
0 siblings, 7 replies; 10+ messages in thread
From: Baoquan He @ 2015-03-08 5:38 UTC (permalink / raw)
To: keescook, yinghai, hpa, mingo, bp, matt.fleming, vgoyal, tglx
Cc: linux-kernel, Baoquan He
Currently KASLR only randomizes the physical address where the kernel is
loaded, then adds the same delta to the virtual address of the kernel text
mapping. Because the kernel virtual address can only range from
__START_KERNEL_map to __START_KERNEL_map + CONFIG_RANDOMIZE_BASE_MAX_OFFSET,
namely [0xffffffff80000000, 0xffffffffc0000000], the physical address can
only be randomized within [LOAD_PHYSICAL_ADDR,
CONFIG_RANDOMIZE_BASE_MAX_OFFSET], namely [16M, 1G].
So hpa and Vivek suggested randomizing the physical and virtual addresses
separately. This patchset changes the behavior accordingly: both the
physical address where the kernel is decompressed and the virtual address
where the kernel text is mapped are randomized. The physical address can
be randomized anywhere from the address vmlinux was linked to load up to
the maximum physical memory, possibly near 64T, while the virtual address
gets a random offset between the load address and
CONFIG_RANDOMIZE_BASE_MAX_OFFSET, which is then added to __START_KERNEL_map.
Relocation handling now depends only on the virtual address randomization:
if and only if the virtual address is randomized to a different value is
the delta added to the offsets in the kernel relocs.
This patchset depends on Yinghai's patch set, which cleans up kaslr and
builds an identity mapping so that the kernel can be reloaded above 4G.
His related patchset is available at the link below; apply this patchset
on top of it for testing and review.
https://git.kernel.org/cgit/linux/kernel/git/yinghai/linux-yinghai.git/log/?h=for-x86-4.0-rc2-aslr
or
git://git.kernel.org/pub/scm/linux/kernel/git/yinghai/linux-yinghai.git for-x86-4.0-rc2-aslr
Issue that needs discussion:
Currently KASLR only randomizes upward from the original address where the
bootloader put the kernel. However, some bootloaders may put the kernel at
a very high address, in which case a potentially huge region below that
address is never calculated and added as candidate slots. Yinghai uses a
patched grub2 to do this right now, putting the kernel near 5G, and he
suggested moving the random base down to 512M when it is bigger than that
value. But 512M seems a little arbitrary, because I can't see a theoretical
foundation for it. I am wondering about 16M instead, since Yinghai said all
related boot-stage data is already avoided safely. Any opinions?
v1->v2:
Thanks to Yinghai's patch which lets the kernel load above 4G in the boot
stage, the physical address can now be randomized anywhere, even near 64T.
v2->v3:
Rearranged the patchset to make it bisectable, per Yinghai's comment.
Fixed two code bugs, in process_e820_entry and slots_fetch_random.
Baoquan He (7):
x86, kaslr: Fix a bug that relocation can not be handled when kernel
is loaded above 2G
x86, kaslr: Introduce struct slot_area to manage randomization slot
info
x86, kaslr: Add two functions which will be used later
x86, kaslr: Introduce fetch_random_virt_offset to randomize the kernel
text mapping address
x86, kaslr: Randomize physical and virtual address of kernel
separately
x86, kaslr: Add support of kernel physical address randomization above
4G
x86, kaslr: Remove useless codes
arch/x86/boot/compressed/aslr.c | 215 +++++++++++++++++++++++++++++-----------
arch/x86/boot/compressed/misc.c | 41 +++++---
arch/x86/boot/compressed/misc.h | 24 +++--
3 files changed, 194 insertions(+), 86 deletions(-)
--
1.9.3
* [Patch v3 1/7] x86, kaslr: Fix a bug that relocation can not be handled when kernel is loaded above 2G
2015-03-08 5:38 [Patch v3 0/7] randomize kernel physical address and virtual address separately Baoquan He
@ 2015-03-08 5:38 ` Baoquan He
2015-03-08 5:38 ` [Patch v3 2/7] x86, kaslr: Introduce struct slot_area to manage randomization slot info Baoquan He
` (5 subsequent siblings)
6 siblings, 0 replies; 10+ messages in thread
From: Baoquan He @ 2015-03-08 5:38 UTC (permalink / raw)
To: keescook, yinghai, hpa, mingo, bp, matt.fleming, vgoyal, tglx
Cc: linux-kernel, Baoquan He
When processing 32-bit relocation tables, a local variable 'extended' is
used to calculate the physical address of each relocs entry. However, its
type is int, which is wide enough on i386 but not on x86_64. That is why
relocations could only be handled when the kernel was loaded below 2G;
otherwise the addition overflowed and hung the system.
Change the type to long, as the 32-bit inverse relocation processing
already does; this change is safe for i386 relocation handling too.
Signed-off-by: Baoquan He <bhe@redhat.com>
---
arch/x86/boot/compressed/misc.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/boot/compressed/misc.c b/arch/x86/boot/compressed/misc.c
index 51e9e54..f3ca33e 100644
--- a/arch/x86/boot/compressed/misc.c
+++ b/arch/x86/boot/compressed/misc.c
@@ -278,7 +278,7 @@ static void handle_relocations(void *output, unsigned long output_len)
* So we work backwards from the end of the decompressed image.
*/
for (reloc = output + output_len - sizeof(*reloc); *reloc; reloc--) {
- int extended = *reloc;
+ long extended = *reloc;
extended += map;
ptr = (unsigned long)extended;
--
1.9.3
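A minimal standalone illustration of the truncation this one-line fix
addresses (simplified and hypothetical values; this is not the real boot
code, where the delta also folds in __START_KERNEL_map):

```c
#include <stdint.h>

/* The reloc entry is widened into a local before the load delta is
 * added.  With a 32-bit int the sum is truncated once the kernel sits
 * above 2G; with long the full 64-bit address survives. */
static unsigned long apply_reloc_int(int32_t reloc, unsigned long map)
{
	int extended = reloc;          /* buggy: 32-bit intermediate */
	extended += map;               /* truncates when map > 2G */
	return (unsigned long)extended;
}

static unsigned long apply_reloc_long(int32_t reloc, unsigned long map)
{
	long extended = reloc;         /* fixed: 64-bit intermediate */
	extended += map;
	return (unsigned long)extended;
}
```

Below 2G both variants agree, which is why the bug stayed hidden until
the kernel could actually be loaded higher.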
* [Patch v3 2/7] x86, kaslr: Introduce struct slot_area to manage randomization slot info
2015-03-08 5:38 [Patch v3 0/7] randomize kernel physical address and virtual address separately Baoquan He
2015-03-08 5:38 ` [Patch v3 1/7] x86, kaslr: Fix a bug that relocation can not be handled when kernel is loaded above 2G Baoquan He
@ 2015-03-08 5:38 ` Baoquan He
2015-03-08 5:38 ` [Patch v3 3/7] x86, kaslr: Add two functions which will be used later Baoquan He
` (4 subsequent siblings)
6 siblings, 0 replies; 10+ messages in thread
From: Baoquan He @ 2015-03-08 5:38 UTC (permalink / raw)
To: keescook, yinghai, hpa, mingo, bp, matt.fleming, vgoyal, tglx
Cc: linux-kernel, Baoquan He
The kernel is expected to be randomly reloaded anywhere in the whole
physical memory area, which could reach nearly 64T. In that case there
could be about 4*1024*1024 randomization slots, so the old slots[] array
would cost too much memory, and storing the slot information entry by
entry would be inefficient.
Introduce struct slot_area to manage the randomization slot info of one
contiguous memory area, excluding the avoided regions; the slot_areas
array stores all slot area info. Since setup_data is a linked list that
may chain many entries together, excluding them can split RAM into many
smaller areas, so only the first 100 slot areas are kept if there are
too many.
Signed-off-by: Baoquan He <bhe@redhat.com>
---
arch/x86/boot/compressed/aslr.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/arch/x86/boot/compressed/aslr.c b/arch/x86/boot/compressed/aslr.c
index 34eb652..c452a60 100644
--- a/arch/x86/boot/compressed/aslr.c
+++ b/arch/x86/boot/compressed/aslr.c
@@ -232,8 +232,20 @@ static bool mem_avoid_overlap(struct mem_vector *img)
static unsigned long slots[CONFIG_RANDOMIZE_BASE_MAX_OFFSET /
CONFIG_PHYSICAL_ALIGN];
+
+struct slot_area {
+ unsigned long addr;
+ int num;
+};
+
+#define MAX_SLOT_AREA 100
+
+static struct slot_area slot_areas[MAX_SLOT_AREA];
+
static unsigned long slot_max;
+static unsigned long slot_area_index;
+
static void slots_append(unsigned long addr)
{
/* Overflowing the slots list should be impossible. */
--
1.9.3
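The memory-cost argument can be checked with trivial arithmetic. A sketch
only: the 64T ceiling and the 16M alignment are assumed values chosen to
match the ~4*1024*1024-slot figure in the changelog, not values read from
the kernel config:

```c
/* Number of entries the old flat slots[] array would need if it had to
 * cover all of physical memory at the given alignment. */
static unsigned long long slot_count(unsigned long long max_phys,
				     unsigned long long align)
{
	return max_phys / align;
}
```

At 64T with 16M alignment that is 4194304 entries, i.e. 32MB of unsigned
longs, versus the fixed 100-entry slot_areas[] array this patch adds.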
* [Patch v3 3/7] x86, kaslr: Add two functions which will be used later
2015-03-08 5:38 [Patch v3 0/7] randomize kernel physical address and virtual address separately Baoquan He
2015-03-08 5:38 ` [Patch v3 1/7] x86, kaslr: Fix a bug that relocation can not be handled when kernel is loaded above 2G Baoquan He
2015-03-08 5:38 ` [Patch v3 2/7] x86, kaslr: Introduce struct slot_area to manage randomization slot info Baoquan He
@ 2015-03-08 5:38 ` Baoquan He
2015-03-08 13:58 ` Alexander Kuleshov
2015-03-08 5:38 ` [Patch v3 4/7] x86, kaslr: Introduce fetch_random_virt_offset to randomize the kernel text mapping address Baoquan He
` (3 subsequent siblings)
6 siblings, 1 reply; 10+ messages in thread
From: Baoquan He @ 2015-03-08 5:38 UTC (permalink / raw)
To: keescook, yinghai, hpa, mingo, bp, matt.fleming, vgoyal, tglx
Cc: linux-kernel, Baoquan He
Add two functions, mem_min_overlap() and store_slot_info(), which will be
used later.
Given a memory region, mem_min_overlap() iterates over all avoided regions
to find the first one that overlaps with it.
store_slot_info() calculates the slot info of the passed-in region and
stores it into slot_areas[].
Signed-off-by: Baoquan He <bhe@redhat.com>
---
arch/x86/boot/compressed/aslr.c | 51 +++++++++++++++++++++++++++++++++++++++++
1 file changed, 51 insertions(+)
diff --git a/arch/x86/boot/compressed/aslr.c b/arch/x86/boot/compressed/aslr.c
index c452a60..afb1641 100644
--- a/arch/x86/boot/compressed/aslr.c
+++ b/arch/x86/boot/compressed/aslr.c
@@ -230,6 +230,40 @@ static bool mem_avoid_overlap(struct mem_vector *img)
return false;
}
+static unsigned long
+mem_min_overlap(struct mem_vector *img, struct mem_vector *out)
+{
+ int i;
+ struct setup_data *ptr;
+ unsigned long min = img->start + img->size;
+
+ for (i = 0; i < MEM_AVOID_MAX; i++) {
+ if (mem_overlaps(img, &mem_avoid[i]) &&
+ (mem_avoid[i].start < min)) {
+ *out = mem_avoid[i];
+ min = mem_avoid[i].start;
+ }
+ }
+
+ /* Check all entries in the setup_data linked list. */
+ ptr = (struct setup_data *)(unsigned long)real_mode->hdr.setup_data;
+ while (ptr) {
+ struct mem_vector avoid;
+
+ avoid.start = (unsigned long)ptr;
+ avoid.size = sizeof(*ptr) + ptr->len;
+
+ if (mem_overlaps(img, &avoid) && (avoid.start < min)) {
+ *out = avoid;
+ min = avoid.start;
+ }
+
+ ptr = (struct setup_data *)(unsigned long)ptr->next;
+ }
+
+ return min;
+}
+
static unsigned long slots[CONFIG_RANDOMIZE_BASE_MAX_OFFSET /
CONFIG_PHYSICAL_ALIGN];
@@ -246,6 +280,23 @@ static unsigned long slot_max;
static unsigned long slot_area_index;
+static void store_slot_info(struct mem_vector *region, unsigned long image_size)
+{
+ struct slot_area slot_area;
+
+ slot_area.addr = region->start;
+ if (image_size <= CONFIG_PHYSICAL_ALIGN)
+ slot_area.num = region->size / CONFIG_PHYSICAL_ALIGN;
+ else
+ slot_area.num = (region->size - image_size) /
+ CONFIG_PHYSICAL_ALIGN + 1;
+
+ if (slot_area.num > 0) {
+ slot_areas[slot_area_index++] = slot_area;
+ slot_max += slot_area.num;
+ }
+}
+
static void slots_append(unsigned long addr)
{
/* Overflowing the slots list should be impossible. */
--
1.9.3
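The counting rule used by store_slot_info() can be sketched on its own.
Here 'align' stands in for CONFIG_PHYSICAL_ALIGN, and the guard for an
undersized region is added defensively for the standalone sketch (the
real caller checks the region size beforehand):

```c
/* Number of align-sized start positions inside a region from which an
 * image of image_size still fits entirely within the region. */
static unsigned long slot_num(unsigned long region_size,
			      unsigned long image_size,
			      unsigned long align)
{
	if (region_size < image_size)
		return 0;                    /* defensive; callers pre-check */
	if (image_size <= align)
		return region_size / align;  /* every aligned start fits */
	return (region_size - image_size) / align + 1;
}
```

For example, a 64M region, a 20M image, and 16M alignment give three
valid starts (offsets 0, 16M, and 32M).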
* [Patch v3 4/7] x86, kaslr: Introduce fetch_random_virt_offset to randomize the kernel text mapping address
2015-03-08 5:38 [Patch v3 0/7] randomize kernel physical address and virtual address separately Baoquan He
` (2 preceding siblings ...)
2015-03-08 5:38 ` [Patch v3 3/7] x86, kaslr: Add two functions which will be used later Baoquan He
@ 2015-03-08 5:38 ` Baoquan He
2015-03-08 5:38 ` [Patch v3 5/7] x86, kaslr: Randomize physical and virtual address of kernel separately Baoquan He
` (2 subsequent siblings)
6 siblings, 0 replies; 10+ messages in thread
From: Baoquan He @ 2015-03-08 5:38 UTC (permalink / raw)
To: keescook, yinghai, hpa, mingo, bp, matt.fleming, vgoyal, tglx
Cc: linux-kernel, Baoquan He
KASLR extended the kernel text mapping region size from 512M to 1G,
namely CONFIG_RANDOMIZE_BASE_MAX_OFFSET. This means the kernel text can
be mapped into the following region:
[__START_KERNEL_map + LOAD_PHYSICAL_ADDR, __START_KERNEL_map + 1G]
Introduce find_random_virt_offset() to get a random value between
LOAD_PHYSICAL_ADDR and CONFIG_RANDOMIZE_BASE_MAX_OFFSET. This random
value is added to __START_KERNEL_map to get the starting address of the
kernel text mapping. Since a slot can be anywhere in this region, i.e.
the whole region is one independent slot area, deriving a slot from the
random value is simple.
Signed-off-by: Baoquan He <bhe@redhat.com>
---
arch/x86/boot/compressed/aslr.c | 22 ++++++++++++++++++++++
1 file changed, 22 insertions(+)
diff --git a/arch/x86/boot/compressed/aslr.c b/arch/x86/boot/compressed/aslr.c
index afb1641..0ca4c5b 100644
--- a/arch/x86/boot/compressed/aslr.c
+++ b/arch/x86/boot/compressed/aslr.c
@@ -382,6 +382,28 @@ static unsigned long find_random_addr(unsigned long minimum,
return slots_fetch_random();
}
+static unsigned long find_random_virt_offset(unsigned long minimum,
+ unsigned long image_size)
+{
+ unsigned long slot_num, random;
+
+ /* Make sure minimum is aligned. */
+ minimum = ALIGN(minimum, CONFIG_PHYSICAL_ALIGN);
+
+ if (image_size <= CONFIG_PHYSICAL_ALIGN)
+ slot_num = (CONFIG_RANDOMIZE_BASE_MAX_OFFSET - minimum) /
+ CONFIG_PHYSICAL_ALIGN;
+ else
+ slot_num = (CONFIG_RANDOMIZE_BASE_MAX_OFFSET -
+ minimum - image_size) /
+ CONFIG_PHYSICAL_ALIGN + 1;
+
+ random = get_random_long() % slot_num;
+
+ return random * CONFIG_PHYSICAL_ALIGN + minimum;
+}
+
+
static void add_kaslr_setup_data(struct boot_params *params, __u8 enabled)
{
struct setup_data *data;
--
1.9.3
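The arithmetic in find_random_virt_offset() can be sketched separately.
The constants are assumptions (1G max offset, 2M alignment), and 'idx'
stands in for the get_random_long() draw:

```c
#define MAX_OFF  (1UL << 30)   /* assumed CONFIG_RANDOMIZE_BASE_MAX_OFFSET */
#define ALIGN_2M (2UL << 20)   /* assumed CONFIG_PHYSICAL_ALIGN */

/* Map a slot index to an aligned virtual offset such that the whole
 * image still ends at or below MAX_OFF. */
static unsigned long virt_offset(unsigned long minimum,
				 unsigned long image_size,
				 unsigned long idx)
{
	unsigned long min_aligned = (minimum + ALIGN_2M - 1) & ~(ALIGN_2M - 1);
	unsigned long slots;

	if (image_size <= ALIGN_2M)
		slots = (MAX_OFF - min_aligned) / ALIGN_2M;
	else
		slots = (MAX_OFF - min_aligned - image_size) / ALIGN_2M + 1;

	return (idx % slots) * ALIGN_2M + min_aligned;
}
```

Slot index 0 yields the aligned minimum, and the highest index still
keeps the image end within the 1G window.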
* [Patch v3 5/7] x86, kaslr: Randomize physical and virtual address of kernel separately
2015-03-08 5:38 [Patch v3 0/7] randomize kernel physical address and virtual address separately Baoquan He
` (3 preceding siblings ...)
2015-03-08 5:38 ` [Patch v3 4/7] x86, kaslr: Introduce fetch_random_virt_offset to randomize the kernel text mapping address Baoquan He
@ 2015-03-08 5:38 ` Baoquan He
2015-03-08 5:38 ` [Patch v3 6/7] x86, kaslr: Add support of kernel physical address randomization above 4G Baoquan He
2015-03-08 5:38 ` [Patch v3 7/7] x86, kaslr: Remove useless codes Baoquan He
6 siblings, 0 replies; 10+ messages in thread
From: Baoquan He @ 2015-03-08 5:38 UTC (permalink / raw)
To: keescook, yinghai, hpa, mingo, bp, matt.fleming, vgoyal, tglx
Cc: linux-kernel, Baoquan He
On x86_64, the old KASLR implementation randomized only the physical
address of kernel loading, then calculated the delta between the physical
address vmlinux was linked to load at and the address where it was finally
loaded. If the delta was not 0, i.e. the kernel was actually decompressed
at a new physical address, relocation handling was needed: the delta was
added to the offset of each kernel symbol relocation, which moved the
kernel text mapping by the same amount.
This patch changes that behavior. Both the physical address where the
kernel is decompressed and the virtual address where the kernel text is
mapped are randomized, and relocation handling depends only on the virtual
address randomization: if and only if the virtual address is randomized to
a different value do we add the delta to the offsets in the kernel relocs.
Note that, up to this point in the series, neither the virtual offset nor
the physical address randomization can exceed
CONFIG_RANDOMIZE_BASE_MAX_OFFSET, namely 1G.
Signed-off-by: Baoquan He <bhe@redhat.com>
---
arch/x86/boot/compressed/aslr.c | 48 +++++++++++++++++++++--------------------
arch/x86/boot/compressed/misc.c | 39 ++++++++++++++++++++-------------
arch/x86/boot/compressed/misc.h | 24 +++++++++++----------
3 files changed, 62 insertions(+), 49 deletions(-)
diff --git a/arch/x86/boot/compressed/aslr.c b/arch/x86/boot/compressed/aslr.c
index 0ca4c5b..5f9f174 100644
--- a/arch/x86/boot/compressed/aslr.c
+++ b/arch/x86/boot/compressed/aslr.c
@@ -365,7 +365,7 @@ static void process_e820_entry(struct e820entry *entry,
}
}
-static unsigned long find_random_addr(unsigned long minimum,
+static unsigned long find_random_phy_addr(unsigned long minimum,
unsigned long size)
{
int i;
@@ -425,49 +425,51 @@ static void add_kaslr_setup_data(struct boot_params *params, __u8 enabled)
}
-unsigned char *choose_kernel_location(struct boot_params *params,
- unsigned char *input,
- unsigned long input_size,
- unsigned char *output,
- unsigned long init_size)
+void choose_kernel_location(struct boot_params *params,
+ unsigned char *input,
+ unsigned long input_size,
+ unsigned char **output,
+ unsigned long init_size,
+ unsigned char **virt_offset)
{
- unsigned long choice = (unsigned long)output;
unsigned long random;
+ *virt_offset = (unsigned char *)LOAD_PHYSICAL_ADDR;
#ifdef CONFIG_HIBERNATION
if (!cmdline_find_option_bool("kaslr")) {
debug_putstr("KASLR disabled by default...\n");
add_kaslr_setup_data(params, 0);
- goto out;
+ return;
}
#else
if (cmdline_find_option_bool("nokaslr")) {
debug_putstr("KASLR disabled by cmdline...\n");
add_kaslr_setup_data(params, 0);
- goto out;
+ return;
}
#endif
add_kaslr_setup_data(params, 1);
/* Record the various known unsafe memory ranges. */
mem_avoid_init((unsigned long)input, input_size,
- (unsigned long)output, init_size);
+ (unsigned long)*output, init_size);
/* Walk e820 and find a random address. */
- random = find_random_addr(choice, init_size);
- if (!random) {
+ random = find_random_phy_addr((unsigned long)*output, init_size);
+ if (!random)
debug_putstr("KASLR could not find suitable E820 region...\n");
- goto out;
+ else {
+ if ((unsigned long)*output != random) {
+ fill_pagetable(random, init_size);
+ switch_pagetable();
+ *output = (unsigned char *)random;
+ }
}
- /* Always enforce the minimum. */
- if (random < choice)
- goto out;
-
- choice = random;
-
- fill_pagetable(choice, init_size);
- switch_pagetable();
-out:
- return (unsigned char *)choice;
+ /*
+ * Get a random address between LOAD_PHYSICAL_ADDR and
+ * CONFIG_RANDOMIZE_BASE_MAX_OFFSET
+ */
+ random = find_random_virt_offset(LOAD_PHYSICAL_ADDR, init_size);
+ *virt_offset = (unsigned char *)random;
}
diff --git a/arch/x86/boot/compressed/misc.c b/arch/x86/boot/compressed/misc.c
index f3ca33e..78380af 100644
--- a/arch/x86/boot/compressed/misc.c
+++ b/arch/x86/boot/compressed/misc.c
@@ -231,7 +231,8 @@ static void error(char *x)
}
#if CONFIG_X86_NEED_RELOCS
-static void handle_relocations(void *output, unsigned long output_len)
+static void handle_relocations(void *output, unsigned long output_len,
+ void *virt_offset)
{
int *reloc;
unsigned long delta, map, ptr;
@@ -243,11 +244,6 @@ static void handle_relocations(void *output, unsigned long output_len)
* and where it was actually loaded.
*/
delta = min_addr - LOAD_PHYSICAL_ADDR;
- if (!delta) {
- debug_putstr("No relocation needed... ");
- return;
- }
- debug_putstr("Performing relocations... ");
/*
* The kernel contains a table of relocation addresses. Those
@@ -258,6 +254,22 @@ static void handle_relocations(void *output, unsigned long output_len)
*/
map = delta - __START_KERNEL_map;
+
+
+ /*
+ * 32-bit always performs relocations. 64-bit relocations are only
+ * needed if kASLR has chosen a different starting address offset
+ * from __START_KERNEL_map.
+ */
+ if (IS_ENABLED(CONFIG_X86_64))
+ delta = (unsigned long)virt_offset - LOAD_PHYSICAL_ADDR;
+
+ if (!delta) {
+ debug_putstr("No relocation needed... ");
+ return;
+ }
+ debug_putstr("Performing relocations... ");
+
/*
* Process relocations: 32 bit relocations first then 64 bit after.
* Three sets of binary relocations are added to the end of the kernel
@@ -311,7 +323,8 @@ static void handle_relocations(void *output, unsigned long output_len)
#endif
}
#else
-static inline void handle_relocations(void *output, unsigned long output_len)
+static inline void handle_relocations(void *output, unsigned long output_len,
+ void *virt_offset)
{ }
#endif
@@ -374,6 +387,7 @@ asmlinkage __visible void *decompress_kernel(void *rmode, memptr heap,
{
unsigned char *output_orig = output;
unsigned long init_size;
+ unsigned char *virt_offset;
real_mode = rmode;
@@ -402,8 +416,8 @@ asmlinkage __visible void *decompress_kernel(void *rmode, memptr heap,
* The memory hole needed for the kernel is init_size for running
* and init_size is bigger than output_len always.
*/
- output = choose_kernel_location(real_mode, input_data, input_len,
- output, init_size);
+ choose_kernel_location(real_mode, input_data, input_len,
+ &output, init_size, &virt_offset);
/* Validate memory location choices. */
if ((unsigned long)output & (MIN_KERNEL_ALIGN - 1))
@@ -423,12 +437,7 @@ asmlinkage __visible void *decompress_kernel(void *rmode, memptr heap,
debug_putstr("\nDecompressing Linux... ");
decompress(input_data, input_len, NULL, NULL, output, NULL, error);
parse_elf(output);
- /*
- * 32-bit always performs relocations. 64-bit relocations are only
- * needed if kASLR has chosen a different load address.
- */
- if (!IS_ENABLED(CONFIG_X86_64) || output != output_orig)
- handle_relocations(output, output_len);
+ handle_relocations(output, output_len, virt_offset);
debug_putstr("done.\nBooting the kernel.\n");
return output;
}
diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
index 23156e7..fe93bc4 100644
--- a/arch/x86/boot/compressed/misc.h
+++ b/arch/x86/boot/compressed/misc.h
@@ -57,22 +57,24 @@ int cmdline_find_option_bool(const char *option);
#if CONFIG_RANDOMIZE_BASE
/* aslr.c */
-unsigned char *choose_kernel_location(struct boot_params *params,
- unsigned char *input,
- unsigned long input_size,
- unsigned char *output,
- unsigned long init_size);
+void choose_kernel_location(struct boot_params *params,
+ unsigned char *input,
+ unsigned long input_size,
+ unsigned char **output,
+ unsigned long init_size,
+ unsigned char **virt_offset);
/* cpuflags.c */
bool has_cpuflag(int flag);
#else
static inline
-unsigned char *choose_kernel_location(struct boot_params *params,
- unsigned char *input,
- unsigned long input_size,
- unsigned char *output,
- unsigned long init_size)
+void choose_kernel_location(struct boot_params *params,
+ unsigned char *input,
+ unsigned long input_size,
+ unsigned char **output,
+ unsigned long init_size,
+ unsigned char **virt_offset)
{
- return output;
+ return;
}
#endif
--
1.9.3
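The new relocation decision can be sketched as a small predicate. The
names here are hypothetical stand-ins: is_64bit plays the role of
IS_ENABLED(CONFIG_X86_64), and the other parameters model the variables
the real handle_relocations() works with:

```c
#include <stdbool.h>

/* On 64-bit the relocation delta comes from the virtual offset alone;
 * on 32-bit it is still the physical load delta, so relocations
 * effectively always run there.  A zero return means "skip". */
static unsigned long reloc_delta(bool is_64bit, unsigned long phys_min,
				 unsigned long virt_offset,
				 unsigned long load_phys_addr)
{
	unsigned long delta = phys_min - load_phys_addr;

	if (is_64bit)
		delta = virt_offset - load_phys_addr;
	return delta;
}
```

So a kernel decompressed near 5G but mapped at the default virtual offset
now skips relocation entirely, which the old load-address test could not
express.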
* [Patch v3 6/7] x86, kaslr: Add support of kernel physical address randomization above 4G
2015-03-08 5:38 [Patch v3 0/7] randomize kernel physical address and virtual address separately Baoquan He
` (4 preceding siblings ...)
2015-03-08 5:38 ` [Patch v3 5/7] x86, kaslr: Randomize physical and virtual address of kernel separately Baoquan He
@ 2015-03-08 5:38 ` Baoquan He
2015-03-08 5:38 ` [Patch v3 7/7] x86, kaslr: Remove useless codes Baoquan He
6 siblings, 0 replies; 10+ messages in thread
From: Baoquan He @ 2015-03-08 5:38 UTC (permalink / raw)
To: keescook, yinghai, hpa, mingo, bp, matt.fleming, vgoyal, tglx
Cc: linux-kernel, Baoquan He
In the KASLR implementation, process_e820_entry() and slots_fetch_random()
do the main work: process_e820_entry() is responsible for storing slot
information, and slots_fetch_random() takes care of fetching it. To add
support for kernel physical address randomization above 4G, this patch
changes both functions to work on the new slot_area data structure. Now
the kernel can be reloaded and decompressed anywhere in the whole physical
memory, even near 64T.
Signed-off-by: Baoquan He <bhe@redhat.com>
---
arch/x86/boot/compressed/aslr.c | 68 ++++++++++++++++++++++++++++++-----------
1 file changed, 51 insertions(+), 17 deletions(-)
diff --git a/arch/x86/boot/compressed/aslr.c b/arch/x86/boot/compressed/aslr.c
index 5f9f174..2cf8f82 100644
--- a/arch/x86/boot/compressed/aslr.c
+++ b/arch/x86/boot/compressed/aslr.c
@@ -309,27 +309,40 @@ static void slots_append(unsigned long addr)
static unsigned long slots_fetch_random(void)
{
+ unsigned long random;
+ int i;
+
/* Handle case of no slots stored. */
if (slot_max == 0)
return 0;
- return slots[get_random_long() % slot_max];
+ random = get_random_long() % slot_max;
+
+ for (i = 0; i < slot_area_index; i++) {
+ if (random >= slot_areas[i].num) {
+ random -= slot_areas[i].num;
+ continue;
+ }
+ return slot_areas[i].addr + random * CONFIG_PHYSICAL_ALIGN;
+ }
+
+ if (i == slot_area_index)
+ debug_putstr("Something wrong happened in slots_fetch_random()...\n");
+ return 0;
}
static void process_e820_entry(struct e820entry *entry,
unsigned long minimum,
unsigned long image_size)
{
- struct mem_vector region, img;
+ struct mem_vector region, out;
+ struct slot_area slot_area;
+ unsigned long min, start_orig;
/* Skip non-RAM entries. */
if (entry->type != E820_RAM)
return;
- /* Ignore entries entirely above our maximum. */
- if (entry->addr >= CONFIG_RANDOMIZE_BASE_MAX_OFFSET)
- return;
-
/* Ignore entries entirely below our minimum. */
if (entry->addr + entry->size < minimum)
return;
@@ -337,10 +350,17 @@ static void process_e820_entry(struct e820entry *entry,
region.start = entry->addr;
region.size = entry->size;
+repeat:
+ start_orig = region.start;
+
/* Potentially raise address to minimum location. */
if (region.start < minimum)
region.start = minimum;
+ /* Return if slot area array is full */
+ if (slot_area_index == MAX_SLOT_AREA)
+ return;
+
/* Potentially raise address to meet alignment requirements. */
region.start = ALIGN(region.start, CONFIG_PHYSICAL_ALIGN);
@@ -349,20 +369,30 @@ static void process_e820_entry(struct e820entry *entry,
return;
/* Reduce size by any delta from the original address. */
- region.size -= region.start - entry->addr;
+ region.size -= region.start - start_orig;
- /* Reduce maximum size to fit end of image within maximum limit. */
- if (region.start + region.size > CONFIG_RANDOMIZE_BASE_MAX_OFFSET)
- region.size = CONFIG_RANDOMIZE_BASE_MAX_OFFSET - region.start;
+ /* Return if region can't contain decompressed kernel */
+ if (region.size < image_size)
+ return;
- /* Walk each aligned slot and check for avoided areas. */
- for (img.start = region.start, img.size = image_size ;
- mem_contains(&region, &img) ;
- img.start += CONFIG_PHYSICAL_ALIGN) {
- if (mem_avoid_overlap(&img))
- continue;
- slots_append(img.start);
+ if (!mem_avoid_overlap(&region)) {
+ store_slot_info(&region, image_size);
+ return;
}
+
+ min = mem_min_overlap(&region, &out);
+
+ if (min > region.start + image_size) {
+ struct mem_vector tmp;
+
+ tmp.start = region.start;
+ tmp.size = min - region.start;
+ store_slot_info(&tmp, image_size);
+ }
+
+ region.size -= out.start - region.start + out.size;
+ region.start = out.start + out.size;
+ goto repeat;
}
static unsigned long find_random_phy_addr(unsigned long minimum,
@@ -377,6 +407,10 @@ static unsigned long find_random_phy_addr(unsigned long minimum,
/* Verify potential e820 positions, appending to slots list. */
for (i = 0; i < real_mode->e820_entries; i++) {
process_e820_entry(&real_mode->e820_map[i], minimum, size);
+ if (slot_area_index == MAX_SLOT_AREA) {
+ debug_putstr("Stop processing e820 since slot_areas is full...\n");
+ break;
+ }
}
return slots_fetch_random();
--
1.9.3
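The weighted lookup that the new slots_fetch_random() performs can be
sketched standalone. The demo values are assumed, and 'align' stands in
for CONFIG_PHYSICAL_ALIGN:

```c
struct area {
	unsigned long addr;
	int num;
};

/* Map one random index in [0, total slots) to an address by walking
 * the areas and peeling off each area's slot count in turn. */
static unsigned long pick_slot(const struct area *areas, int n,
			       unsigned long idx, unsigned long align)
{
	for (int i = 0; i < n; i++) {
		if (idx >= (unsigned long)areas[i].num) {
			idx -= areas[i].num;
			continue;
		}
		return areas[i].addr + idx * align;
	}
	return 0;   /* index out of range: should not happen */
}

/* Example: two areas holding 2 and 3 slots, 2M alignment. */
static unsigned long demo_pick(unsigned long idx)
{
	static const struct area areas[] = {
		{ 0x1000000UL, 2 },
		{ 0x8000000UL, 3 },
	};
	return pick_slot(areas, 2, idx, 0x200000UL);
}
```

Index 0 and 1 land in the first area, 2 through 4 in the second, so every
slot stays equally likely even though only the areas are stored.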
* [Patch v3 7/7] x86, kaslr: Remove useless codes
2015-03-08 5:38 [Patch v3 0/7] randomize kernel physical address and virtual address separately Baoquan He
` (5 preceding siblings ...)
2015-03-08 5:38 ` [Patch v3 6/7] x86, kaslr: Add support of kernel physical address randomization above 4G Baoquan He
@ 2015-03-08 5:38 ` Baoquan He
6 siblings, 0 replies; 10+ messages in thread
From: Baoquan He @ 2015-03-08 5:38 UTC (permalink / raw)
To: keescook, yinghai, hpa, mingo, bp, matt.fleming, vgoyal, tglx
Cc: linux-kernel, Baoquan He
Several auxiliary functions and the slots[] array are no longer needed
now that struct slot_area stores KASLR's slot info. Remove them in this
patch.
Signed-off-by: Baoquan He <bhe@redhat.com>
---
arch/x86/boot/compressed/aslr.c | 24 ------------------------
1 file changed, 24 deletions(-)
diff --git a/arch/x86/boot/compressed/aslr.c b/arch/x86/boot/compressed/aslr.c
index 2cf8f82..ce6ee28 100644
--- a/arch/x86/boot/compressed/aslr.c
+++ b/arch/x86/boot/compressed/aslr.c
@@ -126,17 +126,6 @@ struct mem_vector {
#define MEM_AVOID_MAX 4
static struct mem_vector mem_avoid[MEM_AVOID_MAX];
-static bool mem_contains(struct mem_vector *region, struct mem_vector *item)
-{
- /* Item at least partially before region. */
- if (item->start < region->start)
- return false;
- /* Item at least partially after region. */
- if (item->start + item->size > region->start + region->size)
- return false;
- return true;
-}
-
static bool mem_overlaps(struct mem_vector *one, struct mem_vector *two)
{
/* Item one is entirely before item two. */
@@ -264,9 +253,6 @@ mem_min_overlap(struct mem_vector *img, struct mem_vector *out)
return min;
}
-static unsigned long slots[CONFIG_RANDOMIZE_BASE_MAX_OFFSET /
- CONFIG_PHYSICAL_ALIGN];
-
struct slot_area {
unsigned long addr;
int num;
@@ -297,16 +283,6 @@ static void store_slot_info(struct mem_vector *region, unsigned long image_size)
}
}
-static void slots_append(unsigned long addr)
-{
- /* Overflowing the slots list should be impossible. */
- if (slot_max >= CONFIG_RANDOMIZE_BASE_MAX_OFFSET /
- CONFIG_PHYSICAL_ALIGN)
- return;
-
- slots[slot_max++] = addr;
-}
-
static unsigned long slots_fetch_random(void)
{
unsigned long random;
--
1.9.3
* Re: [Patch v3 3/7] x86, kaslr: Add two functions which will be used later
2015-03-08 5:38 ` [Patch v3 3/7] x86, kaslr: Add two functions which will be used later Baoquan He
@ 2015-03-08 13:58 ` Alexander Kuleshov
2015-03-09 12:40 ` Baoquan He
0 siblings, 1 reply; 10+ messages in thread
From: Alexander Kuleshov @ 2015-03-08 13:58 UTC (permalink / raw)
To: Baoquan He
Cc: keescook, yinghai, hpa, mingo, bp, matt.fleming, vgoyal, tglx,
linux-kernel
Hello,
It may be necessary to add comments saying that they will be used later,
to prevent patches like 'remove unused......'
2015-03-08 11:38 GMT+06:00 Baoquan He <bhe@redhat.com>:
>
> Add two functions mem_min_overlap() and store_slot_info() which will be
> used later.
* Re: [Patch v3 3/7] x86, kaslr: Add two functions which will be used later
2015-03-08 13:58 ` Alexander Kuleshov
@ 2015-03-09 12:40 ` Baoquan He
0 siblings, 0 replies; 10+ messages in thread
From: Baoquan He @ 2015-03-09 12:40 UTC (permalink / raw)
To: Alexander Kuleshov
Cc: keescook, yinghai, hpa, mingo, bp, matt.fleming, vgoyal, tglx,
linux-kernel
On 03/08/15 at 07:58pm, Alexander Kuleshov wrote:
> Hello,
>
> May be necessary to add comments that they will be used later, to
> prevent patches like 'removed unused......'
Hi,
Thanks for your comment. Do you mean adding a comment above the function
definition, then removing it later when the function is called?
Thanks
Baoquan