LKML Archive on lore.kernel.org
From: Vitaly Kuznetsov <firstname.lastname@example.org>
To: Maxim Levitsky <email@example.com>
Cc: firstname.lastname@example.org, Paolo Bonzini <email@example.com>,
Sean Christopherson <firstname.lastname@example.org>,
Wanpeng Li <email@example.com>,
Jim Mattson <firstname.lastname@example.org>,
"Dr. David Alan Gilbert" <email@example.com>,
Nitesh Narayan Lal <firstname.lastname@example.org>,
Eduardo Habkost <email@example.com>
Subject: Re: [PATCH v2 4/4] KVM: x86: Fix stack-out-of-bounds memory access from ioapic_write_indirect()
Date: Wed, 25 Aug 2021 10:21:04 +0200 [thread overview]
Message-ID: <firstname.lastname@example.org> (raw)
Maxim Levitsky <email@example.com> writes:
> On Tue, 2021-08-24 at 16:42 +0200, Vitaly Kuznetsov wrote:
> Not a classical review, but I did some digital archaeology with this one,
> trying to understand what is going on:
> I think the 16-bit vcpu bitmap comes from the fact that the IOAPIC spec states
> it can address up to 16 CPUs in physical destination mode.
> In logical destination mode, assuming flat addressing and that logical id = 1 << physical id
> (which KVM hardcodes), it is also only possible to address 8 CPUs.
> However(!) in flat cluster mode, the logical APIC ID is split in two:
> we have 16 clusters of 4 CPUs each, so it is possible to address 64 CPUs,
> and unlike the logical ID, KVM does honour the cluster ID,
> thus one can assign, say, cluster ID 0 to any vCPU.
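The two logical formats described above can be sketched as follows. This is an illustrative standalone snippet, not KVM code; the helper names are mine:

```c
#include <stdint.h>

/* Sketch of the xAPIC logical destination formats (not KVM code).
 * Flat model: the logical ID is a one-hot bitmask, one bit per CPU,
 * so at most 8 CPUs are addressable. */
static uint8_t flat_logical_id(unsigned int phys_id)
{
	return phys_id < 8 ? (uint8_t)(1u << phys_id) : 0;
}

/* Cluster model: bits 7:4 carry the cluster ID (16 clusters) and
 * bits 3:0 are a one-hot mask of the 4 CPUs within the cluster,
 * so 16 * 4 = 64 CPUs are addressable. */
static uint8_t cluster_logical_id(unsigned int cluster, unsigned int cpu)
{
	return (uint8_t)(((cluster & 0xfu) << 4) | (1u << (cpu & 0x3u)));
}
```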
> Let's look at ioapic_write_indirect.
> It does:
> -> bitmap_zero(&vcpu_bitmap, 16);
> -> kvm_bitmap_or_dest_vcpus(ioapic->kvm, &irq, &vcpu_bitmap);
> -> kvm_make_scan_ioapic_request_mask(ioapic->kvm, &vcpu_bitmap); // use of the above bitmap
> When we call kvm_bitmap_or_dest_vcpus, we can already overflow the bitmap,
> since we pass all 8 bits of the destination even when it is physical.
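To illustrate the arithmetic behind that overflow (a standalone sketch with hypothetical helper names, not KVM code): a bitmap declared for 16 bits occupies a single unsigned long on x86_64, but an 8-bit destination can name bit 255, which lands three words past the variable:

```c
#include <stddef.h>

#define BITS_PER_LONG (8 * sizeof(unsigned long))

/* Number of unsigned longs needed to hold 'bits' bits, i.e. what
 * a DECLARE_BITMAP()-style declaration would allocate on the stack. */
static size_t bitmap_words(size_t bits)
{
	return (bits + BITS_PER_LONG - 1) / BITS_PER_LONG;
}

/* How many words past the declared storage a set_bit(dest, ...) on a
 * bitmap declared for 'declared_bits' bits would write;
 * 0 means the access stays in bounds. */
static size_t words_overflowed(unsigned int dest, size_t declared_bits)
{
	size_t word = dest / BITS_PER_LONG;
	size_t have = bitmap_words(declared_bits);

	return word >= have ? word - have + 1 : 0;
}
```

With a 16-bit declaration, any destination up to 63 still fits in the single word, which is why the bug is only hit by larger destination values.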
> Let's examine kvm_bitmap_or_dest_vcpus:
> -> It calls kvm_apic_map_get_dest_lapic, which
> -> for physical destinations just sets the bit in the bitmap, which can overflow
> if we pass it an 8-bit destination (i.e. reserved bits plus the 4-bit destination).
> -> For logical APIC IDs, it seems to truncate the result to 16 bits, which isn't correct
> as I explained above, but should not overflow the result.
> -> If the call to kvm_apic_map_get_dest_lapic fails, it iterates over all vCPUs and tries
> to match the destination; this can overflow as well.
> I also don't like that ioapic_write_indirect calls kvm_bitmap_or_dest_vcpus twice,
> the second time with 'old_dest_id'.
> I am not 100% sure why old_dest_id/old_dest_mode are needed, as I don't see anything in the
> function changing them.
> I think only the guest can change them, so maybe the code deals with the guest changing them
> while the code is running on a different vCPU?
> The commit that introduced this code is 7ee30bc132c683d06a6d9e360e39e483e3990708
> Nitesh Narayan Lal, maybe you remember something about it?
Before posting this patch I've contacted Nitesh privately, he's
currently on vacation but will take a look when he gets back.
> Also, I worry a lot about the other callers of kvm_apic_map_get_dest_lapic.
> It is also called from kvm_irq_delivery_to_apic_fast and from kvm_intr_is_single_vcpu_fast,
> and both also seem to use 'unsigned long' for the bitmap and then only use 16 bits of it.
> I haven't dug into them, but they don't seem to be IOAPIC-related, and I think
> they can overwrite the stack as well.
I'm no expert in this code, but when writing the patch I somehow
convinced myself that a single unsigned long is always enough. I think
that for cluster mode 'bitmap' needs 64 bits (and it is *not* a
vcpu_bitmap, we need to convert). I may be completely wrong, of course,
but in any case this is a different issue. In ioapic_write_indirect() we
have 'vcpu_bitmap', which should certainly be longer than 64 bits.
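A sketch of the sizing this implies (illustrative, not the actual patch; the vCPU limit below is an assumed value, since the real KVM_MAX_VCPUS varies by kernel version): the on-stack bitmap must be dimensioned by the maximum vCPU count rather than by the 16 redirection-table entries:

```c
#include <stddef.h>

#define BITS_PER_LONG	(8 * sizeof(unsigned long))
/* Illustrative value only; not taken from any specific kernel. */
#define MAX_VCPUS	1024

/* Mirrors what a DECLARE_BITMAP(vcpu_bitmap, MAX_VCPUS) declaration
 * expands to: an array of unsigned longs, one bit per vCPU. */
static unsigned long vcpu_bitmap[(MAX_VCPUS + BITS_PER_LONG - 1) / BITS_PER_LONG];

/* Number of words the properly sized bitmap occupies. */
static size_t vcpu_bitmap_words(void)
{
	return sizeof(vcpu_bitmap) / sizeof(vcpu_bitmap[0]);
}
```

With 1024 vCPUs that is 16 words instead of the single word a 16-bit declaration provides.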
Thread overview: 23+ messages
2021-08-23 14:30 [PATCH v2 0/4] KVM: Various fixes and improvements around kicking vCPUs Vitaly Kuznetsov
2021-08-23 14:30 ` [PATCH v2 1/4] KVM: Clean up benign vcpu->cpu data races when " Vitaly Kuznetsov
2021-08-23 14:30 ` [PATCH v2 2/4] KVM: Guard cpusmask NULL check with CONFIG_CPUMASK_OFFSTACK Vitaly Kuznetsov
2021-08-23 14:30 ` [PATCH v2 3/4] KVM: Optimize kvm_make_vcpus_request_mask() a bit Vitaly Kuznetsov
2021-08-23 14:30 ` [PATCH v2 4/4] KVM: x86: Fix stack-out-of-bounds memory access from ioapic_write_indirect() Vitaly Kuznetsov
2021-08-23 18:58 ` Eduardo Habkost
2021-08-24 7:13 ` Vitaly Kuznetsov
2021-08-24 14:23 ` Eduardo Habkost
2021-08-24 14:42 ` Vitaly Kuznetsov
2021-08-24 16:07 ` Maxim Levitsky
2021-08-24 17:40 ` Sean Christopherson
2021-08-25 8:26 ` Vitaly Kuznetsov
2021-08-25 8:21 ` Vitaly Kuznetsov [this message]
2021-08-25 9:11 ` Maxim Levitsky
2021-08-25 9:43 ` Vitaly Kuznetsov
2021-08-25 10:41 ` Maxim Levitsky
2021-08-25 13:19 ` Eduardo Habkost
2021-08-26 12:40 ` Vitaly Kuznetsov
2021-08-26 14:52 ` Eduardo Habkost
2021-08-26 18:01 ` Sean Christopherson
2021-08-26 18:13 ` Eduardo Habkost
2021-08-26 19:27 ` Eduardo Habkost
2021-08-30 19:47 ` Nitesh Lal