LKML Archive on lore.kernel.org
From: Baoquan He <firstname.lastname@example.org>
To: Borislav Petkov <email@example.com>
Cc: firstname.lastname@example.org, email@example.com,
"Thomas Gleixner" <firstname.lastname@example.org>,
"Brijesh Singh" <email@example.com>,
"Jérôme Glisse" <firstname.lastname@example.org>,
"Tom Lendacky" <email@example.com>,
"Wei Yang" <firstname.lastname@example.org>
Subject: Re: [PATCH v3 2/3] resource: add walk_system_ram_res_rev()
Date: Thu, 26 Apr 2018 21:22:04 +0800 [thread overview]
Message-ID: <20180426132204.GF19030@localhost.localdomain> (raw)
On 04/26/18 at 01:09pm, Borislav Petkov wrote:
> On Thu, Apr 26, 2018 at 04:56:49PM +0800, Baoquan He wrote:
> > Sorry for that, I just ran scripts/get_maintainer.pl to get expert's
> > name and added them into each patch. The reason this change is made is
> > in patch 3/3. Test robot reported a code bug on the latest kernel, will
> > repost and CC everyone in all patches.
> > Rob Herring asked the same question in v2, I explained to him. The
> > discussion can be found here:
> > https://lkml.org/lkml/2018/4/10/484
> ... and when I open that link, the first paragraph says:
> "This is the explanation I made when Andrew helped to review the v1 post:
> Do you see the absurdity of the link chasing of your explanation?!
> Instead, the explanation *WHY* should be in the commit message of the
> patch - not in mail replies when people ask you about it.
It is in patch 3/3 and the cover letter. I myself have often received
only one or several patches of a big patchset. As I said, I ran
./scripts/get_maintainer.pl to get each patch's reviewers, and will add
all people to the CC list of every patch in the repost.
> Also, do not use lkml.org when referencing a mail on lkml but
> use the Message-ID of the header. We have a proper redirector at
I noticed maintainers merge patches with this Message-ID recorded; could
you tell me how to get the Message-ID?
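Since the suggestion is to reference mails by Message-ID rather than
lkml.org links, here is a minimal sketch of pulling the Message-ID out of
a saved mail and forming a redirector URL. The file name and the mail
contents below are made up for illustration; any saved mail with full
headers will do.

```shell
# Hypothetical saved mail; a real one comes from your mail client's
# "save as mbox/eml" function.
printf 'Message-ID: <20180426132204.GF19030@localhost.localdomain>\n' > /tmp/msg.eml

# The Message-ID lives in the raw headers; strip the angle brackets.
id=$(grep -i '^Message-ID:' /tmp/msg.eml | sed 's/.*<\(.*\)>.*/\1/')
echo "$id"

# Reference the mail via the kernel.org redirector.
echo "https://lkml.kernel.org/r/$id"
```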
> Now lemme read the reason finally...
> "We need unify these two interfaces on behaviour since they are the same
> on essense from the users' point of view... "
> That's not a good enough reason for me to cause code churn. If the only
> reason is: because the one does it top-down and the other bottom-up, I'm
> not convinced.
We have been using this top-down way on x86-64 since early 2013 in the
KEXEC loading path, with the command:
'kexec -l bzImage --initrd initramfs-xx.img'
Two years later, Vivek added the new KEXEC_FILE loading interface, where
most of the job is done in the kernel because we need to do kernel image
signature verification. KEXEC_FILE loading is mostly used on secure-boot
machines. Although more and more users are adopting secure boot, systems
using KEXEC loading are still far more common. I once asked our kernel
QE, who take care of kernel and kdump testing; they said that in their
test systems the proportion of legacy systems to secure-boot systems is 10:1.
Before long I pinged hpa, Vivek and Yinghai, who discussed and wrote the
KEXEC loading code for x86-64, and asked them why they chose top-down;
I got no reply. I guess they were probably worried that low memory under
4G is mostly reserved by the system or firmware; even though kexec/kdump
try very hard to avoid every such region they recognize, any missed one
will cause a loading or booting failure. Just a guess.
In fact, there's no spec or doc saying which way is right.
So I finally decided to take the same top-down way for KEXEC_FILE
loading, because statistically KEXEC loading has been used longer and more widely.
This is not simply a matter of one being top-down and the other
bottom-up. For us they may differ in code details, but customers think of
them as the same thing. They may say: I just got a new machine and still
do kexec loading, so why do these top-down, bottom-up differences come up?
And this is not causing code churn. You can see that by replacing the
pointer operations with list_head, the code in kernel/resource.c related
to child-list iteration becomes much easier to read, e.g.
__request_resource() and __release_resource(). As Rob said when reviewing
v2, they have put the same replacement for DT's struct device_node on
their todo list, and for the ACPI tree functions in
drivers/acpi/acpica/pstree.c too. I think this is why Andrew asked me to
try this way and see what the patches look like.
Thread overview: 19+ messages
2018-04-19 0:18 [PATCH v3 0/3] resource: Use list_head to link sibling resource Baoquan He
2018-04-19 0:18 ` [PATCH v3 1/3] " Baoquan He
2018-04-26 1:18 ` Wei Yang
2018-05-07 1:14 ` Baoquan He
2018-05-08 11:48 ` Wei Yang
2018-05-08 12:11 ` Baoquan He
2018-05-08 23:41 ` Wei Yang
2018-04-26 3:01 ` kbuild test robot
2018-05-06 6:31 ` Baoquan He
2018-04-26 3:23 ` kbuild test robot
2018-05-06 6:30 ` Baoquan He
2018-04-19 0:18 ` [PATCH v3 2/3] resource: add walk_system_ram_res_rev() Baoquan He
2018-04-19 10:07 ` Borislav Petkov
2018-04-26 8:56 ` Baoquan He
2018-04-26 11:09 ` Borislav Petkov
2018-04-26 13:22 ` Baoquan He [this message]
2018-05-04 10:16 ` Borislav Petkov
2018-05-06 6:19 ` Baoquan He
2018-04-19 0:18 ` [PATCH v3 3/3] kexec_file: Load kernel at top of system RAM if required Baoquan He