LKML Archive on lore.kernel.org
From: Steven Rostedt <firstname.lastname@example.org>
To: email@example.com (Eric W. Biederman)
Cc: "Yordan Karadzhov \(VMware\)" <firstname.lastname@example.org>,
email@example.com, firstname.lastname@example.org, email@example.com,
firstname.lastname@example.org, Linux Containers <email@example.com>
Subject: Re: [RFC PATCH 0/4] namespacefs: Proof-of-Concept
Date: Thu, 18 Nov 2021 14:36:34 -0500
Message-ID: <firstname.lastname@example.org>
On Thu, 18 Nov 2021 13:22:16 -0600
email@example.com (Eric W. Biederman) wrote:
> Steven Rostedt <firstname.lastname@example.org> writes:
> I am refreshing my nack on the concept. My nack has been in place for
> good technical reasons since about 2006.
I'll admit we are new to this, as we are now trying to add more visibility
into the workings of things like Kubernetes. Having a way of knowing
what containers are running and how to monitor them is needed, and we need
to do this for all container infrastructures.
> I see no way forward. I do not see a compelling use case.
What do you use to debug issues in a Kubernetes cluster of hundreds of
machines running thousands of containers? Currently, if something is amiss,
a node is restarted in the hopes that the issue does not appear again. But
we would like to add infrastructure that takes advantage of tracing and
profiling to be able to narrow that down. But to do so, we need to
understand what tasks belong to what containers.
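To make the gap concrete: today, about the only way to group tasks by
container from the host side is to bucket them by their namespace links in
procfs. A rough sketch of that workaround (the helper name is mine; plain
/proc, nothing namespacefs-specific):

```python
import os
from collections import defaultdict

def tasks_by_pid_namespace(proc="/proc"):
    """Group PIDs by the inode of their PID namespace.

    Tasks sharing the same 'pid:[<inode>]' link live in the same
    PID namespace -- a rough proxy for "same container", with no
    hint of which container runtime or name it corresponds to.
    """
    groups = defaultdict(list)
    for entry in os.listdir(proc):
        if not entry.isdigit():
            continue  # skip non-PID entries like 'meminfo'
        try:
            ns = os.readlink(os.path.join(proc, entry, "ns", "pid"))
        except OSError:
            continue  # task exited, or we lack permission
        groups[ns].append(int(entry))
    return groups
```

This gives namespace inodes, not container identities; tying an inode back
to a container name still requires asking each runtime separately, which is
the kind of plumbing the proposal is trying to avoid.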
> There have been many conversations in the past attempting to implement
> something that requires a namespace of namespaces, and they have never
> gotten anywhere.
We are not asking about a "namespace" of namespaces, but a filesystem (one,
not a namespace of one), that holds the information at the system scale,
not a container view.
I would be happy to implement something that makes a container that has
this file system available "special", as most containers do not need it.
> I see no attempt at due diligence or at actually understanding what
> hierarchy already exists in namespaces.
This is not trivial. What did we miss?
> I don't mean to be nasty but I do mean to be clear. Without a
> compelling new idea in this space I see no hope of an implementation.
> What they are attempting to do makes it impossible to migrate a set of
> processes that uses this feature from one machine to another. AKA this
> would be a breaking change and a regression if merged.
The point of this is not to allow that migration. I'd be happy to add that
if a container has access to this file system, it is pinned to the system
and cannot be migrated. The whole point of this file system is to monitor
all containers on the system, and it makes no sense to migrate it.
We would duplicate it over several systems, but there's no reason to move
it once it is running.
> The breaking and regression are caused by assigning names to namespaces
> without putting those names into a namespace of their own. That
> appears fundamental to the concept not to the implementation.
If you think this should be migrated then yes, it is broken. But we don't
want this to work across migrations. That defeats the purpose of this work.
> Since the concept if merged would cause a regression it qualifies for
> a nack.
> We can explore what problems they are trying to solve with this and
> explore other ways to solve those problems. All I saw was a comment
> about monitoring tools and wanting a global view. I did not see
> any comments about dealing with all of the reasons why a global view
> tends to be a bad idea.
If you only care about a working environment of the system that runs a set
of containers, how is that a bad idea? Again, I'm happy to implement
something that makes having this file system prevent the container from
being migrated: a pinned, privileged container.
> I should have added that we have to some extent a way to walk through
> namespaces using ioctls on nsfs inodes.
How robust is this? And is there a library or tooling around it?
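For context, the walk Eric refers to looks roughly like this from
userspace; the ioctl constant comes from linux/nsfs.h and the helper name
is mine. It climbs the PID-namespace hierarchy one NS_GET_PARENT ioctl at
a time until the ioctl fails (EPERM once we reach a namespace whose parent
we cannot see, e.g. the initial one):

```python
import fcntl
import os

NS_GET_PARENT = 0xb702  # _IO(0xb7, 0x2) from linux/nsfs.h

def walk_pid_ns_chain(path="/proc/self/ns/pid"):
    """Follow NS_GET_PARENT up the PID-namespace hierarchy.

    Returns the list of 'pid:[<inode>]' names from the current
    namespace toward the root.  The ioctl returns a new fd for
    the parent namespace, or fails when no parent is visible.
    """
    chain = []
    fd = os.open(path, os.O_RDONLY)
    try:
        while True:
            # readlink on the fd's procfs entry yields 'pid:[inode]'
            chain.append(os.readlink(f"/proc/self/fd/{fd}"))
            try:
                parent = fcntl.ioctl(fd, NS_GET_PARENT)
            except OSError:
                break  # no visible parent: top of our view
            os.close(fd)
            fd = parent
    finally:
        os.close(fd)
    return chain
```

These ioctls are documented in ioctl_ns(2); they let you discover parent
and owner relationships one namespace at a time, but only for namespaces
you can already get a handle on, which is part of why a global view is
hard to assemble from them.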