LKML Archive on lore.kernel.org
From: Steven Rostedt <rostedt@goodmis.org>
To: ebiederm@xmission.com (Eric W. Biederman)
Cc: "Yordan Karadzhov (VMware)" <y.karadz@gmail.com>, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, viro@zeniv.linux.org.uk, mingo@redhat.com, hagen@jauu.net, rppt@kernel.org, James.Bottomley@HansenPartnership.com, akpm@linux-foundation.org, vvs@virtuozzo.com, shakeelb@google.com, christian.brauner@ubuntu.com, mkoutny@suse.com, Linux Containers <containers@lists.linux.dev>
Subject: Re: [RFC PATCH 0/4] namespacefs: Proof-of-Concept
Date: Thu, 18 Nov 2021 14:36:34 -0500
Message-ID: <20211118143634.3f7d43e9@gandalf.local.home>
In-Reply-To: <87pmqxuv4n.fsf@email.froward.int.ebiederm.org>

On Thu, 18 Nov 2021 13:22:16 -0600
ebiederm@xmission.com (Eric W. Biederman) wrote:

> Steven Rostedt <rostedt@goodmis.org> writes:
>
> I am refreshing my nack on the concept. My nack has been in place for
> good technical reasons since about 2006.

I'll admit we are new to this, as we are now trying to add more visibility
into the workings of things like Kubernetes. We need a way of knowing what
containers are running and how to monitor them, and we need to do this for
all container infrastructures.

> I see no way forward. I do not see a compelling use case.

What do you use to debug issues in a Kubernetes cluster of hundreds of
machines running thousands of containers? Currently, if something is amiss,
a node is restarted in the hope that the issue does not appear again. We
would like to add infrastructure that takes advantage of tracing and
profiling to narrow such problems down, but to do so we need to understand
which tasks belong to which containers (a sketch of how tools approximate
this today appears below).

> There have been many conversations in the past attempting to implement
> something that requires a namespace of namespaces, and they have never
> gotten anywhere.

We are not asking for a "namespace" of namespaces, but for a single
filesystem that holds this information at the system scale, not a
per-container view. I would be happy to make any container that has this
filesystem available "special", as most containers do not need it.

> I see no attempt at due diligence or at actually understanding what
> hierarchy already exists in namespaces. This is not trivial.

What did we miss?

> I don't mean to be nasty but I do mean to be clear. Without a
> compelling new idea in this space I see no hope of an implementation.
>
> What they are attempting to do makes it impossible to migrate a set of
> processes that use this feature from one machine to another. AKA this
> would be a breaking change and a regression if merged.

The point of this is not to allow that migration. I'd be happy to add that
if a container has access to this filesystem, it is pinned to the system
and cannot be migrated. The whole point of this filesystem is to monitor
all containers on the system, and it makes no sense to migrate it. We would
duplicate it over several systems, but there's no reason to move it once it
is running.

> The breaking and regression are caused by assigning names to namespaces
> without putting those names into a namespace of their own. That appears
> fundamental to the concept, not to the implementation.

If you think this should be migrated, then yes, it is broken. But we don't
want this to work across migrations; that would defeat the purpose of this
work.

> Since the concept, if merged, would cause a regression, it qualifies for
> a nack.
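As an aside on the task-to-container mapping point above: what monitoring
tools commonly do today is compare the inode numbers behind each task's
/proc/<pid>/ns/pid link, since tasks that share that inode share a PID
namespace. The program below is only a minimal sketch of that approach, not
anything from the patch set, and its output format is made up for
illustration.

#include <dirent.h>
#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
        DIR *proc = opendir("/proc");
        struct dirent *de;

        if (!proc)
                return 1;

        while ((de = readdir(proc)) != NULL) {
                char path[64];
                struct stat st;

                /* Only the numeric entries in /proc are PIDs */
                if (de->d_name[0] < '0' || de->d_name[0] > '9')
                        continue;

                snprintf(path, sizeof(path), "/proc/%s/ns/pid", de->d_name);

                /*
                 * stat() follows the symlink to the nsfs inode; tasks that
                 * report the same st_ino are in the same PID namespace.
                 */
                if (stat(path, &st) == 0)
                        printf("pid %s  pidns inode %lu\n",
                               de->d_name, (unsigned long)st.st_ino);
        }
        closedir(proc);
        return 0;
}

The obvious gap is that this yields only anonymous inode numbers and
requires walking all of /proc, which is the kind of visibility the proposal
is trying to improve.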
> We can explore what problems they are trying to solve with this and
> explore other ways to solve those problems. All I saw was a comment
> about monitoring tools and wanting a global view. I did not see any
> comments about dealing with all of the reasons why a global view tends
> to be a bad idea.

If all you care about is the working environment of the system that runs a
set of containers, how is that a bad idea? Again, I'm happy to implement
something where having this filesystem prevents the container from being
migrated: a pinned, privileged container.

> I should have added that we have to some extent a way to walk through
> namespaces using ioctls on nsfs inodes.

How robust is this? And is there a library or tooling around it?

-- 
Steve
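For reference, the nsfs ioctls Eric mentions are documented in ioctl_ns(2)
(NS_GET_PARENT, NS_GET_USERNS, NS_GET_NSTYPE, ...). The sketch below is
illustrative rather than part of the thread: it walks up the PID-namespace
ancestry of the current process, and the starting path and printed output
are arbitrary choices.

#include <fcntl.h>
#include <linux/nsfs.h>         /* NS_GET_PARENT */
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
        int fd = open("/proc/self/ns/pid", O_RDONLY);

        while (fd >= 0) {
                struct stat st;
                int parent;

                /* nsfs inodes are anonymous; st_ino identifies the ns */
                if (fstat(fd, &st) == 0)
                        printf("pid namespace inode: %lu\n",
                               (unsigned long)st.st_ino);

                /* Fails (EPERM) once the parent is outside our scope */
                parent = ioctl(fd, NS_GET_PARENT);
                close(fd);
                fd = parent;
        }
        return 0;
}

This covers walking up a hierarchy from a namespace you can already open;
enumerating namespaces system-wide still means scanning /proc, which is
where the tooling question above comes from.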