From: Balbir Singh <>
Cc: Balbir Singh <>
Subject: [RFC][PATCH][0/4] Memory controller (RSS Control) (v2)
Date: Mon, 26 Feb 2007 11:44:28 +0530
Message-ID: <20070226061428.28810.19037.sendpatchset@balbir-laptop>

This is a repost of the patches at
The previous post had a misleading subject which ended with a "(".

This patch applies on top of Paul Menage's container patches (V7) posted at

It implements a controller within the containers framework for limiting
memory usage (RSS usage).

The memory controller was discussed at length in the RFC posted to lkml

This is version 2 of the patch, version 1 was posted at

I have tried to incorporate all comments; more details can be found in
the changelogs of the individual patches. Any remaining mistakes are
all my fault.

The next question could be: why release version 2?

1. It serves as a decision point for deciding whether we should move to a
   per-container LRU list (a hypothetical sketch of such a structure follows
   this list). Walking the global LRU is slow; in this patchset I've tried
   to address the LRU churning issue. The patch memcontrol-reclaim-on-limit
   has more details.
2. I've included fixes for several of the comments/issues raised in version 1.
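
To illustrate what is under discussion in point 1, per-container LRU state
might look roughly like the sketch below. This is hypothetical: the struct
and field names are invented, and nothing like it exists in this patchset,
which still walks the global LRU:

	/*
	 * Hypothetical per-container LRU state (all names invented);
	 * this patchset itself still uses the global LRU lists.
	 */
	struct memcontrol_lru {
		spinlock_t		lru_lock;	/* protects both lists */
		struct list_head	active_list;	/* recently referenced pages */
		struct list_head	inactive_list;	/* reclaim candidates */
		unsigned long		nr_active;
		unsigned long		nr_inactive;
	};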

Steps to use the controller
0. Download the patches, apply the patches
1. Turn on CONFIG_CONTAINER_MEMCONTROL in kernel config, build the kernel
   and boot into the new kernel
2. mount -t container container -o memcontrol /<mount point>
3. cd /<mount point>
   optionally do (mkdir <directory>; cd <directory>) under /<mount point>
4. echo $$ > tasks (attaches the current shell to the container)
5. echo -n (limit value) > memcontrol_limit
6. cat memcontrol_usage
7. Run tasks, then check the controller's reported usage and its reclaim
   behaviour (an example session follows this list)
8. Report bugs, get bug fixes and iterate (goto step 0).
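
For concreteness, a session following the steps above might look like this.
The mount point /containers, the directory mygroup, the limit value and the
memory-hog program are all placeholders (check the documentation patch for
the exact units memcontrol_limit expects):

	mount -t container container -o memcontrol /containers
	cd /containers
	mkdir mygroup && cd mygroup		# optional child container
	echo $$ > tasks				# attach the current shell
	echo -n 10000 > memcontrol_limit	# set the RSS limit
	cat memcontrol_usage			# usage so far (just this shell)
	./memory-hog &				# a workload that exceeds the limit
	cat memcontrol_usage			# usage should stay near the limit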

Advantages of the patchset
1. Zero overhead in struct page (struct page is not expanded; see the
   sketch after this list)
2. Minimal changes to the core-mm code
3. Shared pages are not reclaimed unless all the mappings sharing them
   belong to over-limit containers
4. It can be used to debug drivers/applications/kernel components in a
   constrained memory environment (similar to the mem=XXX boot option),
   except that several containers can be created simultaneously without
   rebooting and their limits can be changed at run time. NOTE: there is
   presently no support for limiting kernel memory allocations or for
   page cache control.
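
On advantage 1: struct page can stay untouched because RSS can be charged
through the faulting task's mm rather than by tagging each page. The sketch
below illustrates the idea only; the function and field names are invented,
not taken from these patches:

	/*
	 * Hypothetical sketch: charge the container reached via the
	 * faulting mm, so struct page needs no new fields. All names
	 * here are invented.
	 */
	static int memcontrol_charge(struct mm_struct *mm)
	{
		struct memcontrol *mc = memcontrol_from_mm(mm);

		if (atomic_long_inc_return(&mc->rss_usage) > mc->rss_limit) {
			atomic_long_dec(&mc->rss_usage);
			return -ENOMEM;	/* caller may reclaim and retry */
		}
		return 0;
	}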

Created containers, attached tasks to containers with lower limits than
the memory the tasks require (memory hog tests), and ran some basic tests
on the controller. Tested the patches on UML and PowerPC. On UML, tried
the patches with the config enabled and disabled (sanity check) and with
containers enabled but the memory controller disabled.
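
A memory hog for such tests can be as simple as the illustrative program
below (not the actual test program used); it allocates and touches a given
number of megabytes so that its RSS climbs past the container's limit:

	/* Illustrative memory hog: ./memory-hog <megabytes> */
	#include <stdlib.h>
	#include <string.h>
	#include <unistd.h>

	int main(int argc, char **argv)
	{
		long i, mb = argc > 1 ? atol(argv[1]) : 256;

		for (i = 0; i < mb; i++) {
			char *p = malloc(1 << 20);	/* 1 MB chunk */
			if (!p)
				return 1;
			memset(p, 0xaa, 1 << 20);	/* touch it so it becomes RSS */
		}
		pause();	/* hold the memory until killed */
		return 0;
	}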

TODOs and improvement areas
1. Come up with cool page replacement algorithms for containers; this item
   still stands from version 1 (if possible, without any changes to
   struct page)
2. Add page cache control
3. Add kernel memory allocator control
4. Extract benchmark numbers and overhead data

Comments & criticism are welcome.


	Warm Regards,
	Balbir Singh

Thread overview (5 messages):

2007-02-26  6:14 [RFC][PATCH][0/4] Memory controller (RSS Control) (v2) Balbir Singh (this message)
2007-02-26  6:14 [RFC][PATCH][1/4] RSS controller setup (v2) Balbir Singh
2007-02-26  6:14 [RFC][PATCH][2/4] Add RSS accounting and control (v2) Balbir Singh
2007-02-26  6:14 [RFC][PATCH][3/4] Add reclaim support (v2) Balbir Singh
2007-02-26  6:15 [RFC][PATCH][4/4] RSS controller documentation (v2) Balbir Singh
