LKML Archive on lore.kernel.org
From: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
To: Ingo Molnar <mingo@kernel.org>, Peter Zijlstra <peterz@infradead.org>
Cc: LKML <linux-kernel@vger.kernel.org>,
Mel Gorman <mgorman@techsingularity.net>,
Rik van Riel <riel@surriel.com>,
Srikar Dronamraju <srikar@linux.vnet.ibm.com>,
Thomas Gleixner <tglx@linutronix.de>
Subject: [PATCH 13/19] mm/migrate: Use xchg instead of spinlock
Date: Mon, 4 Jun 2018 15:30:22 +0530
Message-ID: <1528106428-19992-14-git-send-email-srikar@linux.vnet.ibm.com>
In-Reply-To: <1528106428-19992-1-git-send-email-srikar@linux.vnet.ibm.com>
Currently, resetting the migrate rate limit is done under a spinlock.
The spinlock only serializes the reset of the rate-limit window, and the
same serialization can be achieved with a simpler xchg on the page counter.
Testcase       Time:        Min        Max        Avg   StdDev
numa01.sh      Real:     435.67     707.28     527.49    97.85
numa01.sh       Sys:      76.41     231.19     162.49    56.13
numa01.sh      User:   38247.36   59033.52   45129.31  7642.69
numa02.sh      Real:      60.35      62.09      61.09     0.69
numa02.sh       Sys:      15.01      30.20      20.64     5.56
numa02.sh      User:    5195.93    5294.82    5240.99    40.55
numa03.sh      Real:     752.04     919.89     836.81    63.29
numa03.sh       Sys:     115.10     133.35     125.46     7.78
numa03.sh      User:   58736.44   70084.26   65103.67  4416.10
numa04.sh      Real:     418.43     709.69     512.53   104.17
numa04.sh       Sys:     242.99     370.47     297.39    42.20
numa04.sh      User:   34916.14   48429.54   38955.65  4928.05
numa05.sh      Real:     379.27     434.05     403.70    17.79
numa05.sh       Sys:     145.94     344.50     268.72    68.53
numa05.sh      User:   32679.32   35449.75   33989.10   913.19
Testcase       Time:        Min        Max        Avg   StdDev   %Change
numa01.sh      Real:     490.04     774.86     596.26    96.46    -11.5%
numa01.sh       Sys:     151.52     242.88     184.82    31.71    -12.0%
numa01.sh      User:   41418.41   60844.59   48776.09  6564.27    -7.47%
numa02.sh      Real:      60.14      62.94      60.98     1.00    0.180%
numa02.sh       Sys:      16.11      30.77      21.20     5.28    -2.64%
numa02.sh      User:    5184.33    5311.09    5228.50    44.24    0.238%
numa03.sh      Real:     790.95     856.35     826.41    24.11    1.258%
numa03.sh       Sys:     114.93     118.85     117.05     1.63    7.184%
numa03.sh      User:   60990.99   64959.28   63470.43  1415.44    2.573%
numa04.sh      Real:     434.37     597.92     504.87    59.70    1.517%
numa04.sh       Sys:     237.63     397.40     289.74    55.98    2.640%
numa04.sh      User:   34854.87   41121.83   38572.52  2615.84    0.993%
numa05.sh      Real:     386.77     448.90     417.22    22.79    -3.24%
numa05.sh       Sys:     149.23     379.95     303.04    79.55    -11.3%
numa05.sh      User:   32951.76   35959.58   34562.18  1034.05    -1.65%
Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
---
include/linux/mmzone.h | 3 ---
mm/migrate.c | 8 +++-----
mm/page_alloc.c | 1 -
3 files changed, 3 insertions(+), 9 deletions(-)
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index b0767703..0dbe1d5 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -669,9 +669,6 @@ struct zonelist {
struct task_struct *kcompactd;
#endif
#ifdef CONFIG_NUMA_BALANCING
- /* Lock serializing the migrate rate limiting window */
- spinlock_t numabalancing_migrate_lock;
-
/* Rate limiting time interval */
unsigned long numabalancing_migrate_next_window;
diff --git a/mm/migrate.c b/mm/migrate.c
index 8c0af0f..1c55956 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1874,11 +1874,9 @@ static bool numamigrate_update_ratelimit(pg_data_t *pgdat,
* all the time is being spent migrating!
*/
if (time_after(jiffies, pgdat->numabalancing_migrate_next_window)) {
- spin_lock(&pgdat->numabalancing_migrate_lock);
- pgdat->numabalancing_migrate_nr_pages = 0;
- pgdat->numabalancing_migrate_next_window = jiffies +
- msecs_to_jiffies(migrate_interval_millisecs);
- spin_unlock(&pgdat->numabalancing_migrate_lock);
+ if (xchg(&pgdat->numabalancing_migrate_nr_pages, 0))
+ pgdat->numabalancing_migrate_next_window = jiffies +
+ msecs_to_jiffies(migrate_interval_millisecs);
}
if (pgdat->numabalancing_migrate_nr_pages > ratelimit_pages) {
trace_mm_numa_migrate_ratelimit(current, pgdat->node_id,
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 4526643..464a25c 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6208,7 +6208,6 @@ static void __paginginit free_area_init_core(struct pglist_data *pgdat)
pgdat_resize_init(pgdat);
#ifdef CONFIG_NUMA_BALANCING
- spin_lock_init(&pgdat->numabalancing_migrate_lock);
pgdat->numabalancing_migrate_nr_pages = 0;
pgdat->active_node_migrate = 0;
pgdat->numabalancing_migrate_next_window = jiffies;
--
1.8.3.1