LKML Archive on lore.kernel.org
* apparmor: global buffers spin lock may get contended
@ 2021-07-13 13:19 Sergey Senozhatsky
  2021-08-15  9:47 ` John Johansen
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Sergey Senozhatsky @ 2021-07-13 13:19 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior, John Johansen
  Cc: Peter Zijlstra, Tomasz Figa, linux-kernel, linux-security-module

Hi,

We've noticed that apparmor has switched from using a per-CPU buffer pool
and per-CPU spin_lock to a global spin_lock in df323337e507a0009d3db1ea.

This seems to be causing some contention on our build machines (which
have quite a few cores), because that global spin lock is taken as part
of the stat() syscall (and perhaps some others).

E.g.

-    9.29%     0.00%  clang++          [kernel.vmlinux]                        
   - 9.28% entry_SYSCALL_64_after_hwframe                                      
      - 8.98% do_syscall_64                                                    
         - 7.43% __do_sys_newlstat                                            
            - 7.43% vfs_statx                                                  
               - 7.18% security_inode_getattr                                  
                  - 7.15% apparmor_inode_getattr                              
                     - aa_path_perm                                            
                        - 3.53% aa_get_buffer                                  
                           - 3.47% _raw_spin_lock                              
                                3.44% native_queued_spin_lock_slowpath        
                        - 3.49% aa_put_buffer.part.0                          
                           - 3.45% _raw_spin_lock                              
                                3.43% native_queued_spin_lock_slowpath   

Can we fix this contention?

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: apparmor: global buffers spin lock may get contended
  2021-07-13 13:19 apparmor: global buffers spin lock may get contended Sergey Senozhatsky
@ 2021-08-15  9:47 ` John Johansen
  2022-10-28  9:34 ` John Johansen
       [not found] ` <20221030013028.3557-1-hdanton@sina.com>
  2 siblings, 0 replies; 7+ messages in thread
From: John Johansen @ 2021-08-15  9:47 UTC (permalink / raw)
  To: Sergey Senozhatsky, Sebastian Andrzej Siewior
  Cc: Peter Zijlstra, Tomasz Figa, linux-kernel, linux-security-module

On 7/13/21 6:19 AM, Sergey Senozhatsky wrote:
> Hi,
> 
> We've noticed that apparmor has switched from using a per-CPU buffer pool
> and per-CPU spin_lock to a global spin_lock in df323337e507a0009d3db1ea.
> 
> This seems to be causing some contention on our build machines (which
> have quite a few cores), because that global spin lock is taken as part
> of the stat() syscall (and perhaps some others).
> 
> E.g.
> 
> -    9.29%     0.00%  clang++          [kernel.vmlinux]                        
>    - 9.28% entry_SYSCALL_64_after_hwframe                                      
>       - 8.98% do_syscall_64                                                    
>          - 7.43% __do_sys_newlstat                                            
>             - 7.43% vfs_statx                                                  
>                - 7.18% security_inode_getattr                                  
>                   - 7.15% apparmor_inode_getattr                              
>                      - aa_path_perm                                            
>                         - 3.53% aa_get_buffer                                  
>                            - 3.47% _raw_spin_lock                              
>                                 3.44% native_queued_spin_lock_slowpath        
>                         - 3.49% aa_put_buffer.part.0                          
>                            - 3.45% _raw_spin_lock                              
>                                 3.43% native_queued_spin_lock_slowpath   
> 
> Can we fix this contention?
> 

sorry this got filtered to the wrong mailbox. Yes, this is something that can
be improved, and it was a concern when the switch was made from per-CPU
buffers to the global pool.

We can look into a hybrid approach where we cache a buffer per CPU from the
global pool. The trick will be deciding when the cached buffer can be
returned, so we don't run into the problems that led to
df323337e507a0009d3db1ea.
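As a rough illustration of that hybrid idea, here is a minimal single-threaded userspace C model (the names and the one-buffer-per-"CPU" policy are assumptions for illustration, not the eventual kernel code): a local cache is consulted before the global pool, so the common case touches no shared lock.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Minimal single-threaded model of the hybrid idea: consult a
 * CPU-local cache before falling back to the global pool. In the
 * kernel the local cache would be per-CPU data and the global
 * array would sit behind the contended spinlock. */
#define POOL_SIZE 4

struct buf_pool {
	void *global[POOL_SIZE];	/* global pool (lock-protected in the kernel) */
	size_t global_count;
	void *local_cache;		/* at most one cached buffer per "CPU" */
};

static void *get_buffer(struct buf_pool *p)
{
	if (p->local_cache) {		/* fast path: no shared state touched */
		void *b = p->local_cache;
		p->local_cache = NULL;
		return b;
	}
	if (p->global_count)		/* slow path: take from the global pool */
		return p->global[--p->global_count];
	return malloc(64);		/* pool empty: allocate a fresh buffer */
}

static void put_buffer(struct buf_pool *p, void *b)
{
	if (!p->local_cache)		/* keep one buffer local for reuse */
		p->local_cache = b;
	else if (p->global_count < POOL_SIZE)
		p->global[p->global_count++] = b;
	else
		free(b);
}
```

The open question raised above, when a locally cached buffer should be returned to the global pool, is exactly what this sketch leaves out.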


* Re: apparmor: global buffers spin lock may get contended
  2021-07-13 13:19 apparmor: global buffers spin lock may get contended Sergey Senozhatsky
  2021-08-15  9:47 ` John Johansen
@ 2022-10-28  9:34 ` John Johansen
  2022-10-31  3:52   ` Sergey Senozhatsky
       [not found] ` <20221030013028.3557-1-hdanton@sina.com>
  2 siblings, 1 reply; 7+ messages in thread
From: John Johansen @ 2022-10-28  9:34 UTC (permalink / raw)
  To: Sergey Senozhatsky, Sebastian Andrzej Siewior
  Cc: Peter Zijlstra, Tomasz Figa, linux-kernel, linux-security-module

On 7/13/21 06:19, Sergey Senozhatsky wrote:
> Hi,
> 
> We've noticed that apparmor has switched from using a per-CPU buffer pool
> and per-CPU spin_lock to a global spin_lock in df323337e507a0009d3db1ea.
> 
> This seems to be causing some contention on our build machines (which
> have quite a few cores), because that global spin lock is taken as part
> of the stat() syscall (and perhaps some others).
> 
> E.g.
> 
> -    9.29%     0.00%  clang++          [kernel.vmlinux]
>     - 9.28% entry_SYSCALL_64_after_hwframe
>        - 8.98% do_syscall_64
>           - 7.43% __do_sys_newlstat
>              - 7.43% vfs_statx
>                 - 7.18% security_inode_getattr
>                    - 7.15% apparmor_inode_getattr
>                       - aa_path_perm
>                          - 3.53% aa_get_buffer
>                             - 3.47% _raw_spin_lock
>                                  3.44% native_queued_spin_lock_slowpath
>                          - 3.49% aa_put_buffer.part.0
>                             - 3.45% _raw_spin_lock
>                                  3.43% native_queued_spin_lock_slowpath
> 
> Can we fix this contention?

sorry for the delay on this. Below is a proposed patch that I have been testing
to deal with this issue.


 From d026988196fdbda7234fb87bc3e4aea22edcbaf9 Mon Sep 17 00:00:00 2001
From: John Johansen <john.johansen@canonical.com>
Date: Tue, 25 Oct 2022 01:18:41 -0700
Subject: [PATCH] apparmor: cache buffers on percpu list if there is lock
  contention

On a heavily loaded machine there can be lock contention on the
global buffers lock. Add a percpu list to cache buffers on when
lock contention is encountered.

When allocating buffers attempt to use cached buffers first,
before taking the global buffers lock. When freeing buffers
try to put them back to the global list but if contention is
encountered, put the buffer on the percpu list.

The length of time a buffer is held on the percpu list is dynamically
adjusted based on lock contention.  The amount of hold time is rapidly
increased and slowly ramped down.

Signed-off-by: John Johansen <john.johansen@canonical.com>
---
  security/apparmor/lsm.c | 74 ++++++++++++++++++++++++++++++++++++++---
  1 file changed, 69 insertions(+), 5 deletions(-)

diff --git a/security/apparmor/lsm.c b/security/apparmor/lsm.c
index 25114735bc11..0ab70171bdb6 100644
--- a/security/apparmor/lsm.c
+++ b/security/apparmor/lsm.c
@@ -49,12 +49,19 @@ union aa_buffer {
  	char buffer[1];
  };
  
+struct aa_local_cache {
+	unsigned int contention;
+	unsigned int hold;
+	struct list_head head;
+};
+
  #define RESERVE_COUNT 2
  static int reserve_count = RESERVE_COUNT;
  static int buffer_count;
  
  static LIST_HEAD(aa_global_buffers);
  static DEFINE_SPINLOCK(aa_buffers_lock);
+static DEFINE_PER_CPU(struct aa_local_cache, aa_local_buffers);
  
  /*
   * LSM hook functions
@@ -1622,14 +1629,44 @@ static int param_set_mode(const char *val, const struct kernel_param *kp)
  	return 0;
  }
  
+static void update_contention(struct aa_local_cache *cache)
+{
+	cache->contention += 3;
+	if (cache->contention > 9)
+		cache->contention = 9;
+	cache->hold += 1 << cache->contention;		/* 8, 64, 512 */
+}
+
  char *aa_get_buffer(bool in_atomic)
  {
  	union aa_buffer *aa_buf;
+	struct aa_local_cache *cache;
  	bool try_again = true;
  	gfp_t flags = (GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_NOWARN);
  
+	/* use per cpu cached buffers first */
+	cache = get_cpu_ptr(&aa_local_buffers);
+	if (!list_empty(&cache->head)) {
+		aa_buf = list_first_entry(&cache->head, union aa_buffer, list);
+		list_del(&aa_buf->list);
+		cache->hold--;
+		put_cpu_ptr(&aa_local_buffers);
+		return &aa_buf->buffer[0];
+	}
+	put_cpu_ptr(&aa_local_buffers);
+
+	if (!spin_trylock(&aa_buffers_lock)) {
+		cache = get_cpu_ptr(&aa_local_buffers);
+		update_contention(cache);
+		put_cpu_ptr(&aa_local_buffers);
+		spin_lock(&aa_buffers_lock);
+	} else {
+		cache = get_cpu_ptr(&aa_local_buffers);
+		if (cache->contention)
+			cache->contention--;
+		put_cpu_ptr(&aa_local_buffers);
+	}
  retry:
-	spin_lock(&aa_buffers_lock);
  	if (buffer_count > reserve_count ||
  	    (in_atomic && !list_empty(&aa_global_buffers))) {
  		aa_buf = list_first_entry(&aa_global_buffers, union aa_buffer,
@@ -1655,6 +1692,7 @@ char *aa_get_buffer(bool in_atomic)
  	if (!aa_buf) {
  		if (try_again) {
  			try_again = false;
+			spin_lock(&aa_buffers_lock);
  			goto retry;
  		}
  		pr_warn_once("AppArmor: Failed to allocate a memory buffer.\n");
@@ -1666,15 +1704,32 @@ char *aa_get_buffer(bool in_atomic)
  void aa_put_buffer(char *buf)
  {
  	union aa_buffer *aa_buf;
+	struct aa_local_cache *cache;
  
  	if (!buf)
  		return;
  	aa_buf = container_of(buf, union aa_buffer, buffer[0]);
  
-	spin_lock(&aa_buffers_lock);
-	list_add(&aa_buf->list, &aa_global_buffers);
-	buffer_count++;
-	spin_unlock(&aa_buffers_lock);
+	cache = get_cpu_ptr(&aa_local_buffers);
+	if (!cache->hold) {
+		put_cpu_ptr(&aa_local_buffers);
+		if (spin_trylock(&aa_buffers_lock)) {
+			list_add(&aa_buf->list, &aa_global_buffers);
+			buffer_count++;
+			spin_unlock(&aa_buffers_lock);
+			cache = get_cpu_ptr(&aa_local_buffers);
+			if (cache->contention)
+				cache->contention--;
+			put_cpu_ptr(&aa_local_buffers);
+			return;
+		}
+		cache = get_cpu_ptr(&aa_local_buffers);
+		update_contention(cache);
+	}
+
+	/* cache in percpu list */
+	list_add(&aa_buf->list, &cache->head);
+	put_cpu_ptr(&aa_local_buffers);
  }
  
  /*
@@ -1716,6 +1771,15 @@ static int __init alloc_buffers(void)
  	union aa_buffer *aa_buf;
  	int i, num;
  
+	/*
+	 * per cpu set of cached allocated buffers used to help reduce
+	 * lock contention
+	 */
+	for_each_possible_cpu(i) {
+		per_cpu(aa_local_buffers, i).contention = 0;
+		per_cpu(aa_local_buffers, i).hold = 0;
+		INIT_LIST_HEAD(&per_cpu(aa_local_buffers, i).head);
+	}
  	/*
  	 * A function may require two buffers at once. Usually the buffers are
  	 * used for a short period of time and are shared. On UP kernel buffers
-- 
2.34.1
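To see the adaptive ramp in concrete numbers, the patch's update_contention() arithmetic can be modeled in plain userspace C (a sketch of the arithmetic only, not kernel code): each contended lock attempt raises the contention level by 3, capped at 9, and adds 1 << contention to the hold count, which is where the /* 8, 64, 512 */ comment comes from.

```c
#include <assert.h>

/* Userspace model of the patch's update_contention(): successive
 * contended attempts add 8, then 64, then 512 (and 512 thereafter,
 * once contention saturates at 9) to the hold count. */
struct local_cache_model {
	unsigned int contention;
	unsigned int hold;
};

static void update_contention_model(struct local_cache_model *cache)
{
	cache->contention += 3;
	if (cache->contention > 9)
		cache->contention = 9;
	cache->hold += 1u << cache->contention;
}
```

Since hold is decremented once per buffer taken from the percpu cache, these increments translate directly into how many allocations stay off the global lock after contention is seen.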





* Re: apparmor: global buffers spin lock may get contended
       [not found] ` <20221030013028.3557-1-hdanton@sina.com>
@ 2022-10-30  6:32   ` John Johansen
  0 siblings, 0 replies; 7+ messages in thread
From: John Johansen @ 2022-10-30  6:32 UTC (permalink / raw)
  To: Hillf Danton
  Cc: Sergey Senozhatsky, Sebastian Andrzej Siewior, Peter Zijlstra,
	Tomasz Figa, linux-kernel, linux-security-module

On 10/29/22 18:30, Hillf Danton wrote:
> On 28 Oct 2022 02:34:07 -0700 John Johansen <john.johansen@canonical.com>
>> On 7/13/21 06:19, Sergey Senozhatsky wrote:
>>> Hi,
>>>
>>> We've noticed that apparmor has switched from using a per-CPU buffer pool
>>> and per-CPU spin_lock to a global spin_lock in df323337e507a0009d3db1ea.
>>>
>>> This seems to be causing some contention on our build machines (which
>>> have quite a few cores), because that global spin lock is taken as part
>>> of the stat() syscall (and perhaps some others).
>>>
>>> E.g.
>>>
>>> -    9.29%     0.00%  clang++          [kernel.vmlinux]
>>>      - 9.28% entry_SYSCALL_64_after_hwframe
>>>         - 8.98% do_syscall_64
>>>            - 7.43% __do_sys_newlstat
>>>               - 7.43% vfs_statx
>>>                  - 7.18% security_inode_getattr
>>>                     - 7.15% apparmor_inode_getattr
>>>                        - aa_path_perm
>>>                           - 3.53% aa_get_buffer
>>>                              - 3.47% _raw_spin_lock
>>>                                   3.44% native_queued_spin_lock_slowpath
>>>                           - 3.49% aa_put_buffer.part.0
>>>                              - 3.45% _raw_spin_lock
>>>                                   3.43% native_queued_spin_lock_slowpath
>>>
>>> Can we fix this contention?
>>
>> sorry for the delay on this. Below is a proposed patch that I have been testing
>> to deal with this issue.
>>
>>
>>   From d026988196fdbda7234fb87bc3e4aea22edcbaf9 Mon Sep 17 00:00:00 2001
>> From: John Johansen <john.johansen@canonical.com>
>> Date: Tue, 25 Oct 2022 01:18:41 -0700
>> Subject: [PATCH] apparmor: cache buffers on percpu list if there is lock contention
>>
>> On a heavily loaded machine there can be lock contention on the
>> global buffers lock. Add a percpu list to cache buffers on when
>> lock contention is encountered.
>>
>> When allocating buffers attempt to use cached buffers first,
>> before taking the global buffers lock. When freeing buffers
>> try to put them back to the global list but if contention is
>> encountered, put the buffer on the percpu list.
>>
>> The length of time a buffer is held on the percpu list is dynamically
>> adjusted based on lock contention.  The amount of hold time is rapidly
>> increased and slowly ramped down.
>>
>> Signed-off-by: John Johansen <john.johansen@canonical.com>
>> ---
>>    security/apparmor/lsm.c | 74 ++++++++++++++++++++++++++++++++++++++---
>>    1 file changed, 69 insertions(+), 5 deletions(-)
>>
>> diff --git a/security/apparmor/lsm.c b/security/apparmor/lsm.c
>> index 25114735bc11..0ab70171bdb6 100644
>> --- a/security/apparmor/lsm.c
>> +++ b/security/apparmor/lsm.c
>> @@ -49,12 +49,19 @@ union aa_buffer {
>>    	char buffer[1];
>>    };
>>    
>> +struct aa_local_cache {
>> +	unsigned int contention;
>> +	unsigned int hold;
>> +	struct list_head head;
>> +};
>> +
>>    #define RESERVE_COUNT 2
>>    static int reserve_count = RESERVE_COUNT;
>>    static int buffer_count;
>>    
>>    static LIST_HEAD(aa_global_buffers);
>>    static DEFINE_SPINLOCK(aa_buffers_lock);
>> +static DEFINE_PER_CPU(struct aa_local_cache, aa_local_buffers);
>>    
>>    /*
>>     * LSM hook functions
>> @@ -1622,14 +1629,44 @@ static int param_set_mode(const char *val, const struct kernel_param *kp)
>>    	return 0;
>>    }
>>    
>> +static void update_contention(struct aa_local_cache *cache)
>> +{
>> +	cache->contention += 3;
>> +	if (cache->contention > 9)
>> +		cache->contention = 9;
>> +	cache->hold += 1 << cache->contention;		/* 8, 64, 512 */
>> +}
>> +
>>    char *aa_get_buffer(bool in_atomic)
>>    {
>>    	union aa_buffer *aa_buf;
>> +	struct aa_local_cache *cache;
>>    	bool try_again = true;
>>    	gfp_t flags = (GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_NOWARN);
>>    
>> +	/* use per cpu cached buffers first */
>> +	cache = get_cpu_ptr(&aa_local_buffers);
>> +	if (!list_empty(&cache->head)) {
>> +		aa_buf = list_first_entry(&cache->head, union aa_buffer, list);
>> +		list_del(&aa_buf->list);
>> +		cache->hold--;
>> +		put_cpu_ptr(&aa_local_buffers);
>> +		return &aa_buf->buffer[0];
>> +	}
>> +	put_cpu_ptr(&aa_local_buffers);
>> +
>> +	if (!spin_trylock(&aa_buffers_lock)) {
>> +		cache = get_cpu_ptr(&aa_local_buffers);
>> +		update_contention(cache);
>> +		put_cpu_ptr(&aa_local_buffers);
>> +		spin_lock(&aa_buffers_lock);
>> +	} else {
>> +		cache = get_cpu_ptr(&aa_local_buffers);
>> +		if (cache->contention)
>> +			cache->contention--;
>> +		put_cpu_ptr(&aa_local_buffers);
>> +	}
>>    retry:
>> -	spin_lock(&aa_buffers_lock);
>>    	if (buffer_count > reserve_count ||
>>    	    (in_atomic && !list_empty(&aa_global_buffers))) {
>>    		aa_buf = list_first_entry(&aa_global_buffers, union aa_buffer,
>> @@ -1655,6 +1692,7 @@ char *aa_get_buffer(bool in_atomic)
>>    	if (!aa_buf) {
>>    		if (try_again) {
>>    			try_again = false;
>> +			spin_lock(&aa_buffers_lock);
>>    			goto retry;
>>    		}
>>    		pr_warn_once("AppArmor: Failed to allocate a memory buffer.\n");
>> @@ -1666,15 +1704,32 @@ char *aa_get_buffer(bool in_atomic)
>>    void aa_put_buffer(char *buf)
>>    {
>>    	union aa_buffer *aa_buf;
>> +	struct aa_local_cache *cache;
>>    
>>    	if (!buf)
>>    		return;
>>    	aa_buf = container_of(buf, union aa_buffer, buffer[0]);
>>    
>> -	spin_lock(&aa_buffers_lock);
>> -	list_add(&aa_buf->list, &aa_global_buffers);
>> -	buffer_count++;
>> -	spin_unlock(&aa_buffers_lock);
>> +	cache = get_cpu_ptr(&aa_local_buffers);
>> +	if (!cache->hold) {
>> +		put_cpu_ptr(&aa_local_buffers);
>> +		if (spin_trylock(&aa_buffers_lock)) {
>> +			list_add(&aa_buf->list, &aa_global_buffers);
>> +			buffer_count++;
> 
> Given !hold and trylock, right time to drain the perpcu cache?
> 

yes, hold is a count of how long (or in this case, a count of how many
times) to allocate from the percpu cache before trying to return the
buffer to the global buffer pool. When the time/count hits zero, it's
time to try to return it.

If we succeed with the trylock, then we took the global buffer pool
lock without contention and we can add the buffer back in.
As for the other cases

hold == 0 and fail to grab the lock
- contention is recorded and we add the buffer back to the percpu cache

hold > 0
- decrease hold and add back to the percpu cache

Since we never try to grab the spinlock if hold > 0, the lock variations
do not need to be considered in that case.
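The cases enumerated above can be condensed into a small userspace model of the put-path decision (a sketch of the patch's logic; lock_uncontended stands in for spin_trylock() succeeding):

```c
#include <assert.h>
#include <stdbool.h>

/* Userspace model of the aa_put_buffer() decision: with no hold
 * time left, try the global lock; on success the buffer goes back
 * to the global pool and contention decays, otherwise contention
 * is bumped and the buffer stays on the per-CPU list. */
enum put_dest { TO_GLOBAL_POOL, TO_PERCPU_CACHE };

struct cache_model {
	unsigned int contention;
	unsigned int hold;
};

static enum put_dest put_buffer_dest(struct cache_model *cache,
				     bool lock_uncontended)
{
	if (!cache->hold) {
		if (lock_uncontended) {
			if (cache->contention)
				cache->contention--;	/* uncontended: decay */
			return TO_GLOBAL_POOL;
		}
		/* contended: back off harder (hold grows 8, 64, 512, ...) */
		cache->contention += 3;
		if (cache->contention > 9)
			cache->contention = 9;
		cache->hold += 1u << cache->contention;
	}
	return TO_PERCPU_CACHE;		/* hold > 0: stay on percpu list */
}
```

Note that hold itself is consumed on the get path, not here, matching the patch.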

>> +			spin_unlock(&aa_buffers_lock);
>> +			cache = get_cpu_ptr(&aa_local_buffers);
>> +			if (cache->contention)
>> +				cache->contention--;
>> +			put_cpu_ptr(&aa_local_buffers);
>> +			return;
>> +		}
>> +		cache = get_cpu_ptr(&aa_local_buffers);
>> +		update_contention(cache);
>> +	}
>> +
>> +	/* cache in percpu list */
>> +	list_add(&aa_buf->list, &cache->head);
>> +	put_cpu_ptr(&aa_local_buffers);
>>    }
>>    
>>    /*
>> @@ -1716,6 +1771,15 @@ static int __init alloc_buffers(void)
>>    	union aa_buffer *aa_buf;
>>    	int i, num;
>>    
>> +	/*
>> +	 * per cpu set of cached allocated buffers used to help reduce
>> +	 * lock contention
>> +	 */
>> +	for_each_possible_cpu(i) {
>> +		per_cpu(aa_local_buffers, i).contention = 0;
>> +		per_cpu(aa_local_buffers, i).hold = 0;
>> +		INIT_LIST_HEAD(&per_cpu(aa_local_buffers, i).head);
>> +	}
>>    	/*
>>    	 * A function may require two buffers at once. Usually the buffers are
>>    	 * used for a short period of time and are shared. On UP kernel buffers
>> -- 
>> 2.34.1



* Re: apparmor: global buffers spin lock may get contended
  2022-10-28  9:34 ` John Johansen
@ 2022-10-31  3:52   ` Sergey Senozhatsky
  2022-10-31  3:55     ` John Johansen
  0 siblings, 1 reply; 7+ messages in thread
From: Sergey Senozhatsky @ 2022-10-31  3:52 UTC (permalink / raw)
  To: John Johansen
  Cc: Sergey Senozhatsky, Sebastian Andrzej Siewior, Peter Zijlstra,
	Tomasz Figa, linux-kernel, linux-security-module

On (22/10/28 02:34), John Johansen wrote:
> From d026988196fdbda7234fb87bc3e4aea22edcbaf9 Mon Sep 17 00:00:00 2001
> From: John Johansen <john.johansen@canonical.com>
> Date: Tue, 25 Oct 2022 01:18:41 -0700
> Subject: [PATCH] apparmor: cache buffers on percpu list if there is lock
>  contention
> 
> On a heavily loaded machine there can be lock contention on the
> global buffers lock. Add a percpu list to cache buffers on when
> lock contention is encountered.
> 
> When allocating buffers attempt to use cached buffers first,
> before taking the global buffers lock. When freeing buffers
> try to put them back to the global list but if contention is
> encountered, put the buffer on the percpu list.
> 
> The length of time a buffer is held on the percpu list is dynamically
> adjusted based on lock contention.  The amount of hold time is rapidly
> increased and slowly ramped down.
> 
> Signed-off-by: John Johansen <john.johansen@canonical.com>

Reported-by: Sergey Senozhatsky <senozhatsky@chromium.org>


* Re: apparmor: global buffers spin lock may get contended
  2022-10-31  3:52   ` Sergey Senozhatsky
@ 2022-10-31  3:55     ` John Johansen
  2022-10-31  4:04       ` Sergey Senozhatsky
  0 siblings, 1 reply; 7+ messages in thread
From: John Johansen @ 2022-10-31  3:55 UTC (permalink / raw)
  To: Sergey Senozhatsky
  Cc: Sebastian Andrzej Siewior, Peter Zijlstra, Tomasz Figa,
	linux-kernel, linux-security-module

On 10/30/22 20:52, Sergey Senozhatsky wrote:
> On (22/10/28 02:34), John Johansen wrote:
>>  From d026988196fdbda7234fb87bc3e4aea22edcbaf9 Mon Sep 17 00:00:00 2001
>> From: John Johansen <john.johansen@canonical.com>
>> Date: Tue, 25 Oct 2022 01:18:41 -0700
>> Subject: [PATCH] apparmor: cache buffers on percpu list if there is lock
>>   contention
>>
>> On a heavily loaded machine there can be lock contention on the
>> global buffers lock. Add a percpu list to cache buffers on when
>> lock contention is encountered.
>>
>> When allocating buffers attempt to use cached buffers first,
>> before taking the global buffers lock. When freeing buffers
>> try to put them back to the global list but if contention is
>> encountered, put the buffer on the percpu list.
>>
>> The length of time a buffer is held on the percpu list is dynamically
>> adjusted based on lock contention.  The amount of hold time is rapidly
>> increased and slowly ramped down.
>>
>> Signed-off-by: John Johansen <john.johansen@canonical.com>
> 
> Reported-by: Sergey Senozhatsky <senozhatsky@chromium.org>

yep, thanks for catching that



* Re: apparmor: global buffers spin lock may get contended
  2022-10-31  3:55     ` John Johansen
@ 2022-10-31  4:04       ` Sergey Senozhatsky
  0 siblings, 0 replies; 7+ messages in thread
From: Sergey Senozhatsky @ 2022-10-31  4:04 UTC (permalink / raw)
  To: John Johansen
  Cc: Sergey Senozhatsky, Sebastian Andrzej Siewior, Peter Zijlstra,
	Tomasz Figa, linux-kernel, linux-security-module

On (22/10/30 20:55), John Johansen wrote:
> On 10/30/22 20:52, Sergey Senozhatsky wrote:
> > On (22/10/28 02:34), John Johansen wrote:
> > >  From d026988196fdbda7234fb87bc3e4aea22edcbaf9 Mon Sep 17 00:00:00 2001
> > > From: John Johansen <john.johansen@canonical.com>
> > > Date: Tue, 25 Oct 2022 01:18:41 -0700
> > > Subject: [PATCH] apparmor: cache buffers on percpu list if there is lock
> > >   contention
> > > 
> > > On a heavily loaded machine there can be lock contention on the
> > > global buffers lock. Add a percpu list to cache buffers on when
> > > lock contention is encountered.
> > > 
> > > When allocating buffers attempt to use cached buffers first,
> > > before taking the global buffers lock. When freeing buffers
> > > try to put them back to the global list but if contention is
> > > encountered, put the buffer on the percpu list.
> > > 
> > > The length of time a buffer is held on the percpu list is dynamically
> > > adjusted based on lock contention.  The amount of hold time is rapidly
> > > increased and slowly ramped down.
> > > 
> > > Signed-off-by: John Johansen <john.johansen@canonical.com>
> > 
> > Reported-by: Sergey Senozhatsky <senozhatsky@chromium.org>
> 
> yep, thanks for catching that

Thanks for the patch! Unfortunately it'll be a bit difficult to test
it right now; I'll probably have to wait until corp pushes a new kernel
(with the patch) to the build boxes.


end of thread, other threads:[~2022-10-31  4:04 UTC | newest]

Thread overview: 7+ messages
2021-07-13 13:19 apparmor: global buffers spin lock may get contended Sergey Senozhatsky
2021-08-15  9:47 ` John Johansen
2022-10-28  9:34 ` John Johansen
2022-10-31  3:52   ` Sergey Senozhatsky
2022-10-31  3:55     ` John Johansen
2022-10-31  4:04       ` Sergey Senozhatsky
     [not found] ` <20221030013028.3557-1-hdanton@sina.com>
2022-10-30  6:32   ` John Johansen
