Netdev Archive on lore.kernel.org
* [PATCH bpf-next 0/6] Allow updating sockmap / sockhash from BPF
@ 2020-08-19  9:24 Lorenz Bauer
  2020-08-19  9:24 ` [PATCH bpf-next 1/6] net: sk_msg: simplify sk_psock initialization Lorenz Bauer
                   ` (5 more replies)
  0 siblings, 6 replies; 25+ messages in thread
From: Lorenz Bauer @ 2020-08-19  9:24 UTC (permalink / raw)
  To: jakub, john.fastabend, Alexei Starovoitov, Daniel Borkmann
  Cc: kernel-team, Lorenz Bauer, netdev, bpf

We're currently building a control plane for our BPF socket dispatch
work. As part of that, we have a need to create a copy of an existing
sockhash, to allow us to change the keys. I previously proposed allowing
privileged userspace to look up sockets, which doesn't work due to
security concerns (see [1]).

In follow up discussions during BPF office hours we identified bpf_iter
as a possible solution: instead of accessing sockets from user space
we can iterate the source sockhash, and insert the values into a new
map. Enabling this requires two pieces: the ability to iterate
sockmap and sockhash, as well as being able to call map_update_elem
from BPF.

This patch set implements the latter: it's now possible to update
sockmap from BPF context. As a next step, we can implement bpf_iter
for sockmap.
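As a rough conceptual model of the copy described above (plain C, with fixed arrays standing in for sockhash/sockmap and integers standing in for sockets — no kernel or libbpf APIs, and all names here are invented for illustration), the operation boils down to iterating the source map and re-inserting each live entry into the destination:

```c
#include <assert.h>

/* Toy stand-ins: a "map" is a fixed array of slots, a "socket" is an
 * int. The real series iterates a sockhash via bpf_iter and calls
 * bpf_map_update_elem from BPF program context. */
#define MAP_SIZE 4
#define EMPTY (-1)

struct toy_map {
	int slots[MAP_SIZE];
};

static void toy_map_init(struct toy_map *m)
{
	for (int i = 0; i < MAP_SIZE; i++)
		m->slots[i] = EMPTY;
}

static int toy_map_update_elem(struct toy_map *m, int key, int sock)
{
	if (key < 0 || key >= MAP_SIZE)
		return -1;
	m->slots[key] = sock;
	return 0;
}

/* Iterate the source map and insert every live entry into dst. A real
 * control plane could rewrite the key here; we keep it unchanged for
 * brevity. */
static int toy_map_copy(const struct toy_map *src, struct toy_map *dst)
{
	for (int key = 0; key < MAP_SIZE; key++) {
		if (src->slots[key] == EMPTY)
			continue;
		if (toy_map_update_elem(dst, key, src->slots[key]))
			return -1;
	}
	return 0;
}
```

The selftest in patch 6 performs the same iterate-and-insert step with real sockets.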

The patches are organised as follows:
* Patches 1-3 are cleanups and simplifications, to make reasoning
  about the subsequent patches easier.
* Patch 4 makes the verifier treat ARG_PTR_TO_MAP_VALUE as
  ARG_PTR_TO_SOCKET for sockmap / sockhash, so that sockets can be
  passed as map values.
* Patch 5 enables map_update_elem from BPF. There is some locking
  here that I'm not entirely sure about. Feedback much appreciated.
* Patch 6 adds a selftest.

1: https://lore.kernel.org/bpf/20200310174711.7490-1-lmb@cloudflare.com/

Lorenz Bauer (6):
  net: sk_msg: simplify sk_psock initialization
  bpf: sockmap: merge sockmap and sockhash update functions
  bpf: sockmap: call sock_map_update_elem directly
  bpf: override the meaning of ARG_PTR_TO_MAP_VALUE for sockmap and
    sockhash
  bpf: sockmap: allow update from BPF
  selftests: bpf: test sockmap update from BPF

 include/linux/bpf.h                           |   7 +
 include/linux/skmsg.h                         |  17 ---
 kernel/bpf/syscall.c                          |   5 +-
 kernel/bpf/verifier.c                         |  46 +++++-
 net/core/skmsg.c                              |  34 ++++-
 net/core/sock_map.c                           | 137 ++++++++----------
 net/ipv4/tcp_bpf.c                            |  13 +-
 net/ipv4/udp_bpf.c                            |   9 +-
 .../selftests/bpf/prog_tests/sockmap_basic.c  |  76 ++++++++++
 .../selftests/bpf/progs/test_sockmap_copy.c   |  48 ++++++
 10 files changed, 274 insertions(+), 118 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/progs/test_sockmap_copy.c

-- 
2.25.1


^ permalink raw reply	[flat|nested] 25+ messages in thread

* [PATCH bpf-next 1/6] net: sk_msg: simplify sk_psock initialization
  2020-08-19  9:24 [PATCH bpf-next 0/6] Allow updating sockmap / sockhash from BPF Lorenz Bauer
@ 2020-08-19  9:24 ` Lorenz Bauer
  2020-08-19 20:05   ` John Fastabend
  2020-08-19  9:24 ` [PATCH bpf-next 2/6] bpf: sockmap: merge sockmap and sockhash update functions Lorenz Bauer
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 25+ messages in thread
From: Lorenz Bauer @ 2020-08-19  9:24 UTC (permalink / raw)
  To: jakub, john.fastabend, Daniel Borkmann, Lorenz Bauer,
	David S. Miller, Jakub Kicinski, Eric Dumazet, Alexey Kuznetsov,
	Hideaki YOSHIFUJI, Alexei Starovoitov
  Cc: kernel-team, netdev, bpf, linux-kernel

Initializing psock->sk_proto and other saved callbacks is only
done in sk_psock_update_proto, after sk_psock_init has returned.
The logic for this is difficult to follow and needlessly complex.

Instead, initialize psock->sk_proto whenever we allocate a new
psock. Additionally, assert the following invariants:

* The SK has no ULP: ULP does its own finagling of sk->sk_prot
* sk_user_data is unused: we need it to store sk_psock

Protect our access to sk_user_data with sk_callback_lock, which
is what other users such as reuseport arrays do.

The result is that an sk_psock is always fully initialized, and
that psock->sk_proto is always the "original" struct proto.
The latter allows us to use psock->sk_proto when initializing
IPv6 TCP / UDP callbacks for sockmap.
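The switch from returning NULL to returning distinct error pointers (visible in the sk_psock_init hunk below) follows the kernel's ERR_PTR convention. As illustration only — these are simplified userspace re-implementations of the macros from include/linux/err.h, not the kernel's exact definitions — the caller-side pattern looks like this:

```c
#include <errno.h>
#include <stdint.h>

/* Simplified userspace sketch of the kernel's ERR_PTR helpers: small
 * negative errno values are encoded as pointers into the top page of
 * the address space, which no valid allocation can occupy. */
#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error)
{
	return (void *)(intptr_t)error;
}

static inline long PTR_ERR(const void *ptr)
{
	return (long)(intptr_t)ptr;
}

static inline int IS_ERR(const void *ptr)
{
	return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}

/* Caller pattern after this patch: sk_psock_init() no longer returns
 * NULL on failure, so callers propagate the specific error (-EINVAL,
 * -EBUSY, -ENOMEM) instead of hardcoding -ENOMEM. */
static int caller(void *psock)
{
	if (IS_ERR(psock))
		return (int)PTR_ERR(psock);
	return 0; /* psock is a valid pointer */
}
```

This is why the sock_map.c hunks change `if (!psock)` checks into `if (IS_ERR(psock))` and return `PTR_ERR(psock)`.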

Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
---
 include/linux/skmsg.h | 17 -----------------
 net/core/skmsg.c      | 34 ++++++++++++++++++++++++++++------
 net/core/sock_map.c   | 14 ++++----------
 net/ipv4/tcp_bpf.c    | 13 +++++--------
 net/ipv4/udp_bpf.c    |  9 ++++-----
 5 files changed, 41 insertions(+), 46 deletions(-)

diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
index 1e9ed840b9fc..3119928fc103 100644
--- a/include/linux/skmsg.h
+++ b/include/linux/skmsg.h
@@ -340,23 +340,6 @@ static inline void sk_psock_update_proto(struct sock *sk,
 					 struct sk_psock *psock,
 					 struct proto *ops)
 {
-	/* Initialize saved callbacks and original proto only once, since this
-	 * function may be called multiple times for a psock, e.g. when
-	 * psock->progs.msg_parser is updated.
-	 *
-	 * Since we've not installed the new proto, psock is not yet in use and
-	 * we can initialize it without synchronization.
-	 */
-	if (!psock->sk_proto) {
-		struct proto *orig = READ_ONCE(sk->sk_prot);
-
-		psock->saved_unhash = orig->unhash;
-		psock->saved_close = orig->close;
-		psock->saved_write_space = sk->sk_write_space;
-
-		psock->sk_proto = orig;
-	}
-
 	/* Pairs with lockless read in sk_clone_lock() */
 	WRITE_ONCE(sk->sk_prot, ops);
 }
diff --git a/net/core/skmsg.c b/net/core/skmsg.c
index 6a32a1fd34f8..1c81caf9630f 100644
--- a/net/core/skmsg.c
+++ b/net/core/skmsg.c
@@ -494,14 +494,34 @@ static void sk_psock_backlog(struct work_struct *work)
 
 struct sk_psock *sk_psock_init(struct sock *sk, int node)
 {
-	struct sk_psock *psock = kzalloc_node(sizeof(*psock),
-					      GFP_ATOMIC | __GFP_NOWARN,
-					      node);
-	if (!psock)
-		return NULL;
+	struct sk_psock *psock;
+	struct proto *prot;
 
+	write_lock_bh(&sk->sk_callback_lock);
+
+	if (inet_csk_has_ulp(sk)) {
+		psock = ERR_PTR(-EINVAL);
+		goto out;
+	}
+
+	if (sk->sk_user_data) {
+		psock = ERR_PTR(-EBUSY);
+		goto out;
+	}
+
+	psock = kzalloc_node(sizeof(*psock), GFP_ATOMIC | __GFP_NOWARN, node);
+	if (!psock) {
+		psock = ERR_PTR(-ENOMEM);
+		goto out;
+	}
+
+	prot = READ_ONCE(sk->sk_prot);
 	psock->sk = sk;
-	psock->eval =  __SK_NONE;
+	psock->eval = __SK_NONE;
+	psock->sk_proto = prot;
+	psock->saved_unhash = prot->unhash;
+	psock->saved_close = prot->close;
+	psock->saved_write_space = sk->sk_write_space;
 
 	INIT_LIST_HEAD(&psock->link);
 	spin_lock_init(&psock->link_lock);
@@ -516,6 +536,8 @@ struct sk_psock *sk_psock_init(struct sock *sk, int node)
 	rcu_assign_sk_user_data_nocopy(sk, psock);
 	sock_hold(sk);
 
+out:
+	write_unlock_bh(&sk->sk_callback_lock);
 	return psock;
 }
 EXPORT_SYMBOL_GPL(sk_psock_init);
diff --git a/net/core/sock_map.c b/net/core/sock_map.c
index 119f52a99dc1..abe4bac40db9 100644
--- a/net/core/sock_map.c
+++ b/net/core/sock_map.c
@@ -184,8 +184,6 @@ static int sock_map_init_proto(struct sock *sk, struct sk_psock *psock)
 {
 	struct proto *prot;
 
-	sock_owned_by_me(sk);
-
 	switch (sk->sk_type) {
 	case SOCK_STREAM:
 		prot = tcp_bpf_get_proto(sk, psock);
@@ -272,8 +270,8 @@ static int sock_map_link(struct bpf_map *map, struct sk_psock_progs *progs,
 		}
 	} else {
 		psock = sk_psock_init(sk, map->numa_node);
-		if (!psock) {
-			ret = -ENOMEM;
+		if (IS_ERR(psock)) {
+			ret = PTR_ERR(psock);
 			goto out_progs;
 		}
 	}
@@ -322,8 +320,8 @@ static int sock_map_link_no_progs(struct bpf_map *map, struct sock *sk)
 
 	if (!psock) {
 		psock = sk_psock_init(sk, map->numa_node);
-		if (!psock)
-			return -ENOMEM;
+		if (IS_ERR(psock))
+			return PTR_ERR(psock);
 	}
 
 	ret = sock_map_init_proto(sk, psock);
@@ -478,8 +476,6 @@ static int sock_map_update_common(struct bpf_map *map, u32 idx,
 		return -EINVAL;
 	if (unlikely(idx >= map->max_entries))
 		return -E2BIG;
-	if (inet_csk_has_ulp(sk))
-		return -EINVAL;
 
 	link = sk_psock_init_link();
 	if (!link)
@@ -855,8 +851,6 @@ static int sock_hash_update_common(struct bpf_map *map, void *key,
 	WARN_ON_ONCE(!rcu_read_lock_held());
 	if (unlikely(flags > BPF_EXIST))
 		return -EINVAL;
-	if (inet_csk_has_ulp(sk))
-		return -EINVAL;
 
 	link = sk_psock_init_link();
 	if (!link)
diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
index 7aa68f4aae6c..37f4cb2bba5c 100644
--- a/net/ipv4/tcp_bpf.c
+++ b/net/ipv4/tcp_bpf.c
@@ -567,10 +567,9 @@ static void tcp_bpf_rebuild_protos(struct proto prot[TCP_BPF_NUM_CFGS],
 	prot[TCP_BPF_TX].sendpage		= tcp_bpf_sendpage;
 }
 
-static void tcp_bpf_check_v6_needs_rebuild(struct sock *sk, struct proto *ops)
+static void tcp_bpf_check_v6_needs_rebuild(struct proto *ops)
 {
-	if (sk->sk_family == AF_INET6 &&
-	    unlikely(ops != smp_load_acquire(&tcpv6_prot_saved))) {
+	if (unlikely(ops != smp_load_acquire(&tcpv6_prot_saved))) {
 		spin_lock_bh(&tcpv6_prot_lock);
 		if (likely(ops != tcpv6_prot_saved)) {
 			tcp_bpf_rebuild_protos(tcp_bpf_prots[TCP_BPF_IPV6], ops);
@@ -603,13 +602,11 @@ struct proto *tcp_bpf_get_proto(struct sock *sk, struct sk_psock *psock)
 	int family = sk->sk_family == AF_INET6 ? TCP_BPF_IPV6 : TCP_BPF_IPV4;
 	int config = psock->progs.msg_parser   ? TCP_BPF_TX   : TCP_BPF_BASE;
 
-	if (!psock->sk_proto) {
-		struct proto *ops = READ_ONCE(sk->sk_prot);
-
-		if (tcp_bpf_assert_proto_ops(ops))
+	if (sk->sk_family == AF_INET6) {
+		if (tcp_bpf_assert_proto_ops(psock->sk_proto))
 			return ERR_PTR(-EINVAL);
 
-		tcp_bpf_check_v6_needs_rebuild(sk, ops);
+		tcp_bpf_check_v6_needs_rebuild(psock->sk_proto);
 	}
 
 	return &tcp_bpf_prots[family][config];
diff --git a/net/ipv4/udp_bpf.c b/net/ipv4/udp_bpf.c
index eddd973e6575..7a94791efc1a 100644
--- a/net/ipv4/udp_bpf.c
+++ b/net/ipv4/udp_bpf.c
@@ -22,10 +22,9 @@ static void udp_bpf_rebuild_protos(struct proto *prot, const struct proto *base)
 	prot->close  = sock_map_close;
 }
 
-static void udp_bpf_check_v6_needs_rebuild(struct sock *sk, struct proto *ops)
+static void udp_bpf_check_v6_needs_rebuild(struct proto *ops)
 {
-	if (sk->sk_family == AF_INET6 &&
-	    unlikely(ops != smp_load_acquire(&udpv6_prot_saved))) {
+	if (unlikely(ops != smp_load_acquire(&udpv6_prot_saved))) {
 		spin_lock_bh(&udpv6_prot_lock);
 		if (likely(ops != udpv6_prot_saved)) {
 			udp_bpf_rebuild_protos(&udp_bpf_prots[UDP_BPF_IPV6], ops);
@@ -46,8 +45,8 @@ struct proto *udp_bpf_get_proto(struct sock *sk, struct sk_psock *psock)
 {
 	int family = sk->sk_family == AF_INET ? UDP_BPF_IPV4 : UDP_BPF_IPV6;
 
-	if (!psock->sk_proto)
-		udp_bpf_check_v6_needs_rebuild(sk, READ_ONCE(sk->sk_prot));
+	if (sk->sk_family == AF_INET6)
+		udp_bpf_check_v6_needs_rebuild(psock->sk_proto);
 
 	return &udp_bpf_prots[family];
 }
-- 
2.25.1



* [PATCH bpf-next 2/6] bpf: sockmap: merge sockmap and sockhash update functions
  2020-08-19  9:24 [PATCH bpf-next 0/6] Allow updating sockmap / sockhash from BPF Lorenz Bauer
  2020-08-19  9:24 ` [PATCH bpf-next 1/6] net: sk_msg: simplify sk_psock initialization Lorenz Bauer
@ 2020-08-19  9:24 ` Lorenz Bauer
  2020-08-19 15:48   ` Jakub Kicinski
                     ` (2 more replies)
  2020-08-19  9:24 ` [PATCH bpf-next 3/6] bpf: sockmap: call sock_map_update_elem directly Lorenz Bauer
                   ` (3 subsequent siblings)
  5 siblings, 3 replies; 25+ messages in thread
From: Lorenz Bauer @ 2020-08-19  9:24 UTC (permalink / raw)
  To: jakub, john.fastabend, Daniel Borkmann, Lorenz Bauer,
	David S. Miller, Jakub Kicinski, Alexei Starovoitov
  Cc: kernel-team, netdev, bpf, linux-kernel

Merge the two very similar functions sock_map_update_elem and
sock_hash_update_elem into one.

Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
---
 net/core/sock_map.c | 53 ++++++++-------------------------------------
 1 file changed, 9 insertions(+), 44 deletions(-)

diff --git a/net/core/sock_map.c b/net/core/sock_map.c
index abe4bac40db9..f464a0ebc871 100644
--- a/net/core/sock_map.c
+++ b/net/core/sock_map.c
@@ -559,10 +559,12 @@ static bool sock_map_sk_state_allowed(const struct sock *sk)
 	return false;
 }
 
-static int sock_map_update_elem(struct bpf_map *map, void *key,
-				void *value, u64 flags)
+static int sock_hash_update_common(struct bpf_map *map, void *key,
+				   struct sock *sk, u64 flags);
+
+int sock_map_update_elem(struct bpf_map *map, void *key,
+			 void *value, u64 flags)
 {
-	u32 idx = *(u32 *)key;
 	struct socket *sock;
 	struct sock *sk;
 	int ret;
@@ -591,8 +593,10 @@ static int sock_map_update_elem(struct bpf_map *map, void *key,
 	sock_map_sk_acquire(sk);
 	if (!sock_map_sk_state_allowed(sk))
 		ret = -EOPNOTSUPP;
+	else if (map->map_type == BPF_MAP_TYPE_SOCKMAP)
+		ret = sock_map_update_common(map, *(u32 *)key, sk, flags);
 	else
-		ret = sock_map_update_common(map, idx, sk, flags);
+		ret = sock_hash_update_common(map, key, sk, flags);
 	sock_map_sk_release(sk);
 out:
 	fput(sock->file);
@@ -909,45 +913,6 @@ static int sock_hash_update_common(struct bpf_map *map, void *key,
 	return ret;
 }
 
-static int sock_hash_update_elem(struct bpf_map *map, void *key,
-				 void *value, u64 flags)
-{
-	struct socket *sock;
-	struct sock *sk;
-	int ret;
-	u64 ufd;
-
-	if (map->value_size == sizeof(u64))
-		ufd = *(u64 *)value;
-	else
-		ufd = *(u32 *)value;
-	if (ufd > S32_MAX)
-		return -EINVAL;
-
-	sock = sockfd_lookup(ufd, &ret);
-	if (!sock)
-		return ret;
-	sk = sock->sk;
-	if (!sk) {
-		ret = -EINVAL;
-		goto out;
-	}
-	if (!sock_map_sk_is_suitable(sk)) {
-		ret = -EOPNOTSUPP;
-		goto out;
-	}
-
-	sock_map_sk_acquire(sk);
-	if (!sock_map_sk_state_allowed(sk))
-		ret = -EOPNOTSUPP;
-	else
-		ret = sock_hash_update_common(map, key, sk, flags);
-	sock_map_sk_release(sk);
-out:
-	fput(sock->file);
-	return ret;
-}
-
 static int sock_hash_get_next_key(struct bpf_map *map, void *key,
 				  void *key_next)
 {
@@ -1216,7 +1181,7 @@ const struct bpf_map_ops sock_hash_ops = {
 	.map_alloc		= sock_hash_alloc,
 	.map_free		= sock_hash_free,
 	.map_get_next_key	= sock_hash_get_next_key,
-	.map_update_elem	= sock_hash_update_elem,
+	.map_update_elem	= sock_map_update_elem,
 	.map_delete_elem	= sock_hash_delete_elem,
 	.map_lookup_elem	= sock_hash_lookup,
 	.map_lookup_elem_sys_only = sock_hash_lookup_sys,
-- 
2.25.1



* [PATCH bpf-next 3/6] bpf: sockmap: call sock_map_update_elem directly
  2020-08-19  9:24 [PATCH bpf-next 0/6] Allow updating sockmap / sockhash from BPF Lorenz Bauer
  2020-08-19  9:24 ` [PATCH bpf-next 1/6] net: sk_msg: simplify sk_psock initialization Lorenz Bauer
  2020-08-19  9:24 ` [PATCH bpf-next 2/6] bpf: sockmap: merge sockmap and sockhash update functions Lorenz Bauer
@ 2020-08-19  9:24 ` Lorenz Bauer
  2020-08-19 19:15   ` Yonghong Song
  2020-08-19 20:29   ` John Fastabend
  2020-08-19  9:24 ` [PATCH bpf-next 4/6] bpf: override the meaning of ARG_PTR_TO_MAP_VALUE for sockmap and sockhash Lorenz Bauer
                   ` (2 subsequent siblings)
  5 siblings, 2 replies; 25+ messages in thread
From: Lorenz Bauer @ 2020-08-19  9:24 UTC (permalink / raw)
  To: jakub, john.fastabend, Alexei Starovoitov, Daniel Borkmann,
	Lorenz Bauer, David S. Miller, Jakub Kicinski
  Cc: kernel-team, netdev, bpf, linux-kernel

Don't go via map->ops to call sock_map_update_elem, since we know
what function to call in bpf_map_update_value. Since
check_map_func_compatibility doesn't allow calling
BPF_FUNC_map_update_elem from BPF context, we can remove
ops->map_update_elem and rename the function to
sock_map_update_elem_sys.

Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
---
 include/linux/bpf.h  | 7 +++++++
 kernel/bpf/syscall.c | 5 +++--
 net/core/sock_map.c  | 6 ++----
 3 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index cef4ef0d2b4e..cf3416d1b8c2 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1635,6 +1635,7 @@ int sock_map_prog_update(struct bpf_map *map, struct bpf_prog *prog,
 			 struct bpf_prog *old, u32 which);
 int sock_map_get_from_fd(const union bpf_attr *attr, struct bpf_prog *prog);
 int sock_map_prog_detach(const union bpf_attr *attr, enum bpf_prog_type ptype);
+int sock_map_update_elem_sys(struct bpf_map *map, void *key, void *value, u64 flags);
 void sock_map_unhash(struct sock *sk);
 void sock_map_close(struct sock *sk, long timeout);
 #else
@@ -1656,6 +1657,12 @@ static inline int sock_map_prog_detach(const union bpf_attr *attr,
 {
 	return -EOPNOTSUPP;
 }
+
+static inline int sock_map_update_elem_sys(struct bpf_map *map, void *key, void *value,
+					   u64 flags)
+{
+	return -EOPNOTSUPP;
+}
 #endif /* CONFIG_BPF_STREAM_PARSER */
 
 #if defined(CONFIG_INET) && defined(CONFIG_BPF_SYSCALL)
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 2f343ce15747..5867cf615a3c 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -157,10 +157,11 @@ static int bpf_map_update_value(struct bpf_map *map, struct fd f, void *key,
 	if (bpf_map_is_dev_bound(map)) {
 		return bpf_map_offload_update_elem(map, key, value, flags);
 	} else if (map->map_type == BPF_MAP_TYPE_CPUMAP ||
-		   map->map_type == BPF_MAP_TYPE_SOCKHASH ||
-		   map->map_type == BPF_MAP_TYPE_SOCKMAP ||
 		   map->map_type == BPF_MAP_TYPE_STRUCT_OPS) {
 		return map->ops->map_update_elem(map, key, value, flags);
+	} else if (map->map_type == BPF_MAP_TYPE_SOCKHASH ||
+		   map->map_type == BPF_MAP_TYPE_SOCKMAP) {
+		return sock_map_update_elem_sys(map, key, value, flags);
 	} else if (IS_FD_PROG_ARRAY(map)) {
 		return bpf_fd_array_map_update_elem(map, f.file, key, value,
 						    flags);
diff --git a/net/core/sock_map.c b/net/core/sock_map.c
index f464a0ebc871..018367fb889f 100644
--- a/net/core/sock_map.c
+++ b/net/core/sock_map.c
@@ -562,8 +562,8 @@ static bool sock_map_sk_state_allowed(const struct sock *sk)
 static int sock_hash_update_common(struct bpf_map *map, void *key,
 				   struct sock *sk, u64 flags);
 
-int sock_map_update_elem(struct bpf_map *map, void *key,
-			 void *value, u64 flags)
+int sock_map_update_elem_sys(struct bpf_map *map, void *key,
+			     void *value, u64 flags)
 {
 	struct socket *sock;
 	struct sock *sk;
@@ -687,7 +687,6 @@ const struct bpf_map_ops sock_map_ops = {
 	.map_free		= sock_map_free,
 	.map_get_next_key	= sock_map_get_next_key,
 	.map_lookup_elem_sys_only = sock_map_lookup_sys,
-	.map_update_elem	= sock_map_update_elem,
 	.map_delete_elem	= sock_map_delete_elem,
 	.map_lookup_elem	= sock_map_lookup,
 	.map_release_uref	= sock_map_release_progs,
@@ -1181,7 +1180,6 @@ const struct bpf_map_ops sock_hash_ops = {
 	.map_alloc		= sock_hash_alloc,
 	.map_free		= sock_hash_free,
 	.map_get_next_key	= sock_hash_get_next_key,
-	.map_update_elem	= sock_map_update_elem,
 	.map_delete_elem	= sock_hash_delete_elem,
 	.map_lookup_elem	= sock_hash_lookup,
 	.map_lookup_elem_sys_only = sock_hash_lookup_sys,
-- 
2.25.1



* [PATCH bpf-next 4/6] bpf: override the meaning of ARG_PTR_TO_MAP_VALUE for sockmap and sockhash
  2020-08-19  9:24 [PATCH bpf-next 0/6] Allow updating sockmap / sockhash from BPF Lorenz Bauer
                   ` (2 preceding siblings ...)
  2020-08-19  9:24 ` [PATCH bpf-next 3/6] bpf: sockmap: call sock_map_update_elem directly Lorenz Bauer
@ 2020-08-19  9:24 ` Lorenz Bauer
  2020-08-19 20:13   ` Yonghong Song
  2020-08-19 20:51   ` John Fastabend
  2020-08-19  9:24 ` [PATCH bpf-next 5/6] bpf: sockmap: allow update from BPF Lorenz Bauer
  2020-08-19  9:24 ` [PATCH bpf-next 6/6] selftests: bpf: test sockmap " Lorenz Bauer
  5 siblings, 2 replies; 25+ messages in thread
From: Lorenz Bauer @ 2020-08-19  9:24 UTC (permalink / raw)
  To: jakub, john.fastabend, Alexei Starovoitov, Daniel Borkmann
  Cc: kernel-team, Lorenz Bauer, netdev, bpf, linux-kernel

The verifier assumes that map values are simple blobs of memory, and
therefore treats ARG_PTR_TO_MAP_VALUE, etc. as such. However, there are
map types where this isn't true. For example, sockmap and sockhash store
sockets. In general this isn't a big problem: we can just
write helpers that explicitly request PTR_TO_SOCKET instead of
ARG_PTR_TO_MAP_VALUE.

The one exception is the standard map helpers like map_update_elem,
map_lookup_elem, etc. Here it would be nice if we could overload the
function prototype for different kinds of maps. Unfortunately, this
isn't entirely straightforward: we only know the type of the map once
we have resolved meta->map_ptr in check_func_arg. This means we can't
swap out the prototype in check_helper_call until we're halfway
through the function.

Instead, modify check_func_arg to treat ARG_PTR_TO_MAP_VALUE* as
meaning "the native type for the map" instead of "pointer to memory"
for sockmap and sockhash. This means we don't have to modify the
function prototype at all.

Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
---
 kernel/bpf/verifier.c | 40 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 40 insertions(+)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index b6ccfce3bf4c..47f9b94bb9d4 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -3872,6 +3872,38 @@ static int int_ptr_type_to_size(enum bpf_arg_type type)
 	return -EINVAL;
 }
 
+static int override_map_arg_type(struct bpf_verifier_env *env,
+				 const struct bpf_call_arg_meta *meta,
+				 enum bpf_arg_type *arg_type)
+{
+	if (!meta->map_ptr) {
+		/* kernel subsystem misconfigured verifier */
+		verbose(env, "invalid map_ptr to access map->type\n");
+		return -EACCES;
+	}
+
+	switch (meta->map_ptr->map_type) {
+	case BPF_MAP_TYPE_SOCKMAP:
+	case BPF_MAP_TYPE_SOCKHASH:
+		switch (*arg_type) {
+		case ARG_PTR_TO_MAP_VALUE:
+			*arg_type = ARG_PTR_TO_SOCKET;
+			break;
+		case ARG_PTR_TO_MAP_VALUE_OR_NULL:
+			*arg_type = ARG_PTR_TO_SOCKET_OR_NULL;
+			break;
+		default:
+			verbose(env, "invalid arg_type for sockmap/sockhash\n");
+			return -EINVAL;
+		}
+		break;
+
+	default:
+		break;
+	}
+	return 0;
+}
+
 static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
 			  struct bpf_call_arg_meta *meta,
 			  const struct bpf_func_proto *fn)
@@ -3904,6 +3936,14 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
 		return -EACCES;
 	}
 
+	if (arg_type == ARG_PTR_TO_MAP_VALUE ||
+	    arg_type == ARG_PTR_TO_UNINIT_MAP_VALUE ||
+	    arg_type == ARG_PTR_TO_MAP_VALUE_OR_NULL) {
+		err = override_map_arg_type(env, meta, &arg_type);
+		if (err)
+			return err;
+	}
+
 	if (arg_type == ARG_PTR_TO_MAP_KEY ||
 	    arg_type == ARG_PTR_TO_MAP_VALUE ||
 	    arg_type == ARG_PTR_TO_UNINIT_MAP_VALUE ||
-- 
2.25.1



* [PATCH bpf-next 5/6] bpf: sockmap: allow update from BPF
  2020-08-19  9:24 [PATCH bpf-next 0/6] Allow updating sockmap / sockhash from BPF Lorenz Bauer
                   ` (3 preceding siblings ...)
  2020-08-19  9:24 ` [PATCH bpf-next 4/6] bpf: override the meaning of ARG_PTR_TO_MAP_VALUE for sockmap and sockhash Lorenz Bauer
@ 2020-08-19  9:24 ` Lorenz Bauer
  2020-08-19 20:35   ` Yonghong Song
  2020-08-19 21:22   ` John Fastabend
  2020-08-19  9:24 ` [PATCH bpf-next 6/6] selftests: bpf: test sockmap " Lorenz Bauer
  5 siblings, 2 replies; 25+ messages in thread
From: Lorenz Bauer @ 2020-08-19  9:24 UTC (permalink / raw)
  To: jakub, john.fastabend, Alexei Starovoitov, Daniel Borkmann,
	Lorenz Bauer, David S. Miller, Jakub Kicinski
  Cc: kernel-team, netdev, bpf, linux-kernel

Allow calling bpf_map_update_elem on sockmap and sockhash from a BPF
context. The synchronization required for this is a bit fiddly: we
need to prevent the socket from changing its state while we add it
to the sockmap, since we rely on getting a callback via
sk_prot->unhash. However, we can't just lock_sock like in
sock_map_sk_acquire because that might sleep. So instead we disable
softirq processing and use bh_lock_sock to prevent further
modification.

Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
---
 kernel/bpf/verifier.c |  6 ++++--
 net/core/sock_map.c   | 24 ++++++++++++++++++++++++
 2 files changed, 28 insertions(+), 2 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 47f9b94bb9d4..421fccf18dea 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4254,7 +4254,8 @@ static int check_map_func_compatibility(struct bpf_verifier_env *env,
 		    func_id != BPF_FUNC_map_delete_elem &&
 		    func_id != BPF_FUNC_msg_redirect_map &&
 		    func_id != BPF_FUNC_sk_select_reuseport &&
-		    func_id != BPF_FUNC_map_lookup_elem)
+		    func_id != BPF_FUNC_map_lookup_elem &&
+		    func_id != BPF_FUNC_map_update_elem)
 			goto error;
 		break;
 	case BPF_MAP_TYPE_SOCKHASH:
@@ -4263,7 +4264,8 @@ static int check_map_func_compatibility(struct bpf_verifier_env *env,
 		    func_id != BPF_FUNC_map_delete_elem &&
 		    func_id != BPF_FUNC_msg_redirect_hash &&
 		    func_id != BPF_FUNC_sk_select_reuseport &&
-		    func_id != BPF_FUNC_map_lookup_elem)
+		    func_id != BPF_FUNC_map_lookup_elem &&
+		    func_id != BPF_FUNC_map_update_elem)
 			goto error;
 		break;
 	case BPF_MAP_TYPE_REUSEPORT_SOCKARRAY:
diff --git a/net/core/sock_map.c b/net/core/sock_map.c
index 018367fb889f..b2c886c34566 100644
--- a/net/core/sock_map.c
+++ b/net/core/sock_map.c
@@ -603,6 +603,28 @@ int sock_map_update_elem_sys(struct bpf_map *map, void *key,
 	return ret;
 }
 
+static int sock_map_update_elem(struct bpf_map *map, void *key,
+				void *value, u64 flags)
+{
+	struct sock *sk = (struct sock *)value;
+	int ret;
+
+	if (!sock_map_sk_is_suitable(sk))
+		return -EOPNOTSUPP;
+
+	local_bh_disable();
+	bh_lock_sock(sk);
+	if (!sock_map_sk_state_allowed(sk))
+		ret = -EOPNOTSUPP;
+	else if (map->map_type == BPF_MAP_TYPE_SOCKMAP)
+		ret = sock_map_update_common(map, *(u32 *)key, sk, flags);
+	else
+		ret = sock_hash_update_common(map, key, sk, flags);
+	bh_unlock_sock(sk);
+	local_bh_enable();
+	return ret;
+}
+
 BPF_CALL_4(bpf_sock_map_update, struct bpf_sock_ops_kern *, sops,
 	   struct bpf_map *, map, void *, key, u64, flags)
 {
@@ -687,6 +709,7 @@ const struct bpf_map_ops sock_map_ops = {
 	.map_free		= sock_map_free,
 	.map_get_next_key	= sock_map_get_next_key,
 	.map_lookup_elem_sys_only = sock_map_lookup_sys,
+	.map_update_elem	= sock_map_update_elem,
 	.map_delete_elem	= sock_map_delete_elem,
 	.map_lookup_elem	= sock_map_lookup,
 	.map_release_uref	= sock_map_release_progs,
@@ -1180,6 +1203,7 @@ const struct bpf_map_ops sock_hash_ops = {
 	.map_alloc		= sock_hash_alloc,
 	.map_free		= sock_hash_free,
 	.map_get_next_key	= sock_hash_get_next_key,
+	.map_update_elem	= sock_map_update_elem,
 	.map_delete_elem	= sock_hash_delete_elem,
 	.map_lookup_elem	= sock_hash_lookup,
 	.map_lookup_elem_sys_only = sock_hash_lookup_sys,
-- 
2.25.1



* [PATCH bpf-next 6/6] selftests: bpf: test sockmap update from BPF
  2020-08-19  9:24 [PATCH bpf-next 0/6] Allow updating sockmap / sockhash from BPF Lorenz Bauer
                   ` (4 preceding siblings ...)
  2020-08-19  9:24 ` [PATCH bpf-next 5/6] bpf: sockmap: allow update from BPF Lorenz Bauer
@ 2020-08-19  9:24 ` Lorenz Bauer
  2020-08-19 20:45   ` Yonghong Song
  5 siblings, 1 reply; 25+ messages in thread
From: Lorenz Bauer @ 2020-08-19  9:24 UTC (permalink / raw)
  To: jakub, john.fastabend, Shuah Khan, Alexei Starovoitov, Daniel Borkmann
  Cc: kernel-team, Lorenz Bauer, linux-kernel, linux-kselftest, netdev, bpf

Add a test which copies a socket from a sockmap into another sockmap
or sockhash. This exercises bpf_map_update_elem support from BPF
context. Compare the socket cookies from source and destination to
ensure that the copy succeeded.

Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
---
 .../selftests/bpf/prog_tests/sockmap_basic.c  | 76 +++++++++++++++++++
 .../selftests/bpf/progs/test_sockmap_copy.c   | 48 ++++++++++++
 2 files changed, 124 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/progs/test_sockmap_copy.c

diff --git a/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c b/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
index 96e7b7f84c65..d30cabc00e9e 100644
--- a/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
+++ b/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
@@ -4,6 +4,7 @@
 
 #include "test_progs.h"
 #include "test_skmsg_load_helpers.skel.h"
+#include "test_sockmap_copy.skel.h"
 
 #define TCP_REPAIR		19	/* TCP sock is under repair right now */
 
@@ -101,6 +102,77 @@ static void test_skmsg_helpers(enum bpf_map_type map_type)
 	test_skmsg_load_helpers__destroy(skel);
 }
 
+static void test_sockmap_copy(enum bpf_map_type map_type)
+{
+	struct bpf_prog_test_run_attr attr;
+	struct test_sockmap_copy *skel;
+	__u64 src_cookie, dst_cookie;
+	int err, prog, s, src, dst;
+	const __u32 zero = 0;
+	char dummy[14] = {0};
+
+	s = connected_socket_v4();
+	if (CHECK_FAIL(s == -1))
+		return;
+
+	skel = test_sockmap_copy__open_and_load();
+	if (CHECK_FAIL(!skel)) {
+		close(s);
+		perror("test_sockmap_copy__open_and_load");
+		return;
+	}
+
+	prog = bpf_program__fd(skel->progs.copy_sock_map);
+	src = bpf_map__fd(skel->maps.src);
+	if (map_type == BPF_MAP_TYPE_SOCKMAP)
+		dst = bpf_map__fd(skel->maps.dst_sock_map);
+	else
+		dst = bpf_map__fd(skel->maps.dst_sock_hash);
+
+	err = bpf_map_update_elem(src, &zero, &s, BPF_NOEXIST);
+	if (CHECK_FAIL(err)) {
+		perror("bpf_map_update");
+		goto out;
+	}
+
+	err = bpf_map_lookup_elem(src, &zero, &src_cookie);
+	if (CHECK_FAIL(err)) {
+		perror("bpf_map_lookup_elem(src)");
+		goto out;
+	}
+
+	attr = (struct bpf_prog_test_run_attr){
+		.prog_fd = prog,
+		.repeat = 1,
+		.data_in = dummy,
+		.data_size_in = sizeof(dummy),
+	};
+
+	err = bpf_prog_test_run_xattr(&attr);
+	if (err) {
+		test__fail();
+		perror("bpf_prog_test_run");
+		goto out;
+	} else if (!attr.retval) {
+		PRINT_FAIL("bpf_prog_test_run: program returned %u\n",
+			   attr.retval);
+		goto out;
+	}
+
+	err = bpf_map_lookup_elem(dst, &zero, &dst_cookie);
+	if (CHECK_FAIL(err)) {
+		perror("bpf_map_lookup_elem(dst)");
+		goto out;
+	}
+
+	if (dst_cookie != src_cookie)
+		PRINT_FAIL("cookie %llu != %llu\n", dst_cookie, src_cookie);
+
+out:
+	close(s);
+	test_sockmap_copy__destroy(skel);
+}
+
 void test_sockmap_basic(void)
 {
 	if (test__start_subtest("sockmap create_update_free"))
@@ -111,4 +183,8 @@ void test_sockmap_basic(void)
 		test_skmsg_helpers(BPF_MAP_TYPE_SOCKMAP);
 	if (test__start_subtest("sockhash sk_msg load helpers"))
 		test_skmsg_helpers(BPF_MAP_TYPE_SOCKHASH);
+	if (test__start_subtest("sockmap copy"))
+		test_sockmap_copy(BPF_MAP_TYPE_SOCKMAP);
+	if (test__start_subtest("sockhash copy"))
+		test_sockmap_copy(BPF_MAP_TYPE_SOCKHASH);
 }
diff --git a/tools/testing/selftests/bpf/progs/test_sockmap_copy.c b/tools/testing/selftests/bpf/progs/test_sockmap_copy.c
new file mode 100644
index 000000000000..9d0c9f28cab2
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_sockmap_copy.c
@@ -0,0 +1,48 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2020 Cloudflare
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+
+struct {
+	__uint(type, BPF_MAP_TYPE_SOCKMAP);
+	__uint(max_entries, 1);
+	__type(key, __u32);
+	__type(value, __u64);
+} src SEC(".maps");
+
+struct {
+	__uint(type, BPF_MAP_TYPE_SOCKMAP);
+	__uint(max_entries, 1);
+	__type(key, __u32);
+	__type(value, __u64);
+} dst_sock_map SEC(".maps");
+
+struct {
+	__uint(type, BPF_MAP_TYPE_SOCKHASH);
+	__uint(max_entries, 1);
+	__type(key, __u32);
+	__type(value, __u64);
+} dst_sock_hash SEC(".maps");
+
+SEC("classifier/copy_sock_map")
+int copy_sock_map(void *ctx)
+{
+	struct bpf_sock *sk;
+	bool failed = false;
+	__u32 key = 0;
+
+	sk = bpf_map_lookup_elem(&src, &key);
+	if (!sk)
+		return SK_DROP;
+
+	if (bpf_map_update_elem(&dst_sock_map, &key, sk, 0))
+		failed = true;
+
+	if (bpf_map_update_elem(&dst_sock_hash, &key, sk, 0))
+		failed = true;
+
+	bpf_sk_release(sk);
+	return failed ? SK_DROP : SK_PASS;
+}
+
+char _license[] SEC("license") = "GPL";
-- 
2.25.1


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH bpf-next 2/6] bpf: sockmap: merge sockmap and sockhash update functions
  2020-08-19  9:24 ` [PATCH bpf-next 2/6] bpf: sockmap: merge sockmap and sockhash update functions Lorenz Bauer
@ 2020-08-19 15:48   ` Jakub Kicinski
  2020-08-19 18:50   ` Yonghong Song
  2020-08-19 20:11   ` John Fastabend
  2 siblings, 0 replies; 25+ messages in thread
From: Jakub Kicinski @ 2020-08-19 15:48 UTC (permalink / raw)
  To: Lorenz Bauer
  Cc: jakub, john.fastabend, Daniel Borkmann, David S. Miller,
	Alexei Starovoitov, kernel-team, netdev, bpf, linux-kernel

On Wed, 19 Aug 2020 10:24:32 +0100 Lorenz Bauer wrote:
> --- a/net/core/sock_map.c
> +++ b/net/core/sock_map.c
> @@ -559,10 +559,12 @@ static bool sock_map_sk_state_allowed(const struct sock *sk)
>  	return false;
>  }
>  
> -static int sock_map_update_elem(struct bpf_map *map, void *key,
> -				void *value, u64 flags)
> +static int sock_hash_update_common(struct bpf_map *map, void *key,
> +				   struct sock *sk, u64 flags);
> +
> +int sock_map_update_elem(struct bpf_map *map, void *key,
> +			 void *value, u64 flags)
>  {
> -	u32 idx = *(u32 *)key;
>  	struct socket *sock;
>  	struct sock *sk;
>  	int ret;

net/core/sock_map.c:565:5: warning: no previous prototype for ‘sock_map_update_elem’ [-Wmissing-prototypes]
  565 | int sock_map_update_elem(struct bpf_map *map, void *key,
      |     ^~~~~~~~~~~~~~~~~~~~
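
For reference, this warning fires because sock_map_update_elem gained external linkage without a declaration in scope; the usual fix is to add the prototype to a shared header. A minimal userspace sketch of the pattern (the demo function name is illustrative, not the kernel symbol):

```c
/* Hypothetical stand-in for a declaration that would live in a shared
 * header (e.g. include/linux/bpf.h in the kernel tree). With it in
 * scope, the non-static definition below compiles cleanly under
 * -Wmissing-prototypes. */
int sock_map_update_elem_demo(int key, int value);

int sock_map_update_elem_demo(int key, int value)
{
	return key + value; /* placeholder body, not the real update logic */
}
```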


* Re: [PATCH bpf-next 2/6] bpf: sockmap: merge sockmap and sockhash update functions
  2020-08-19  9:24 ` [PATCH bpf-next 2/6] bpf: sockmap: merge sockmap and sockhash update functions Lorenz Bauer
  2020-08-19 15:48   ` Jakub Kicinski
@ 2020-08-19 18:50   ` Yonghong Song
  2020-08-19 20:11   ` John Fastabend
  2 siblings, 0 replies; 25+ messages in thread
From: Yonghong Song @ 2020-08-19 18:50 UTC (permalink / raw)
  To: Lorenz Bauer, jakub, john.fastabend, Daniel Borkmann,
	David S. Miller, Jakub Kicinski, Alexei Starovoitov
  Cc: kernel-team, netdev, bpf, linux-kernel



On 8/19/20 2:24 AM, Lorenz Bauer wrote:
> Merge the two very similar functions sock_map_update_elem and
> sock_hash_update_elem into one.
> 
> Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>

Acked-by: Yonghong Song <yhs@fb.com>


* Re: [PATCH bpf-next 3/6] bpf: sockmap: call sock_map_update_elem directly
  2020-08-19  9:24 ` [PATCH bpf-next 3/6] bpf: sockmap: call sock_map_update_elem directly Lorenz Bauer
@ 2020-08-19 19:15   ` Yonghong Song
  2020-08-19 20:29   ` John Fastabend
  1 sibling, 0 replies; 25+ messages in thread
From: Yonghong Song @ 2020-08-19 19:15 UTC (permalink / raw)
  To: Lorenz Bauer, jakub, john.fastabend, Alexei Starovoitov,
	Daniel Borkmann, David S. Miller, Jakub Kicinski
  Cc: kernel-team, netdev, bpf, linux-kernel



On 8/19/20 2:24 AM, Lorenz Bauer wrote:
> Don't go via map->ops to call sock_map_update_elem, since we know
> what function to call in bpf_map_update_value. Since
> check_map_func_compatibility doesn't allow calling
> BPF_FUNC_map_update_elem from BPF context, we can remove
> ops->map_update_elem and rename the function to
> sock_map_update_elem_sys.
> 
> Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>

Acked-by: Yonghong Song <yhs@fb.com>


* RE: [PATCH bpf-next 1/6] net: sk_msg: simplify sk_psock initialization
  2020-08-19  9:24 ` [PATCH bpf-next 1/6] net: sk_msg: simplify sk_psock initialization Lorenz Bauer
@ 2020-08-19 20:05   ` John Fastabend
  0 siblings, 0 replies; 25+ messages in thread
From: John Fastabend @ 2020-08-19 20:05 UTC (permalink / raw)
  To: Lorenz Bauer, jakub, john.fastabend, Daniel Borkmann,
	Lorenz Bauer, David S. Miller, Jakub Kicinski, Eric Dumazet,
	Alexey Kuznetsov, Hideaki YOSHIFUJI, Alexei Starovoitov
  Cc: kernel-team, netdev, bpf, linux-kernel

Lorenz Bauer wrote:
> Initializing psock->sk_proto and other saved callbacks is only
> done in sk_psock_update_proto, after sk_psock_init has returned.
> The logic for this is difficult to follow, and needlessly complex.
> 
> Instead, initialize psock->sk_proto whenever we allocate a new
> psock. Additionally, assert the following invariants:
> 
> * The SK has no ULP: ULP does its own finagling of sk->sk_prot
> * sk_user_data is unused: we need it to store sk_psock
> 
> Protect our access to sk_user_data with sk_callback_lock, which
> is what other users like reuseport arrays, etc. do.
> 
> The result is that an sk_psock is always fully initialized, and
> that psock->sk_proto is always the "original" struct proto.
> The latter allows us to use psock->sk_proto when initializing
> IPv6 TCP / UDP callbacks for sockmap.
> 
> Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
> ---

Looks like a nice bit of cleanup thanks. I think we might be able to fold
up more of this code as well, but I'll take a look in a bit.

Acked-by: John Fastabend <john.fastabend@gmail.com>


* RE: [PATCH bpf-next 2/6] bpf: sockmap: merge sockmap and sockhash update functions
  2020-08-19  9:24 ` [PATCH bpf-next 2/6] bpf: sockmap: merge sockmap and sockhash update functions Lorenz Bauer
  2020-08-19 15:48   ` Jakub Kicinski
  2020-08-19 18:50   ` Yonghong Song
@ 2020-08-19 20:11   ` John Fastabend
  2 siblings, 0 replies; 25+ messages in thread
From: John Fastabend @ 2020-08-19 20:11 UTC (permalink / raw)
  To: Lorenz Bauer, jakub, john.fastabend, Daniel Borkmann,
	Lorenz Bauer, David S. Miller, Jakub Kicinski,
	Alexei Starovoitov
  Cc: kernel-team, netdev, bpf, linux-kernel

Lorenz Bauer wrote:
> Merge the two very similar functions sock_map_update_elem and
> sock_hash_update_elem into one.
> 
> Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
> ---
>  net/core/sock_map.c | 53 ++++++++-------------------------------------
>  1 file changed, 9 insertions(+), 44 deletions(-)
> 

Fixup the warning, but otherwise

Acked-by: John Fastabend <john.fastabend@gmail.com>


* Re: [PATCH bpf-next 4/6] bpf: override the meaning of ARG_PTR_TO_MAP_VALUE for sockmap and sockhash
  2020-08-19  9:24 ` [PATCH bpf-next 4/6] bpf: override the meaning of ARG_PTR_TO_MAP_VALUE for sockmap and sockhash Lorenz Bauer
@ 2020-08-19 20:13   ` Yonghong Song
  2020-08-20 12:33     ` Lorenz Bauer
  2020-08-19 20:51   ` John Fastabend
  1 sibling, 1 reply; 25+ messages in thread
From: Yonghong Song @ 2020-08-19 20:13 UTC (permalink / raw)
  To: Lorenz Bauer, jakub, john.fastabend, Alexei Starovoitov, Daniel Borkmann
  Cc: kernel-team, netdev, bpf, linux-kernel



On 8/19/20 2:24 AM, Lorenz Bauer wrote:
> The verifier assumes that map values are simple blobs of memory, and
> therefore treats ARG_PTR_TO_MAP_VALUE, etc. as such. However, there are
> map types where this isn't true. For example, sockmap and sockhash store
> sockets. In general this isn't a big problem: we can just
> write helpers that explicitly request PTR_TO_SOCKET instead of
> ARG_PTR_TO_MAP_VALUE.
> 
> The one exception is the standard map helpers like map_update_elem,
> map_lookup_elem, etc. Here it would be nice if we could overload the
> function prototype for different kinds of maps. Unfortunately, this
> isn't entirely straightforward:
> We only know the type of the map once we have resolved meta->map_ptr
> in check_func_arg. This means we can't swap out the prototype
> in check_helper_call until we're halfway through the function.
> 
> Instead, modify check_func_arg to treat ARG_PTR_TO_MAP_VALUE* to
> mean "the native type for the map" instead of "pointer to memory"
> for sockmap and sockhash. This means we don't have to modify the
> function prototype at all.
> 
> Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
> ---
>   kernel/bpf/verifier.c | 40 ++++++++++++++++++++++++++++++++++++++++
>   1 file changed, 40 insertions(+)
> 
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index b6ccfce3bf4c..47f9b94bb9d4 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -3872,6 +3872,38 @@ static int int_ptr_type_to_size(enum bpf_arg_type type)
>   	return -EINVAL;
>   }
>   
> +static int override_map_arg_type(struct bpf_verifier_env *env,
> +				 const struct bpf_call_arg_meta *meta,
> +				 enum bpf_arg_type *arg_type)
> +{
> +	if (!meta->map_ptr) {
> +		/* kernel subsystem misconfigured verifier */
> +		verbose(env, "invalid map_ptr to access map->type\n");
> +		return -EACCES;
> +	}
> +
> +	switch (meta->map_ptr->map_type) {
> +	case BPF_MAP_TYPE_SOCKMAP:
> +	case BPF_MAP_TYPE_SOCKHASH:
> +		switch (*arg_type) {
> +		case ARG_PTR_TO_MAP_VALUE:
> +			*arg_type = ARG_PTR_TO_SOCKET;
> +			break;
> +		case ARG_PTR_TO_MAP_VALUE_OR_NULL:
> +			*arg_type = ARG_PTR_TO_SOCKET_OR_NULL;
> +			break;
> +		default:
> +			verbose(env, "invalid arg_type for sockmap/sockhash\n");
> +			return -EINVAL;
> +		}
> +		break;
> +
> +	default:
> +		break;
> +	}
> +	return 0;
> +}
> +
>   static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
>   			  struct bpf_call_arg_meta *meta,
>   			  const struct bpf_func_proto *fn)
> @@ -3904,6 +3936,14 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
>   		return -EACCES;
>   	}
>   
> +	if (arg_type == ARG_PTR_TO_MAP_VALUE ||
> +	    arg_type == ARG_PTR_TO_UNINIT_MAP_VALUE ||
> +	    arg_type == ARG_PTR_TO_MAP_VALUE_OR_NULL) {

We probably do not need ARG_PTR_TO_UNINIT_MAP_VALUE here.

Do we need ARG_PTR_TO_MAP_VALUE_OR_NULL? bpf_map_update_elem arg type
is ARG_PTR_TO_MAP_VALUE.

> +		err = override_map_arg_type(env, meta, &arg_type);
> +		if (err)
> +			return err;
> +	}
> +
>   	if (arg_type == ARG_PTR_TO_MAP_KEY ||
>   	    arg_type == ARG_PTR_TO_MAP_VALUE ||
>   	    arg_type == ARG_PTR_TO_UNINIT_MAP_VALUE ||
> 


* RE: [PATCH bpf-next 3/6] bpf: sockmap: call sock_map_update_elem directly
  2020-08-19  9:24 ` [PATCH bpf-next 3/6] bpf: sockmap: call sock_map_update_elem directly Lorenz Bauer
  2020-08-19 19:15   ` Yonghong Song
@ 2020-08-19 20:29   ` John Fastabend
  1 sibling, 0 replies; 25+ messages in thread
From: John Fastabend @ 2020-08-19 20:29 UTC (permalink / raw)
  To: Lorenz Bauer, jakub, john.fastabend, Alexei Starovoitov,
	Daniel Borkmann, Lorenz Bauer, David S. Miller, Jakub Kicinski
  Cc: kernel-team, netdev, bpf, linux-kernel

Lorenz Bauer wrote:
> Don't go via map->ops to call sock_map_update_elem, since we know
> what function to call in bpf_map_update_value. Since
> check_map_func_compatibility doesn't allow calling
> BPF_FUNC_map_update_elem from BPF context, we can remove
> ops->map_update_elem and rename the function to
> sock_map_update_elem_sys.
> 
> Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
> ---

Acked-by: John Fastabend <john.fastabend@gmail.com>


* Re: [PATCH bpf-next 5/6] bpf: sockmap: allow update from BPF
  2020-08-19  9:24 ` [PATCH bpf-next 5/6] bpf: sockmap: allow update from BPF Lorenz Bauer
@ 2020-08-19 20:35   ` Yonghong Song
  2020-08-19 21:22   ` John Fastabend
  1 sibling, 0 replies; 25+ messages in thread
From: Yonghong Song @ 2020-08-19 20:35 UTC (permalink / raw)
  To: Lorenz Bauer, jakub, john.fastabend, Alexei Starovoitov,
	Daniel Borkmann, David S. Miller, Jakub Kicinski
  Cc: kernel-team, netdev, bpf, linux-kernel



On 8/19/20 2:24 AM, Lorenz Bauer wrote:
> Allow calling bpf_map_update_elem on sockmap and sockhash from a BPF
> context. The synchronization required for this is a bit fiddly: we
> need to prevent the socket from changing its state while we add it
> to the sockmap, since we rely on getting a callback via
> sk_prot->unhash. However, we can't just lock_sock like in
> sock_map_sk_acquire because that might sleep. So instead we disable
> softirq processing and use bh_lock_sock to prevent further
> modification.
> 
> Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>

Acked-by: Yonghong Song <yhs@fb.com>


* Re: [PATCH bpf-next 6/6] selftests: bpf: test sockmap update from BPF
  2020-08-19  9:24 ` [PATCH bpf-next 6/6] selftests: bpf: test sockmap " Lorenz Bauer
@ 2020-08-19 20:45   ` Yonghong Song
  2020-08-20 11:58     ` Lorenz Bauer
  0 siblings, 1 reply; 25+ messages in thread
From: Yonghong Song @ 2020-08-19 20:45 UTC (permalink / raw)
  To: Lorenz Bauer, jakub, john.fastabend, Shuah Khan,
	Alexei Starovoitov, Daniel Borkmann
  Cc: kernel-team, linux-kernel, linux-kselftest, netdev, bpf



On 8/19/20 2:24 AM, Lorenz Bauer wrote:
> Add a test which copies a socket from a sockmap into another sockmap
> or sockhash. This exercises bpf_map_update_elem support from BPF
> context. Compare the socket cookies from source and destination to
> ensure that the copy succeeded.
> 
> Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
> ---
>   .../selftests/bpf/prog_tests/sockmap_basic.c  | 76 +++++++++++++++++++
>   .../selftests/bpf/progs/test_sockmap_copy.c   | 48 ++++++++++++
>   2 files changed, 124 insertions(+)
>   create mode 100644 tools/testing/selftests/bpf/progs/test_sockmap_copy.c
> 
> diff --git a/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c b/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
> index 96e7b7f84c65..d30cabc00e9e 100644
> --- a/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
> +++ b/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
> @@ -4,6 +4,7 @@
>   
>   #include "test_progs.h"
>   #include "test_skmsg_load_helpers.skel.h"
> +#include "test_sockmap_copy.skel.h"
>   
>   #define TCP_REPAIR		19	/* TCP sock is under repair right now */
>   
> @@ -101,6 +102,77 @@ static void test_skmsg_helpers(enum bpf_map_type map_type)
>   	test_skmsg_load_helpers__destroy(skel);
>   }
>   
> +static void test_sockmap_copy(enum bpf_map_type map_type)
> +{
> +	struct bpf_prog_test_run_attr attr;
> +	struct test_sockmap_copy *skel;
> +	__u64 src_cookie, dst_cookie;
> +	int err, prog, s, src, dst;
> +	const __u32 zero = 0;
> +	char dummy[14] = {0};
> +
> +	s = connected_socket_v4();

Maybe change variable name to "sk" for better clarity?

> +	if (CHECK_FAIL(s == -1))
> +		return;
> +
> +	skel = test_sockmap_copy__open_and_load();
> +	if (CHECK_FAIL(!skel)) {
> +		close(s);
> +		perror("test_sockmap_copy__open_and_load");
> +		return;
> +	}

Could you use CHECK instead of CHECK_FAIL?
With CHECK, you can print additional information without perror.


> +
> +	prog = bpf_program__fd(skel->progs.copy_sock_map);
> +	src = bpf_map__fd(skel->maps.src);
> +	if (map_type == BPF_MAP_TYPE_SOCKMAP)
> +		dst = bpf_map__fd(skel->maps.dst_sock_map);
> +	else
> +		dst = bpf_map__fd(skel->maps.dst_sock_hash);
> +
> +	err = bpf_map_update_elem(src, &zero, &s, BPF_NOEXIST);

The map value defined in the bpf program is __u64 and here "s" is an int.
Any potential issues?
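
The hazard behind this question can be shown in plain C: writing an int into a slot that is later read back as a 64-bit value only overwrites part of it, so the remaining bytes keep stale contents. A sketch of the generic size mismatch (not the actual sockmap update path, which interprets the value as a socket fd):

```c
#include <stdint.h>
#include <string.h>

/* Simulate a map slot typed as __u64 being updated from an int-sized
 * buffer: memcpy() writes only sizeof(int) bytes, the rest of the slot
 * keeps whatever was there before. */
uint64_t store_int_in_u64_slot(uint64_t old_slot, int fd)
{
	uint64_t slot = old_slot;

	memcpy(&slot, &fd, sizeof(fd)); /* partial, int-sized write */
	return slot;
}
```

If the stale bytes are non-zero, the __u64 read back is neither the fd nor the old value, regardless of endianness.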

> +	if (CHECK_FAIL(err)) {
> +		perror("bpf_map_update");
> +		goto out;
> +	}
> +
> +	err = bpf_map_lookup_elem(src, &zero, &src_cookie);
> +	if (CHECK_FAIL(err)) {
> +		perror("bpf_map_lookup_elem(src)");
> +		goto out;
> +	}
> +
> +	attr = (struct bpf_prog_test_run_attr){
> +		.prog_fd = prog,
> +		.repeat = 1,
> +		.data_in = dummy,
> +		.data_size_in = sizeof(dummy),
> +	};
> +
> +	err = bpf_prog_test_run_xattr(&attr);
> +	if (err) {

You can use the CHECK macro here.

> +		test__fail();
> +		perror("bpf_prog_test_run");
> +		goto out;
> +	} else if (!attr.retval) {
> +		PRINT_FAIL("bpf_prog_test_run: program returned %u\n",
> +			   attr.retval);
> +		goto out;
> +	}
> +
> +	err = bpf_map_lookup_elem(dst, &zero, &dst_cookie);
> +	if (CHECK_FAIL(err)) {
> +		perror("bpf_map_lookup_elem(dst)");
> +		goto out;
> +	}
> +
> +	if (dst_cookie != src_cookie)
> +		PRINT_FAIL("cookie %llu != %llu\n", dst_cookie, src_cookie);

Just replace the whole if statement with a CHECK macro.

> +
> +out:
> +	close(s);
> +	test_sockmap_copy__destroy(skel);
> +}
> +
>   void test_sockmap_basic(void)
>   {
>   	if (test__start_subtest("sockmap create_update_free"))
> @@ -111,4 +183,8 @@ void test_sockmap_basic(void)
>   		test_skmsg_helpers(BPF_MAP_TYPE_SOCKMAP);
>   	if (test__start_subtest("sockhash sk_msg load helpers"))
>   		test_skmsg_helpers(BPF_MAP_TYPE_SOCKHASH);
> +	if (test__start_subtest("sockmap copy"))
> +		test_sockmap_copy(BPF_MAP_TYPE_SOCKMAP);
> +	if (test__start_subtest("sockhash copy"))
> +		test_sockmap_copy(BPF_MAP_TYPE_SOCKHASH);
>   }
> diff --git a/tools/testing/selftests/bpf/progs/test_sockmap_copy.c b/tools/testing/selftests/bpf/progs/test_sockmap_copy.c
> new file mode 100644
> index 000000000000..9d0c9f28cab2
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/progs/test_sockmap_copy.c
> @@ -0,0 +1,48 @@
> +// SPDX-License-Identifier: GPL-2.0
> +// Copyright (c) 2020 Cloudflare
> +#include "vmlinux.h"
> +#include <bpf/bpf_helpers.h>
> +
> +struct {
> +	__uint(type, BPF_MAP_TYPE_SOCKMAP);
> +	__uint(max_entries, 1);
> +	__type(key, __u32);
> +	__type(value, __u64);
> +} src SEC(".maps");
> +
> +struct {
> +	__uint(type, BPF_MAP_TYPE_SOCKMAP);
> +	__uint(max_entries, 1);
> +	__type(key, __u32);
> +	__type(value, __u64);
> +} dst_sock_map SEC(".maps");
> +
> +struct {
> +	__uint(type, BPF_MAP_TYPE_SOCKHASH);
> +	__uint(max_entries, 1);
> +	__type(key, __u32);
> +	__type(value, __u64);
> +} dst_sock_hash SEC(".maps");
> +
> +SEC("classifier/copy_sock_map")
> +int copy_sock_map(void *ctx)
> +{
> +	struct bpf_sock *sk;
> +	bool failed = false;
> +	__u32 key = 0;
> +
> +	sk = bpf_map_lookup_elem(&src, &key);
> +	if (!sk)
> +		return SK_DROP;
> +
> +	if (bpf_map_update_elem(&dst_sock_map, &key, sk, 0))
> +		failed = true;
> +
> +	if (bpf_map_update_elem(&dst_sock_hash, &key, sk, 0))
> +		failed = true;
> +
> +	bpf_sk_release(sk);
> +	return failed ? SK_DROP : SK_PASS;
> +}
> +
> +char _license[] SEC("license") = "GPL";
> 
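
Yonghong's CHECK suggestion folds the condition, the failure bookkeeping, and the diagnostic into one expression. A simplified userspace stand-in for the macro (the real one lives in tools/testing/selftests/bpf/test_progs.h and also records subtest state and duration) shows the shape:

```c
#include <stdio.h>

/* Simplified stand-in for the selftest CHECK macro: evaluates the
 * condition, counts and prints the failure, and yields the condition's
 * truth value so it can drive control flow. */
static int failures;
#define CHECK(cond, name, fmt, ...)					\
	({								\
		int __ret = !!(cond);					\
		if (__ret) {						\
			failures++;					\
			fprintf(stderr, "%s:FAIL: " fmt,		\
				(name), ##__VA_ARGS__);			\
		}							\
		__ret;							\
	})

/* Usage mirroring the review comment: one CHECK replaces the
 * if + PRINT_FAIL pair when comparing the two cookies. */
int compare_cookies(unsigned long long src, unsigned long long dst)
{
	if (CHECK(dst != src, "cookie_cmp", "cookie %llu != %llu\n",
		  dst, src))
		return -1;
	return 0;
}
```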


* RE: [PATCH bpf-next 4/6] bpf: override the meaning of ARG_PTR_TO_MAP_VALUE for sockmap and sockhash
  2020-08-19  9:24 ` [PATCH bpf-next 4/6] bpf: override the meaning of ARG_PTR_TO_MAP_VALUE for sockmap and sockhash Lorenz Bauer
  2020-08-19 20:13   ` Yonghong Song
@ 2020-08-19 20:51   ` John Fastabend
  1 sibling, 0 replies; 25+ messages in thread
From: John Fastabend @ 2020-08-19 20:51 UTC (permalink / raw)
  To: Lorenz Bauer, jakub, john.fastabend, Alexei Starovoitov, Daniel Borkmann
  Cc: kernel-team, Lorenz Bauer, netdev, bpf, linux-kernel

Lorenz Bauer wrote:
> The verifier assumes that map values are simple blobs of memory, and
> therefore treats ARG_PTR_TO_MAP_VALUE, etc. as such. However, there are
> map types where this isn't true. For example, sockmap and sockhash store
> sockets. In general this isn't a big problem: we can just
> write helpers that explicitly request PTR_TO_SOCKET instead of
> ARG_PTR_TO_MAP_VALUE.
> 
> The one exception is the standard map helpers like map_update_elem,
> map_lookup_elem, etc. Here it would be nice if we could overload the
> function prototype for different kinds of maps. Unfortunately, this
> isn't entirely straightforward:
> We only know the type of the map once we have resolved meta->map_ptr
> in check_func_arg. This means we can't swap out the prototype
> in check_helper_call until we're halfway through the function.
> 
> Instead, modify check_func_arg to treat ARG_PTR_TO_MAP_VALUE* to
> mean "the native type for the map" instead of "pointer to memory"
> for sockmap and sockhash. This means we don't have to modify the
> function prototype at all.
> 
> Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
> ---
>  kernel/bpf/verifier.c | 40 ++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 40 insertions(+)
> 
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index b6ccfce3bf4c..47f9b94bb9d4 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -3872,6 +3872,38 @@ static int int_ptr_type_to_size(enum bpf_arg_type type)
>  	return -EINVAL;
>  }
>  
> +static int override_map_arg_type(struct bpf_verifier_env *env,
> +				 const struct bpf_call_arg_meta *meta,
> +				 enum bpf_arg_type *arg_type)

One nit: can we rename this to refine_map_arg_type or resolve_map_arg_type? I
don't like the name "override"; we are getting a more precise type here.

> +{
> +	if (!meta->map_ptr) {
> +		/* kernel subsystem misconfigured verifier */
> +		verbose(env, "invalid map_ptr to access map->type\n");
> +		return -EACCES;
> +	}
> +
> +	switch (meta->map_ptr->map_type) {
> +	case BPF_MAP_TYPE_SOCKMAP:
> +	case BPF_MAP_TYPE_SOCKHASH:
> +		switch (*arg_type) {
> +		case ARG_PTR_TO_MAP_VALUE:
> +			*arg_type = ARG_PTR_TO_SOCKET;
> +			break;
> +		case ARG_PTR_TO_MAP_VALUE_OR_NULL:
> +			*arg_type = ARG_PTR_TO_SOCKET_OR_NULL;
> +			break;
> +		default:
> +			verbose(env, "invalid arg_type for sockmap/sockhash\n");

Might be worth pushing the arg_type into the verbose message so it's obvious
where the types went wrong. We will probably "know" just based on the
switch in front of this, but users in general won't. Just a suggestion; if
you think it's overkill go ahead and skip.

> +			return -EINVAL;
> +		}
> +		break;
> +
> +	default:
> +		break;
> +	}
> +	return 0;
> +}
> +

Otherwise LGTM.

>  static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
>  			  struct bpf_call_arg_meta *meta,
>  			  const struct bpf_func_proto *fn)
> @@ -3904,6 +3936,14 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
>  		return -EACCES;
>  	}
>  
> +	if (arg_type == ARG_PTR_TO_MAP_VALUE ||
> +	    arg_type == ARG_PTR_TO_UNINIT_MAP_VALUE ||
> +	    arg_type == ARG_PTR_TO_MAP_VALUE_OR_NULL) {
> +		err = override_map_arg_type(env, meta, &arg_type);
> +		if (err)
> +			return err;
> +	}
> +
>  	if (arg_type == ARG_PTR_TO_MAP_KEY ||
>  	    arg_type == ARG_PTR_TO_MAP_VALUE ||
>  	    arg_type == ARG_PTR_TO_UNINIT_MAP_VALUE ||
> -- 
> 2.25.1
> 


* RE: [PATCH bpf-next 5/6] bpf: sockmap: allow update from BPF
  2020-08-19  9:24 ` [PATCH bpf-next 5/6] bpf: sockmap: allow update from BPF Lorenz Bauer
  2020-08-19 20:35   ` Yonghong Song
@ 2020-08-19 21:22   ` John Fastabend
  2020-08-19 22:41     ` John Fastabend
  1 sibling, 1 reply; 25+ messages in thread
From: John Fastabend @ 2020-08-19 21:22 UTC (permalink / raw)
  To: Lorenz Bauer, jakub, john.fastabend, Alexei Starovoitov,
	Daniel Borkmann, Lorenz Bauer, David S. Miller, Jakub Kicinski
  Cc: kernel-team, netdev, bpf, linux-kernel

Lorenz Bauer wrote:
> Allow calling bpf_map_update_elem on sockmap and sockhash from a BPF
> context. The synchronization required for this is a bit fiddly: we
> need to prevent the socket from changing its state while we add it
> to the sockmap, since we rely on getting a callback via
> sk_prot->unhash. However, we can't just lock_sock like in
> sock_map_sk_acquire because that might sleep. So instead we disable
> softirq processing and use bh_lock_sock to prevent further
> modification.
> 
> Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
> ---
>  kernel/bpf/verifier.c |  6 ++++--
>  net/core/sock_map.c   | 24 ++++++++++++++++++++++++
>  2 files changed, 28 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 47f9b94bb9d4..421fccf18dea 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -4254,7 +4254,8 @@ static int check_map_func_compatibility(struct bpf_verifier_env *env,
>  		    func_id != BPF_FUNC_map_delete_elem &&
>  		    func_id != BPF_FUNC_msg_redirect_map &&
>  		    func_id != BPF_FUNC_sk_select_reuseport &&
> -		    func_id != BPF_FUNC_map_lookup_elem)
> +		    func_id != BPF_FUNC_map_lookup_elem &&
> +		    func_id != BPF_FUNC_map_update_elem)
>  			goto error;
>  		break;
>  	case BPF_MAP_TYPE_SOCKHASH:
> @@ -4263,7 +4264,8 @@ static int check_map_func_compatibility(struct bpf_verifier_env *env,
>  		    func_id != BPF_FUNC_map_delete_elem &&
>  		    func_id != BPF_FUNC_msg_redirect_hash &&
>  		    func_id != BPF_FUNC_sk_select_reuseport &&
> -		    func_id != BPF_FUNC_map_lookup_elem)
> +		    func_id != BPF_FUNC_map_lookup_elem &&
> +		    func_id != BPF_FUNC_map_update_elem)

I lost track of a detail here, map_lookup_elem should return
PTR_TO_MAP_VALUE_OR_NULL but if we want to feed that back into
the map_update_elem() we need to return PTR_TO_SOCKET_OR_NULL
and then presumably have a null check to get a PTR_TO_SOCKET
type as expected.

Can we use the same logic for expected arg (previous patch) on the
ret_type? Or did I miss it :/ Need some coffee I guess.

>  			goto error;
>  		break;
>  	case BPF_MAP_TYPE_REUSEPORT_SOCKARRAY:
> diff --git a/net/core/sock_map.c b/net/core/sock_map.c
> index 018367fb889f..b2c886c34566 100644
> --- a/net/core/sock_map.c
> +++ b/net/core/sock_map.c
> @@ -603,6 +603,28 @@ int sock_map_update_elem_sys(struct bpf_map *map, void *key,
>  	return ret;
>  }
>  
> +static int sock_map_update_elem(struct bpf_map *map, void *key,
> +				void *value, u64 flags)
> +{
> +	struct sock *sk = (struct sock *)value;
> +	int ret;
> +
> +	if (!sock_map_sk_is_suitable(sk))
> +		return -EOPNOTSUPP;
> +
> +	local_bh_disable();
> +	bh_lock_sock(sk);

How do we ensure we are not being called from some context which
already has the bh_lock_sock() held? It seems we can call map_update_elem()
from any context, kprobes, tc, xdp, etc.?

> +	if (!sock_map_sk_state_allowed(sk))
> +		ret = -EOPNOTSUPP;
> +	else if (map->map_type == BPF_MAP_TYPE_SOCKMAP)
> +		ret = sock_map_update_common(map, *(u32 *)key, sk, flags);
> +	else
> +		ret = sock_hash_update_common(map, key, sk, flags);
> +	bh_unlock_sock(sk);
> +	local_bh_enable();
> +	return ret;
> +}
> +
>  BPF_CALL_4(bpf_sock_map_update, struct bpf_sock_ops_kern *, sops,
>  	   struct bpf_map *, map, void *, key, u64, flags)
>  {
> @@ -687,6 +709,7 @@ const struct bpf_map_ops sock_map_ops = {
>  	.map_free		= sock_map_free,
>  	.map_get_next_key	= sock_map_get_next_key,
>  	.map_lookup_elem_sys_only = sock_map_lookup_sys,
> +	.map_update_elem	= sock_map_update_elem,
>  	.map_delete_elem	= sock_map_delete_elem,
>  	.map_lookup_elem	= sock_map_lookup,
>  	.map_release_uref	= sock_map_release_progs,
> @@ -1180,6 +1203,7 @@ const struct bpf_map_ops sock_hash_ops = {
>  	.map_alloc		= sock_hash_alloc,
>  	.map_free		= sock_hash_free,
>  	.map_get_next_key	= sock_hash_get_next_key,
> +	.map_update_elem	= sock_map_update_elem,
>  	.map_delete_elem	= sock_hash_delete_elem,
>  	.map_lookup_elem	= sock_hash_lookup,
>  	.map_lookup_elem_sys_only = sock_hash_lookup_sys,
> -- 
> 2.25.1
> 




* RE: [PATCH bpf-next 5/6] bpf: sockmap: allow update from BPF
  2020-08-19 21:22   ` John Fastabend
@ 2020-08-19 22:41     ` John Fastabend
  2020-08-20 11:33       ` Lorenz Bauer
  0 siblings, 1 reply; 25+ messages in thread
From: John Fastabend @ 2020-08-19 22:41 UTC (permalink / raw)
  To: John Fastabend, Lorenz Bauer, jakub, john.fastabend,
	Alexei Starovoitov, Daniel Borkmann, Lorenz Bauer,
	David S. Miller, Jakub Kicinski
  Cc: kernel-team, netdev, bpf, linux-kernel

John Fastabend wrote:
> Lorenz Bauer wrote:
> > Allow calling bpf_map_update_elem on sockmap and sockhash from a BPF
> > context. The synchronization required for this is a bit fiddly: we
> > need to prevent the socket from changing its state while we add it
> > to the sockmap, since we rely on getting a callback via
> > sk_prot->unhash. However, we can't just lock_sock like in
> > sock_map_sk_acquire because that might sleep. So instead we disable
> > softirq processing and use bh_lock_sock to prevent further
> > modification.
> > 
> > Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
> > ---
> >  kernel/bpf/verifier.c |  6 ++++--
> >  net/core/sock_map.c   | 24 ++++++++++++++++++++++++
> >  2 files changed, 28 insertions(+), 2 deletions(-)
> > 
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index 47f9b94bb9d4..421fccf18dea 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -4254,7 +4254,8 @@ static int check_map_func_compatibility(struct bpf_verifier_env *env,
> >  		    func_id != BPF_FUNC_map_delete_elem &&
> >  		    func_id != BPF_FUNC_msg_redirect_map &&
> >  		    func_id != BPF_FUNC_sk_select_reuseport &&
> > -		    func_id != BPF_FUNC_map_lookup_elem)
> > +		    func_id != BPF_FUNC_map_lookup_elem &&
> > +		    func_id != BPF_FUNC_map_update_elem)
> >  			goto error;
> >  		break;
> >  	case BPF_MAP_TYPE_SOCKHASH:
> > @@ -4263,7 +4264,8 @@ static int check_map_func_compatibility(struct bpf_verifier_env *env,
> >  		    func_id != BPF_FUNC_map_delete_elem &&
> >  		    func_id != BPF_FUNC_msg_redirect_hash &&
> >  		    func_id != BPF_FUNC_sk_select_reuseport &&
> > -		    func_id != BPF_FUNC_map_lookup_elem)
> > +		    func_id != BPF_FUNC_map_lookup_elem &&
> > +		    func_id != BPF_FUNC_map_update_elem)
> 
> I lost track of a detail here, map_lookup_elem should return
> PTR_TO_MAP_VALUE_OR_NULL but if we want to feed that back into
> the map_update_elem() we need to return PTR_TO_SOCKET_OR_NULL
> and then presumably have a null check to get a PTR_TO_SOCKET
> type as expected.
> 
> Can we use the same logic for expected arg (previous patch) on the
> ret_type? Or did I miss it :/ Need some coffee I guess.

OK, I tracked this down. It looks like we rely on mark_ptr_or_null_reg()
to update the reg->type to PTR_TO_SOCKET. I do wonder if it would be
a bit more straightforward to do something similar to the previous
patch and refine it earlier to PTR_TO_SOCKET_OR_NULL, but should be
safe as-is for now.

I still have the below question though.

> 
> >  			goto error;
> >  		break;
> >  	case BPF_MAP_TYPE_REUSEPORT_SOCKARRAY:
> > diff --git a/net/core/sock_map.c b/net/core/sock_map.c
> > index 018367fb889f..b2c886c34566 100644
> > --- a/net/core/sock_map.c
> > +++ b/net/core/sock_map.c
> > @@ -603,6 +603,28 @@ int sock_map_update_elem_sys(struct bpf_map *map, void *key,
> >  	return ret;
> >  }
> >  
> > +static int sock_map_update_elem(struct bpf_map *map, void *key,
> > +				void *value, u64 flags)
> > +{
> > +	struct sock *sk = (struct sock *)value;
> > +	int ret;
> > +
> > +	if (!sock_map_sk_is_suitable(sk))
> > +		return -EOPNOTSUPP;
> > +
> > +	local_bh_disable();
> > +	bh_lock_sock(sk);
> 
> How do we ensure we are not being called from some context which
> already has the bh_lock_sock() held? It seems we can call map_update_elem()
> from any context, kprobes, tc, xdp, etc.?
> 
> > +	if (!sock_map_sk_state_allowed(sk))
> > +		ret = -EOPNOTSUPP;
> > +	else if (map->map_type == BPF_MAP_TYPE_SOCKMAP)
> > +		ret = sock_map_update_common(map, *(u32 *)key, sk, flags);
> > +	else
> > +		ret = sock_hash_update_common(map, key, sk, flags);
> > +	bh_unlock_sock(sk);
> > +	local_bh_enable();
> > +	return ret;
> > +}
> > +
> >  BPF_CALL_4(bpf_sock_map_update, struct bpf_sock_ops_kern *, sops,
> >  	   struct bpf_map *, map, void *, key, u64, flags)
> >  {
> > @@ -687,6 +709,7 @@ const struct bpf_map_ops sock_map_ops = {
> >  	.map_free		= sock_map_free,
> >  	.map_get_next_key	= sock_map_get_next_key,
> >  	.map_lookup_elem_sys_only = sock_map_lookup_sys,
> > +	.map_update_elem	= sock_map_update_elem,
> >  	.map_delete_elem	= sock_map_delete_elem,
> >  	.map_lookup_elem	= sock_map_lookup,
> >  	.map_release_uref	= sock_map_release_progs,
> > @@ -1180,6 +1203,7 @@ const struct bpf_map_ops sock_hash_ops = {
> >  	.map_alloc		= sock_hash_alloc,
> >  	.map_free		= sock_hash_free,
> >  	.map_get_next_key	= sock_hash_get_next_key,
> > +	.map_update_elem	= sock_map_update_elem,
> >  	.map_delete_elem	= sock_hash_delete_elem,
> >  	.map_lookup_elem	= sock_hash_lookup,
> >  	.map_lookup_elem_sys_only = sock_hash_lookup_sys,
> > -- 
> > 2.25.1
> > 
> 
> 



^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH bpf-next 5/6] bpf: sockmap: allow update from BPF
  2020-08-19 22:41     ` John Fastabend
@ 2020-08-20 11:33       ` Lorenz Bauer
  2020-08-20 14:45         ` Yonghong Song
  0 siblings, 1 reply; 25+ messages in thread
From: Lorenz Bauer @ 2020-08-20 11:33 UTC (permalink / raw)
  To: John Fastabend
  Cc: Jakub Sitnicki, Alexei Starovoitov, Daniel Borkmann,
	David S. Miller, Jakub Kicinski, kernel-team, Networking, bpf,
	LKML

On Wed, 19 Aug 2020 at 23:41, John Fastabend <john.fastabend@gmail.com> wrote:
>
> John Fastabend wrote:
> > Lorenz Bauer wrote:
> > > Allow calling bpf_map_update_elem on sockmap and sockhash from a BPF
> > > context. The synchronization required for this is a bit fiddly: we
> > > need to prevent the socket from changing its state while we add it
> > > to the sockmap, since we rely on getting a callback via
> > > sk_prot->unhash. However, we can't just lock_sock like in
> > > sock_map_sk_acquire because that might sleep. So instead we disable
> > > softirq processing and use bh_lock_sock to prevent further
> > > modification.
> > >
> > > Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
> > > ---
> > >  kernel/bpf/verifier.c |  6 ++++--
> > >  net/core/sock_map.c   | 24 ++++++++++++++++++++++++
> > >  2 files changed, 28 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > > index 47f9b94bb9d4..421fccf18dea 100644
> > > --- a/kernel/bpf/verifier.c
> > > +++ b/kernel/bpf/verifier.c
> > > @@ -4254,7 +4254,8 @@ static int check_map_func_compatibility(struct bpf_verifier_env *env,
> > >                 func_id != BPF_FUNC_map_delete_elem &&
> > >                 func_id != BPF_FUNC_msg_redirect_map &&
> > >                 func_id != BPF_FUNC_sk_select_reuseport &&
> > > -               func_id != BPF_FUNC_map_lookup_elem)
> > > +               func_id != BPF_FUNC_map_lookup_elem &&
> > > +               func_id != BPF_FUNC_map_update_elem)
> > >                     goto error;
> > >             break;
> > >     case BPF_MAP_TYPE_SOCKHASH:
> > > @@ -4263,7 +4264,8 @@ static int check_map_func_compatibility(struct bpf_verifier_env *env,
> > >                 func_id != BPF_FUNC_map_delete_elem &&
> > >                 func_id != BPF_FUNC_msg_redirect_hash &&
> > >                 func_id != BPF_FUNC_sk_select_reuseport &&
> > > -               func_id != BPF_FUNC_map_lookup_elem)
> > > +               func_id != BPF_FUNC_map_lookup_elem &&
> > > +               func_id != BPF_FUNC_map_update_elem)
> >
> > I lost track of a detail here, map_lookup_elem should return
> > PTR_TO_MAP_VALUE_OR_NULL but if we want to feed that back into
> > the map_update_elem() we need to return PTR_TO_SOCKET_OR_NULL
> > and then presumably have a null check to get a PTR_TO_SOCKET
> > type as expect.
> >
> > Can we use the same logic for expected arg (previous patch) on the
> > ret_type. Or did I miss it:/ Need some coffee I guess.
>
> OK, I tracked this down. It looks like we rely on mark_ptr_or_null_reg()
> to update the reg->type to PTR_TO_SOCKET. I do wonder if it would be
> a bit more straightforward to do something similar to the previous
> patch and refine it earlier to PTR_TO_SOCKET_OR_NULL, but should be
> safe as-is for now.

Yes, it took me a while to figure this out as well. I think we can use
the same approach, but I wanted to keep this series simple.

> I still have the below question though.
>
> >
> > >                     goto error;
> > >             break;
> > >     case BPF_MAP_TYPE_REUSEPORT_SOCKARRAY:
> > > diff --git a/net/core/sock_map.c b/net/core/sock_map.c
> > > index 018367fb889f..b2c886c34566 100644
> > > --- a/net/core/sock_map.c
> > > +++ b/net/core/sock_map.c
> > > @@ -603,6 +603,28 @@ int sock_map_update_elem_sys(struct bpf_map *map, void *key,
> > >     return ret;
> > >  }
> > >
> > > +static int sock_map_update_elem(struct bpf_map *map, void *key,
> > > +                           void *value, u64 flags)
> > > +{
> > > +   struct sock *sk = (struct sock *)value;
> > > +   int ret;
> > > +
> > > +   if (!sock_map_sk_is_suitable(sk))
> > > +           return -EOPNOTSUPP;
> > > +
> > > +   local_bh_disable();
> > > +   bh_lock_sock(sk);
> >
> > How do we ensure we are not being called from some context which
> > already has bh_lock_sock() held? It seems we can call map_update_elem()
> > from any context: kprobes, tc, xdp, etc.?

Yeah, to be honest I'm not entirely sure.

XDP, TC, sk_lookup are fine I think. We have bpf_sk_lookup_tcp and
friends, but the sockets they return aren't locked, and the BPF
program doesn't run in a context where a socket lock is held.

As you point out, kprobes / tracing is problematic because the probe
_can_ run at a point where an sk is locked. If the tracing program
somehow gets hold of this socket via sk_lookup_* or a sockmap, the
program could deadlock.

bpf_sock_ops is also problematic since ctx->sk is in various states of
locking. For example, BPF_SOCK_OPS_TCP_LISTEN_CB is called with
lock_sock held, so it's unproblematic. BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB,
on the other hand, is called with the spinlock held.

It seems to me like the only option is to instead only allow updates
from "safe" contexts, such as XDP, tc, bpf_iter etc.

Am I missing something?
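Concretely, the restriction I have in mind would classify contexts
roughly like this userspace sketch (the names are made up for the
example, not the real enum bpf_prog_type):

```c
#include <stdbool.h>

/* Illustrative sketch only: which program contexts are "safe" for
 * sockmap updates, i.e. guaranteed not to run with bh_lock_sock()
 * already held on the socket being inserted. */
enum prog_ctx {
	CTX_XDP,	/* no socket locked when the program runs */
	CTX_TC,
	CTX_SK_LOOKUP,
	CTX_ITER,
	CTX_KPROBE,	/* may fire while a socket spinlock is held */
	CTX_TRACING,
	CTX_SOCK_OPS,	/* ctx->sk lock state varies per callback */
};

static bool sockmap_update_allowed(enum prog_ctx ctx)
{
	switch (ctx) {
	case CTX_XDP:
	case CTX_TC:
	case CTX_SK_LOOKUP:
	case CTX_ITER:
		return true;
	default:
		/* kprobes, tracing, sock_ops: can't rule out that a
		 * socket lock is already held, so refuse the update. */
		return false;
	}
}
```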


> >
> > > +   if (!sock_map_sk_state_allowed(sk))
> > > +           ret = -EOPNOTSUPP;
> > > +   else if (map->map_type == BPF_MAP_TYPE_SOCKMAP)
> > > +           ret = sock_map_update_common(map, *(u32 *)key, sk, flags);
> > > +   else
> > > +           ret = sock_hash_update_common(map, key, sk, flags);
> > > +   bh_unlock_sock(sk);
> > > +   local_bh_enable();
> > > +   return ret;
> > > +}
> > > +
> > >  BPF_CALL_4(bpf_sock_map_update, struct bpf_sock_ops_kern *, sops,
> > >        struct bpf_map *, map, void *, key, u64, flags)
> > >  {
> > > @@ -687,6 +709,7 @@ const struct bpf_map_ops sock_map_ops = {
> > >     .map_free               = sock_map_free,
> > >     .map_get_next_key       = sock_map_get_next_key,
> > >     .map_lookup_elem_sys_only = sock_map_lookup_sys,
> > > +   .map_update_elem        = sock_map_update_elem,
> > >     .map_delete_elem        = sock_map_delete_elem,
> > >     .map_lookup_elem        = sock_map_lookup,
> > >     .map_release_uref       = sock_map_release_progs,
> > > @@ -1180,6 +1203,7 @@ const struct bpf_map_ops sock_hash_ops = {
> > >     .map_alloc              = sock_hash_alloc,
> > >     .map_free               = sock_hash_free,
> > >     .map_get_next_key       = sock_hash_get_next_key,
> > > +   .map_update_elem        = sock_map_update_elem,
> > >     .map_delete_elem        = sock_hash_delete_elem,
> > >     .map_lookup_elem        = sock_hash_lookup,
> > >     .map_lookup_elem_sys_only = sock_hash_lookup_sys,
> > > --
> > > 2.25.1
> > >
> >
> >
>
>


--
Lorenz Bauer  |  Systems Engineer
6th Floor, County Hall/The Riverside Building, SE1 7PB, UK

www.cloudflare.com

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH bpf-next 6/6] selftests: bpf: test sockmap update from BPF
  2020-08-19 20:45   ` Yonghong Song
@ 2020-08-20 11:58     ` Lorenz Bauer
  2020-08-20 14:49       ` Yonghong Song
  0 siblings, 1 reply; 25+ messages in thread
From: Lorenz Bauer @ 2020-08-20 11:58 UTC (permalink / raw)
  To: Yonghong Song
  Cc: Jakub Sitnicki, John Fastabend, Shuah Khan, Alexei Starovoitov,
	Daniel Borkmann, kernel-team, LKML, linux-kselftest, Networking,
	bpf

On Wed, 19 Aug 2020 at 21:46, Yonghong Song <yhs@fb.com> wrote:
>
>
>
> On 8/19/20 2:24 AM, Lorenz Bauer wrote:
> > Add a test which copies a socket from a sockmap into another sockmap
> > or sockhash. This exercises bpf_map_update_elem support from BPF
> > context. Compare the socket cookies from source and destination to
> > ensure that the copy succeeded.
> >
> > Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
> > ---
> >   .../selftests/bpf/prog_tests/sockmap_basic.c  | 76 +++++++++++++++++++
> >   .../selftests/bpf/progs/test_sockmap_copy.c   | 48 ++++++++++++
> >   2 files changed, 124 insertions(+)
> >   create mode 100644 tools/testing/selftests/bpf/progs/test_sockmap_copy.c
> >
> > diff --git a/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c b/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
> > index 96e7b7f84c65..d30cabc00e9e 100644
> > --- a/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
> > +++ b/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
> > @@ -4,6 +4,7 @@
> >
> >   #include "test_progs.h"
> >   #include "test_skmsg_load_helpers.skel.h"
> > +#include "test_sockmap_copy.skel.h"
> >
> >   #define TCP_REPAIR          19      /* TCP sock is under repair right now */
> >
> > @@ -101,6 +102,77 @@ static void test_skmsg_helpers(enum bpf_map_type map_type)
> >       test_skmsg_load_helpers__destroy(skel);
> >   }
> >
> > +static void test_sockmap_copy(enum bpf_map_type map_type)
> > +{
> > +     struct bpf_prog_test_run_attr attr;
> > +     struct test_sockmap_copy *skel;
> > +     __u64 src_cookie, dst_cookie;
> > +     int err, prog, s, src, dst;
> > +     const __u32 zero = 0;
> > +     char dummy[14] = {0};
> > +
> > +     s = connected_socket_v4();
>
> Maybe change variable name to "sk" for better clarity?

Yup!

>
> > +     if (CHECK_FAIL(s == -1))
> > +             return;
> > +
> > +     skel = test_sockmap_copy__open_and_load();
> > +     if (CHECK_FAIL(!skel)) {
> > +             close(s);
> > +             perror("test_sockmap_copy__open_and_load");
> > +             return;
> > +     }
>
> Could you use CHECK instead of CHECK_FAIL?
> With CHECK, you can print additional information without perror.

I avoid CHECK because it requires `duration`, which doesn't make sense
for most things that I call CHECK_FAIL on here. So either it outputs 0
nsec (which is bogus) or it outputs the value from the last
bpf_prog_test_run call (which is also bogus). How do other tests
handle this? Just ignore it?

>
>
> > +
> > +     prog = bpf_program__fd(skel->progs.copy_sock_map);
> > +     src = bpf_map__fd(skel->maps.src);
> > +     if (map_type == BPF_MAP_TYPE_SOCKMAP)
> > +             dst = bpf_map__fd(skel->maps.dst_sock_map);
> > +     else
> > +             dst = bpf_map__fd(skel->maps.dst_sock_hash);
> > +
> > +     err = bpf_map_update_elem(src, &zero, &s, BPF_NOEXIST);
>
> The map value defined in the bpf program is __u64 and here "s" is an
> int. Any potential issues?

Hm, good point. This is a quirk of the sockmap API, I need to dig into
this a bit.
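My current (unverified) mental model is that the syscall update path
has to read a socket fd out of the value buffer regardless of the
declared width, roughly like the sketch below. This is an assumption
about how it could be handled, not the actual kernel code:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical sketch: extract a socket fd from a map value buffer
 * whose declared size is either 4 (a plain int fd) or 8 (matching
 * the __u64 cookie that lookups return). Not the kernel code. */
static int64_t read_sock_fd(const void *value, size_t value_size)
{
	if (value_size == sizeof(uint64_t)) {
		uint64_t ufd;

		memcpy(&ufd, value, sizeof(ufd));
		return (int64_t)ufd;
	} else {
		uint32_t ufd;

		memcpy(&ufd, value, sizeof(ufd));
		return (int64_t)ufd;
	}
}
```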

>
> > +     if (CHECK_FAIL(err)) {
> > +             perror("bpf_map_update");
> > +             goto out;
> > +     }
> > +
> > +     err = bpf_map_lookup_elem(src, &zero, &src_cookie);
> > +     if (CHECK_FAIL(err)) {
> > +             perror("bpf_map_lookup_elem(src)");
> > +             goto out;
> > +     }
> > +
> > +     attr = (struct bpf_prog_test_run_attr){
> > +             .prog_fd = prog,
> > +             .repeat = 1,
> > +             .data_in = dummy,
> > +             .data_size_in = sizeof(dummy),
> > +     };
> > +
> > +     err = bpf_prog_test_run_xattr(&attr);
> > +     if (err) {
>
> You can use CHECK macro here.
>
> > +             test__fail();
> > +             perror("bpf_prog_test_run");
> > +             goto out;
> > +     } else if (!attr.retval) {
> > +             PRINT_FAIL("bpf_prog_test_run: program returned %u\n",
> > +                        attr.retval);
> > +             goto out;
> > +     }
> > +
> > +     err = bpf_map_lookup_elem(dst, &zero, &dst_cookie);
> > +     if (CHECK_FAIL(err)) {
> > +             perror("bpf_map_lookup_elem(dst)");
> > +             goto out;
> > +     }
> > +
> > +     if (dst_cookie != src_cookie)
> > +             PRINT_FAIL("cookie %llu != %llu\n", dst_cookie, src_cookie);
>
> Just replace the whole if statement with a CHECK macro.

See above, re duration.

>
> > +
> > +out:
> > +     close(s);
> > +     test_sockmap_copy__destroy(skel);
> > +}
> > +
> >   void test_sockmap_basic(void)
> >   {
> >       if (test__start_subtest("sockmap create_update_free"))
> > @@ -111,4 +183,8 @@ void test_sockmap_basic(void)
> >               test_skmsg_helpers(BPF_MAP_TYPE_SOCKMAP);
> >       if (test__start_subtest("sockhash sk_msg load helpers"))
> >               test_skmsg_helpers(BPF_MAP_TYPE_SOCKHASH);
> > +     if (test__start_subtest("sockmap copy"))
> > +             test_sockmap_copy(BPF_MAP_TYPE_SOCKMAP);
> > +     if (test__start_subtest("sockhash copy"))
> > +             test_sockmap_copy(BPF_MAP_TYPE_SOCKHASH);
> >   }
> > diff --git a/tools/testing/selftests/bpf/progs/test_sockmap_copy.c b/tools/testing/selftests/bpf/progs/test_sockmap_copy.c
> > new file mode 100644
> > index 000000000000..9d0c9f28cab2
> > --- /dev/null
> > +++ b/tools/testing/selftests/bpf/progs/test_sockmap_copy.c
> > @@ -0,0 +1,48 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +// Copyright (c) 2020 Cloudflare
> > +#include "vmlinux.h"
> > +#include <bpf/bpf_helpers.h>
> > +
> > +struct {
> > +     __uint(type, BPF_MAP_TYPE_SOCKMAP);
> > +     __uint(max_entries, 1);
> > +     __type(key, __u32);
> > +     __type(value, __u64);
> > +} src SEC(".maps");
> > +
> > +struct {
> > +     __uint(type, BPF_MAP_TYPE_SOCKMAP);
> > +     __uint(max_entries, 1);
> > +     __type(key, __u32);
> > +     __type(value, __u64);
> > +} dst_sock_map SEC(".maps");
> > +
> > +struct {
> > +     __uint(type, BPF_MAP_TYPE_SOCKHASH);
> > +     __uint(max_entries, 1);
> > +     __type(key, __u32);
> > +     __type(value, __u64);
> > +} dst_sock_hash SEC(".maps");
> > +
> > +SEC("classifier/copy_sock_map")
> > +int copy_sock_map(void *ctx)
> > +{
> > +     struct bpf_sock *sk;
> > +     bool failed = false;
> > +     __u32 key = 0;
> > +
> > +     sk = bpf_map_lookup_elem(&src, &key);
> > +     if (!sk)
> > +             return SK_DROP;
> > +
> > +     if (bpf_map_update_elem(&dst_sock_map, &key, sk, 0))
> > +             failed = true;
> > +
> > +     if (bpf_map_update_elem(&dst_sock_hash, &key, sk, 0))
> > +             failed = true;
> > +
> > +     bpf_sk_release(sk);
> > +     return failed ? SK_DROP : SK_PASS;
> > +}
> > +
> > +char _license[] SEC("license") = "GPL";
> >



-- 
Lorenz Bauer  |  Systems Engineer
6th Floor, County Hall/The Riverside Building, SE1 7PB, UK

www.cloudflare.com

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH bpf-next 4/6] bpf: override the meaning of ARG_PTR_TO_MAP_VALUE for sockmap and sockhash
  2020-08-19 20:13   ` Yonghong Song
@ 2020-08-20 12:33     ` Lorenz Bauer
  0 siblings, 0 replies; 25+ messages in thread
From: Lorenz Bauer @ 2020-08-20 12:33 UTC (permalink / raw)
  To: Yonghong Song
  Cc: Jakub Sitnicki, John Fastabend, Alexei Starovoitov,
	Daniel Borkmann, kernel-team, Networking, bpf, LKML

On Wed, 19 Aug 2020 at 21:13, Yonghong Song <yhs@fb.com> wrote:
>
>
>
> On 8/19/20 2:24 AM, Lorenz Bauer wrote:
> > The verifier assumes that map values are simple blobs of memory, and
> > therefore treats ARG_PTR_TO_MAP_VALUE, etc. as such. However, there are
> > map types where this isn't true. For example, sockmap and sockhash store
> > sockets. In general this isn't a big problem: we can just
> > write helpers that explicitly request PTR_TO_SOCKET instead of
> > ARG_PTR_TO_MAP_VALUE.
> >
> > The one exception is the standard map helpers like map_update_elem,
> > map_lookup_elem, etc. Here it would be nice if we could overload the
> > function prototype for different kinds of maps. Unfortunately, this
> > isn't entirely straightforward:
> > We only know the type of the map once we have resolved meta->map_ptr
> > in check_func_arg. This means we can't swap out the prototype
> > in check_helper_call until we're half way through the function.
> >
> > Instead, modify check_func_arg to treat ARG_PTR_TO_MAP_VALUE* as
> > "the native type for the map" instead of "pointer to memory"
> > for sockmap and sockhash. This means we don't have to modify the
> > function prototype at all.
> >
> > Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
> > ---
> >   kernel/bpf/verifier.c | 40 ++++++++++++++++++++++++++++++++++++++++
> >   1 file changed, 40 insertions(+)
> >
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index b6ccfce3bf4c..47f9b94bb9d4 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -3872,6 +3872,38 @@ static int int_ptr_type_to_size(enum bpf_arg_type type)
> >       return -EINVAL;
> >   }
> >
> > +static int override_map_arg_type(struct bpf_verifier_env *env,
> > +                              const struct bpf_call_arg_meta *meta,
> > +                              enum bpf_arg_type *arg_type)
> > +{
> > +     if (!meta->map_ptr) {
> > +             /* kernel subsystem misconfigured verifier */
> > +             verbose(env, "invalid map_ptr to access map->type\n");
> > +             return -EACCES;
> > +     }
> > +
> > +     switch (meta->map_ptr->map_type) {
> > +     case BPF_MAP_TYPE_SOCKMAP:
> > +     case BPF_MAP_TYPE_SOCKHASH:
> > +             switch (*arg_type) {
> > +             case ARG_PTR_TO_MAP_VALUE:
> > +                     *arg_type = ARG_PTR_TO_SOCKET;
> > +                     break;
> > +             case ARG_PTR_TO_MAP_VALUE_OR_NULL:
> > +                     *arg_type = ARG_PTR_TO_SOCKET_OR_NULL;
> > +                     break;
> > +             default:
> > +                     verbose(env, "invalid arg_type for sockmap/sockhash\n");
> > +                     return -EINVAL;
> > +             }
> > +             break;
> > +
> > +     default:
> > +             break;
> > +     }
> > +     return 0;
> > +}
> > +
> >   static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
> >                         struct bpf_call_arg_meta *meta,
> >                         const struct bpf_func_proto *fn)
> > @@ -3904,6 +3936,14 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
> >               return -EACCES;
> >       }
> >
> > +     if (arg_type == ARG_PTR_TO_MAP_VALUE ||
> > +         arg_type == ARG_PTR_TO_UNINIT_MAP_VALUE ||
> > +         arg_type == ARG_PTR_TO_MAP_VALUE_OR_NULL) {
>
> We probably do not need ARG_PTR_TO_UNINIT_MAP_VALUE here.
>
> Do we need ARG_PTR_TO_MAP_VALUE_OR_NULL? bpf_map_update_elem arg type
> is ARG_PTR_TO_MAP_VALUE.

I did this to be consistent: in a single function definition you
either get the map-specific types or the regular semantics. You don't
get to mix and match them. For the same reason I included
ARG_PTR_TO_UNINIT_MAP_VALUE: the semantics don't make sense for
sockmap, so a function using it doesn't make sense either.
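To illustrate, the override boils down to something like this
standalone sketch (the enums are simplified stand-ins for the kernel's
bpf_arg_type / bpf_map_type, not the real definitions):

```c
#include <errno.h>

/* Simplified stand-ins for the kernel enums, for illustration only. */
enum arg_type {
	ARG_PTR_TO_MAP_VALUE,
	ARG_PTR_TO_UNINIT_MAP_VALUE,
	ARG_PTR_TO_MAP_VALUE_OR_NULL,
	ARG_PTR_TO_SOCKET,
	ARG_PTR_TO_SOCKET_OR_NULL,
};

enum map_type {
	MAP_TYPE_ARRAY,
	MAP_TYPE_SOCKMAP,
	MAP_TYPE_SOCKHASH,
};

/* Mirrors the shape of override_map_arg_type() from patch 4: for
 * sockmap/sockhash, ARG_PTR_TO_MAP_VALUE* means "the native type for
 * the map" (a socket pointer). Combinations that make no sense for
 * these maps, like ARG_PTR_TO_UNINIT_MAP_VALUE, are rejected. */
static int override_map_arg_type(enum map_type map, enum arg_type *arg)
{
	if (map != MAP_TYPE_SOCKMAP && map != MAP_TYPE_SOCKHASH)
		return 0; /* regular maps keep the memory semantics */

	switch (*arg) {
	case ARG_PTR_TO_MAP_VALUE:
		*arg = ARG_PTR_TO_SOCKET;
		return 0;
	case ARG_PTR_TO_MAP_VALUE_OR_NULL:
		*arg = ARG_PTR_TO_SOCKET_OR_NULL;
		return 0;
	default:
		return -EINVAL;
	}
}
```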

>
> > +             err = override_map_arg_type(env, meta, &arg_type);
> > +             if (err)
> > +                     return err;
> > +     }
> > +
> >       if (arg_type == ARG_PTR_TO_MAP_KEY ||
> >           arg_type == ARG_PTR_TO_MAP_VALUE ||
> >           arg_type == ARG_PTR_TO_UNINIT_MAP_VALUE ||
> >



-- 
Lorenz Bauer  |  Systems Engineer
6th Floor, County Hall/The Riverside Building, SE1 7PB, UK

www.cloudflare.com

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH bpf-next 5/6] bpf: sockmap: allow update from BPF
  2020-08-20 11:33       ` Lorenz Bauer
@ 2020-08-20 14:45         ` Yonghong Song
  0 siblings, 0 replies; 25+ messages in thread
From: Yonghong Song @ 2020-08-20 14:45 UTC (permalink / raw)
  To: Lorenz Bauer, John Fastabend
  Cc: Jakub Sitnicki, Alexei Starovoitov, Daniel Borkmann,
	David S. Miller, Jakub Kicinski, kernel-team, Networking, bpf,
	LKML



On 8/20/20 4:33 AM, Lorenz Bauer wrote:
> On Wed, 19 Aug 2020 at 23:41, John Fastabend <john.fastabend@gmail.com> wrote:
>>
>> John Fastabend wrote:
>>> Lorenz Bauer wrote:
>>>> Allow calling bpf_map_update_elem on sockmap and sockhash from a BPF
>>>> context. The synchronization required for this is a bit fiddly: we
>>>> need to prevent the socket from changing its state while we add it
>>>> to the sockmap, since we rely on getting a callback via
>>>> sk_prot->unhash. However, we can't just lock_sock like in
>>>> sock_map_sk_acquire because that might sleep. So instead we disable
>>>> softirq processing and use bh_lock_sock to prevent further
>>>> modification.
>>>>
>>>> Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
>>>> ---
>>>>   kernel/bpf/verifier.c |  6 ++++--
>>>>   net/core/sock_map.c   | 24 ++++++++++++++++++++++++
>>>>   2 files changed, 28 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
>>>> index 47f9b94bb9d4..421fccf18dea 100644
>>>> --- a/kernel/bpf/verifier.c
>>>> +++ b/kernel/bpf/verifier.c
>>>> @@ -4254,7 +4254,8 @@ static int check_map_func_compatibility(struct bpf_verifier_env *env,
>>>>                  func_id != BPF_FUNC_map_delete_elem &&
>>>>                  func_id != BPF_FUNC_msg_redirect_map &&
>>>>                  func_id != BPF_FUNC_sk_select_reuseport &&
>>>> -               func_id != BPF_FUNC_map_lookup_elem)
>>>> +               func_id != BPF_FUNC_map_lookup_elem &&
>>>> +               func_id != BPF_FUNC_map_update_elem)
>>>>                      goto error;
>>>>              break;
>>>>      case BPF_MAP_TYPE_SOCKHASH:
>>>> @@ -4263,7 +4264,8 @@ static int check_map_func_compatibility(struct bpf_verifier_env *env,
>>>>                  func_id != BPF_FUNC_map_delete_elem &&
>>>>                  func_id != BPF_FUNC_msg_redirect_hash &&
>>>>                  func_id != BPF_FUNC_sk_select_reuseport &&
>>>> -               func_id != BPF_FUNC_map_lookup_elem)
>>>> +               func_id != BPF_FUNC_map_lookup_elem &&
>>>> +               func_id != BPF_FUNC_map_update_elem)
>>>
>>> I lost track of a detail here, map_lookup_elem should return
>>> PTR_TO_MAP_VALUE_OR_NULL but if we want to feed that back into
>>> the map_update_elem() we need to return PTR_TO_SOCKET_OR_NULL
>>> and then presumably have a null check to get a PTR_TO_SOCKET
>>> type as expect.
>>>
>>> Can we use the same logic for expected arg (previous patch) on the
>>> ret_type. Or did I miss it:/ Need some coffee I guess.
>>
>> OK, I tracked this down. It looks like we rely on mark_ptr_or_null_reg()
>> to update the reg->type to PTR_TO_SOCKET. I do wonder if it would be
>> a bit more straightforward to do something similar to the previous
>> patch and refine it earlier to PTR_TO_SOCKET_OR_NULL, but should be
>> safe as-is for now.
> 
> Yes, it took me a while to figure this out as well. I think we can use
> the same approach, but I wanted to keep this series simple.
> 
>> I still have the below question though.
>>
>>>
>>>>                      goto error;
>>>>              break;
>>>>      case BPF_MAP_TYPE_REUSEPORT_SOCKARRAY:
>>>> diff --git a/net/core/sock_map.c b/net/core/sock_map.c
>>>> index 018367fb889f..b2c886c34566 100644
>>>> --- a/net/core/sock_map.c
>>>> +++ b/net/core/sock_map.c
>>>> @@ -603,6 +603,28 @@ int sock_map_update_elem_sys(struct bpf_map *map, void *key,
>>>>      return ret;
>>>>   }
>>>>
>>>> +static int sock_map_update_elem(struct bpf_map *map, void *key,
>>>> +                           void *value, u64 flags)
>>>> +{
>>>> +   struct sock *sk = (struct sock *)value;
>>>> +   int ret;
>>>> +
>>>> +   if (!sock_map_sk_is_suitable(sk))
>>>> +           return -EOPNOTSUPP;
>>>> +
>>>> +   local_bh_disable();
>>>> +   bh_lock_sock(sk);
>>>
>>> How do we ensure we are not being called from some context which
>>> already has bh_lock_sock() held? It seems we can call map_update_elem()
>>> from any context: kprobes, tc, xdp, etc.?
> 
> Yeah, to be honest I'm not entirely sure.
> 
> XDP, TC, sk_lookup are fine I think. We have bpf_sk_lookup_tcp and
> friends, but these aren't locked, and the BPF doesn't run in a context
> where there is a locked socket.
> 
> As you point out, kprobes / tracing is problematic because the probe
> _can_ run at a point where an sk is locked. If the tracing program
> somehow gets a hold of this socket via sk_lookup_* or
> a sockmap the program could deadlock.

Thanks to John for bringing this up. I looked at the code a few times
but somehow missed the potential deadlock issues.

kprobes, non-iter tracing, perf_event programs, freplace of these,
etc., may have issues. Tracepoints probably not, since people
probably won't add a tracepoint inside a spinlock.

> 
> bpf_sock_ops is also problematic since ctx->sk is in various states of
> locking. For example, BPF_SOCK_OPS_TCP_LISTEN_CB is called with
> lock_sock held, so unproblematic. BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB
> on the other hand is called with the spinlock held.
> 
> It seems to me like the only option is to instead only allow updates
> from "safe" contexts, such as XDP, tc, bpf_iter etc.

This should be okay, I think. You can start from small and then
grows as more use cases emerge.

> 
> Am I missing something?
> 
> 
>>>
>>>> +   if (!sock_map_sk_state_allowed(sk))
>>>> +           ret = -EOPNOTSUPP;
>>>> +   else if (map->map_type == BPF_MAP_TYPE_SOCKMAP)
>>>> +           ret = sock_map_update_common(map, *(u32 *)key, sk, flags);
>>>> +   else
>>>> +           ret = sock_hash_update_common(map, key, sk, flags);
>>>> +   bh_unlock_sock(sk);
>>>> +   local_bh_enable();
>>>> +   return ret;
>>>> +}
>>>> +
>>>>   BPF_CALL_4(bpf_sock_map_update, struct bpf_sock_ops_kern *, sops,
>>>>         struct bpf_map *, map, void *, key, u64, flags)
>>>>   {
>>>> @@ -687,6 +709,7 @@ const struct bpf_map_ops sock_map_ops = {
>>>>      .map_free               = sock_map_free,
>>>>      .map_get_next_key       = sock_map_get_next_key,
>>>>      .map_lookup_elem_sys_only = sock_map_lookup_sys,
>>>> +   .map_update_elem        = sock_map_update_elem,
>>>>      .map_delete_elem        = sock_map_delete_elem,
>>>>      .map_lookup_elem        = sock_map_lookup,
>>>>      .map_release_uref       = sock_map_release_progs,
>>>> @@ -1180,6 +1203,7 @@ const struct bpf_map_ops sock_hash_ops = {
>>>>      .map_alloc              = sock_hash_alloc,
>>>>      .map_free               = sock_hash_free,
>>>>      .map_get_next_key       = sock_hash_get_next_key,
>>>> +   .map_update_elem        = sock_map_update_elem,
>>>>      .map_delete_elem        = sock_hash_delete_elem,
>>>>      .map_lookup_elem        = sock_hash_lookup,
>>>>      .map_lookup_elem_sys_only = sock_hash_lookup_sys,
>>>> --
>>>> 2.25.1
>>>>
>>>
>>>
>>
>>
> 
> 
> --
> Lorenz Bauer  |  Systems Engineer
> 6th Floor, County Hall/The Riverside Building, SE1 7PB, UK
> 
> www.cloudflare.com
> 

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH bpf-next 6/6] selftests: bpf: test sockmap update from BPF
  2020-08-20 11:58     ` Lorenz Bauer
@ 2020-08-20 14:49       ` Yonghong Song
  2020-08-20 16:12         ` Lorenz Bauer
  0 siblings, 1 reply; 25+ messages in thread
From: Yonghong Song @ 2020-08-20 14:49 UTC (permalink / raw)
  To: Lorenz Bauer
  Cc: Jakub Sitnicki, John Fastabend, Shuah Khan, Alexei Starovoitov,
	Daniel Borkmann, kernel-team, LKML, linux-kselftest, Networking,
	bpf



On 8/20/20 4:58 AM, Lorenz Bauer wrote:
> On Wed, 19 Aug 2020 at 21:46, Yonghong Song <yhs@fb.com> wrote:
>>
>>
>>
>> On 8/19/20 2:24 AM, Lorenz Bauer wrote:
>>> Add a test which copies a socket from a sockmap into another sockmap
>>> or sockhash. This exercises bpf_map_update_elem support from BPF
>>> context. Compare the socket cookies from source and destination to
>>> ensure that the copy succeeded.
>>>
>>> Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
>>> ---
>>>    .../selftests/bpf/prog_tests/sockmap_basic.c  | 76 +++++++++++++++++++
>>>    .../selftests/bpf/progs/test_sockmap_copy.c   | 48 ++++++++++++
>>>    2 files changed, 124 insertions(+)
>>>    create mode 100644 tools/testing/selftests/bpf/progs/test_sockmap_copy.c
>>>
>>> diff --git a/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c b/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
>>> index 96e7b7f84c65..d30cabc00e9e 100644
>>> --- a/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
>>> +++ b/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
>>> @@ -4,6 +4,7 @@
>>>
>>>    #include "test_progs.h"
>>>    #include "test_skmsg_load_helpers.skel.h"
>>> +#include "test_sockmap_copy.skel.h"
>>>
>>>    #define TCP_REPAIR          19      /* TCP sock is under repair right now */
>>>
>>> @@ -101,6 +102,77 @@ static void test_skmsg_helpers(enum bpf_map_type map_type)
>>>        test_skmsg_load_helpers__destroy(skel);
>>>    }
>>>
>>> +static void test_sockmap_copy(enum bpf_map_type map_type)
>>> +{
>>> +     struct bpf_prog_test_run_attr attr;
>>> +     struct test_sockmap_copy *skel;
>>> +     __u64 src_cookie, dst_cookie;
>>> +     int err, prog, s, src, dst;
>>> +     const __u32 zero = 0;
>>> +     char dummy[14] = {0};
>>> +
>>> +     s = connected_socket_v4();
>>
>> Maybe change variable name to "sk" for better clarity?
> 
> Yup!
> 
>>
>>> +     if (CHECK_FAIL(s == -1))
>>> +             return;
>>> +
>>> +     skel = test_sockmap_copy__open_and_load();
>>> +     if (CHECK_FAIL(!skel)) {
>>> +             close(s);
>>> +             perror("test_sockmap_copy__open_and_load");
>>> +             return;
>>> +     }
>>
>> Could you use CHECK instead of CHECK_FAIL?
>> With CHECK, you can print additional information without perror.
> 
> I avoid CHECK because it requires `duration`, which doesn't make sense
> for most things that I call CHECK_FAIL on here. So either it outputs 0
> nsec (which is bogus) or it outputs the value from the last
> bpf_prog_test_run call (which is also bogus). How do other tests
> handle this? Just ignore it?

Just ignore it. You can define a static variable duration at the
beginning of the file and then use CHECK in the rest of the file.
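A minimal sketch of what I mean — a simplified stand-in, not the real
macro from tools/testing/selftests/bpf/test_progs.h, which prints more
detail:

```c
#include <stdio.h>

/* Simplified stand-in for the selftests' CHECK macro: evaluates the
 * condition, counts failures, and prints the (possibly meaningless)
 * duration. The point is only that a file-scope static duration
 * satisfies the macro's interface. */
static unsigned long long duration;
static int error_cnt;

#define CHECK(condition, tag, format...) ({			\
	int __ret = !!(condition);				\
	if (__ret) {						\
		error_cnt++;					\
		fprintf(stderr, "FAIL:%s (%llu ns) ",		\
			tag, duration);				\
		fprintf(stderr, format);			\
	}							\
	__ret;							\
})
```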

> 
>>
>>
>>> +
>>> +     prog = bpf_program__fd(skel->progs.copy_sock_map);
>>> +     src = bpf_map__fd(skel->maps.src);
>>> +     if (map_type == BPF_MAP_TYPE_SOCKMAP)
>>> +             dst = bpf_map__fd(skel->maps.dst_sock_map);
>>> +     else
>>> +             dst = bpf_map__fd(skel->maps.dst_sock_hash);
>>> +
[...]

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH bpf-next 6/6] selftests: bpf: test sockmap update from BPF
  2020-08-20 14:49       ` Yonghong Song
@ 2020-08-20 16:12         ` Lorenz Bauer
  0 siblings, 0 replies; 25+ messages in thread
From: Lorenz Bauer @ 2020-08-20 16:12 UTC (permalink / raw)
  To: Yonghong Song
  Cc: Jakub Sitnicki, John Fastabend, Shuah Khan, Alexei Starovoitov,
	Daniel Borkmann, kernel-team, LKML, linux-kselftest, Networking,
	bpf

On Thu, 20 Aug 2020 at 15:49, Yonghong Song <yhs@fb.com> wrote:
>
>
>
> On 8/20/20 4:58 AM, Lorenz Bauer wrote:
> > On Wed, 19 Aug 2020 at 21:46, Yonghong Song <yhs@fb.com> wrote:
> >>
> >>
> >>
> >> On 8/19/20 2:24 AM, Lorenz Bauer wrote:
> >>> Add a test which copies a socket from a sockmap into another sockmap
> >>> or sockhash. This exercises bpf_map_update_elem support from BPF
> >>> context. Compare the socket cookies from source and destination to
> >>> ensure that the copy succeeded.
> >>>
> >>> Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
> >>> ---
> >>>    .../selftests/bpf/prog_tests/sockmap_basic.c  | 76 +++++++++++++++++++
> >>>    .../selftests/bpf/progs/test_sockmap_copy.c   | 48 ++++++++++++
> >>>    2 files changed, 124 insertions(+)
> >>>    create mode 100644 tools/testing/selftests/bpf/progs/test_sockmap_copy.c
> >>>
> >>> diff --git a/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c b/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
> >>> index 96e7b7f84c65..d30cabc00e9e 100644
> >>> --- a/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
> >>> +++ b/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
> >>> @@ -4,6 +4,7 @@
> >>>
> >>>    #include "test_progs.h"
> >>>    #include "test_skmsg_load_helpers.skel.h"
> >>> +#include "test_sockmap_copy.skel.h"
> >>>
> >>>    #define TCP_REPAIR          19      /* TCP sock is under repair right now */
> >>>
> >>> @@ -101,6 +102,77 @@ static void test_skmsg_helpers(enum bpf_map_type map_type)
> >>>        test_skmsg_load_helpers__destroy(skel);
> >>>    }
> >>>
> >>> +static void test_sockmap_copy(enum bpf_map_type map_type)
> >>> +{
> >>> +     struct bpf_prog_test_run_attr attr;
> >>> +     struct test_sockmap_copy *skel;
> >>> +     __u64 src_cookie, dst_cookie;
> >>> +     int err, prog, s, src, dst;
> >>> +     const __u32 zero = 0;
> >>> +     char dummy[14] = {0};
> >>> +
> >>> +     s = connected_socket_v4();
> >>
> >> Maybe change variable name to "sk" for better clarity?
> >
> > Yup!
> >
> >>
> >>> +     if (CHECK_FAIL(s == -1))
> >>> +             return;
> >>> +
> >>> +     skel = test_sockmap_copy__open_and_load();
> >>> +     if (CHECK_FAIL(!skel)) {
> >>> +             close(s);
> >>> +             perror("test_sockmap_copy__open_and_load");
> >>> +             return;
> >>> +     }
> >>
> >> Could you use CHECK instead of CHECK_FAIL?
> >> With CHECK, you can print additional information without perror.
> >
> > I avoid CHECK because it requires `duration`, which doesn't make sense
> > for most things that I call CHECK_FAIL on here. So either it outputs 0
> > nsec (which is bogus) or it outputs the value from the last
> > bpf_prog_test_run call (which is also bogus). How do other tests
> > handle this? Just ignore it?
>
> Just ignore it. You can define a static variable duration in the
> beginning of file and then use CHECK in the rest of file.

Ok, will do in v3!

>
> >
> >>
> >>
> >>> +
> >>> +     prog = bpf_program__fd(skel->progs.copy_sock_map);
> >>> +     src = bpf_map__fd(skel->maps.src);
> >>> +     if (map_type == BPF_MAP_TYPE_SOCKMAP)
> >>> +             dst = bpf_map__fd(skel->maps.dst_sock_map);
> >>> +     else
> >>> +             dst = bpf_map__fd(skel->maps.dst_sock_hash);
> >>> +
> [...]



-- 
Lorenz Bauer  |  Systems Engineer
6th Floor, County Hall/The Riverside Building, SE1 7PB, UK

www.cloudflare.com


end of thread, other threads:[~2020-08-20 16:12 UTC | newest]

Thread overview: 25+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-08-19  9:24 [PATCH bpf-next 0/6] Allow updating sockmap / sockhash from BPF Lorenz Bauer
2020-08-19  9:24 ` [PATCH bpf-next 1/6] net: sk_msg: simplify sk_psock initialization Lorenz Bauer
2020-08-19 20:05   ` John Fastabend
2020-08-19  9:24 ` [PATCH bpf-next 2/6] bpf: sockmap: merge sockmap and sockhash update functions Lorenz Bauer
2020-08-19 15:48   ` Jakub Kicinski
2020-08-19 18:50   ` Yonghong Song
2020-08-19 20:11   ` John Fastabend
2020-08-19  9:24 ` [PATCH bpf-next 3/6] bpf: sockmap: call sock_map_update_elem directly Lorenz Bauer
2020-08-19 19:15   ` Yonghong Song
2020-08-19 20:29   ` John Fastabend
2020-08-19  9:24 ` [PATCH bpf-next 4/6] bpf: override the meaning of ARG_PTR_TO_MAP_VALUE for sockmap and sockhash Lorenz Bauer
2020-08-19 20:13   ` Yonghong Song
2020-08-20 12:33     ` Lorenz Bauer
2020-08-19 20:51   ` John Fastabend
2020-08-19  9:24 ` [PATCH bpf-next 5/6] bpf: sockmap: allow update from BPF Lorenz Bauer
2020-08-19 20:35   ` Yonghong Song
2020-08-19 21:22   ` John Fastabend
2020-08-19 22:41     ` John Fastabend
2020-08-20 11:33       ` Lorenz Bauer
2020-08-20 14:45         ` Yonghong Song
2020-08-19  9:24 ` [PATCH bpf-next 6/6] selftests: bpf: test sockmap " Lorenz Bauer
2020-08-19 20:45   ` Yonghong Song
2020-08-20 11:58     ` Lorenz Bauer
2020-08-20 14:49       ` Yonghong Song
2020-08-20 16:12         ` Lorenz Bauer
