Netdev Archive on lore.kernel.org
* [PATCHv3 bpf-next 0/7] bpf, x86: Add bpf_get_func_ip helper
@ 2021-07-07 21:47 Jiri Olsa
  2021-07-07 21:47 ` [PATCHv3 bpf-next 1/7] bpf, x86: Store caller's ip in trampoline stack Jiri Olsa
                   ` (6 more replies)
  0 siblings, 7 replies; 22+ messages in thread
From: Jiri Olsa @ 2021-07-07 21:47 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Masami Hiramatsu, Alan Maguire

hi,
adding bpf_get_func_ip helper that returns the IP address of the
caller function for trampoline and kprobe programs.

There are two specific implementations of the bpf_get_func_ip
helper: one for trampoline programs and one for kprobe/kretprobe
programs.

The trampoline helper call is replaced/inlined by the verifier
with a single load instruction. The kprobe/kretprobe version is
an actual helper call that returns the prepared caller address.

Also available at:
  https://git.kernel.org/pub/scm/linux/kernel/git/jolsa/perf.git
  bpf/get_func_ip

v3 changes:
  - resend with Masami in cc and v3 in each patch subject

v2 changes:
  - use kprobe_running to get kprobe instead of cpu var [Masami]
  - added support to add kprobe on function+offset
    and test for that [Alan]

thanks,
jirka


---
Alan Maguire (1):
      libbpf: allow specification of "kprobe/function+offset"

Jiri Olsa (6):
      bpf, x86: Store caller's ip in trampoline stack
      bpf: Enable BPF_TRAMP_F_IP_ARG for trampolines with call_get_func_ip
      bpf: Add bpf_get_func_ip helper for tracing programs
      bpf: Add bpf_get_func_ip helper for kprobe programs
      selftests/bpf: Add test for bpf_get_func_ip helper
      selftests/bpf: Add test for bpf_get_func_ip in kprobe+offset probe

 arch/x86/net/bpf_jit_comp.c                               | 19 +++++++++++++++++++
 include/linux/bpf.h                                       |  5 +++++
 include/linux/filter.h                                    |  3 ++-
 include/uapi/linux/bpf.h                                  |  7 +++++++
 kernel/bpf/trampoline.c                                   | 12 +++++++++---
 kernel/bpf/verifier.c                                     | 55 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
 kernel/trace/bpf_trace.c                                  | 32 ++++++++++++++++++++++++++++++++
 tools/include/uapi/linux/bpf.h                            |  7 +++++++
 tools/lib/bpf/libbpf.c                                    | 20 +++++++++++++++++---
 tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c | 42 ++++++++++++++++++++++++++++++++++++++++++
 tools/testing/selftests/bpf/progs/get_func_ip_test.c      | 75 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 11 files changed, 270 insertions(+), 7 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
 create mode 100644 tools/testing/selftests/bpf/progs/get_func_ip_test.c


^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCHv3 bpf-next 1/7] bpf, x86: Store caller's ip in trampoline stack
  2021-07-07 21:47 [PATCHv3 bpf-next 0/7] bpf, x86: Add bpf_get_func_ip helper Jiri Olsa
@ 2021-07-07 21:47 ` Jiri Olsa
  2021-07-07 21:47 ` [PATCHv3 bpf-next 2/7] bpf: Enable BPF_TRAMP_F_IP_ARG for trampolines with call_get_func_ip Jiri Olsa
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 22+ messages in thread
From: Jiri Olsa @ 2021-07-07 21:47 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Masami Hiramatsu, Alan Maguire

Storing caller's ip on the trampoline's stack. Trampoline programs
can read the IP at address (ctx - 8), so there's no change in the
program's arguments interface.

The IP address is taken from [fp + 8], which is the return address
from the initial 'call fentry' call to the trampoline.

This IP address will be returned via the bpf_get_func_ip helper,
which is added in the following patches.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 arch/x86/net/bpf_jit_comp.c | 19 +++++++++++++++++++
 include/linux/bpf.h         |  5 +++++
 2 files changed, 24 insertions(+)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index e835164189f1..c320b3ce7b58 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -1951,6 +1951,9 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
 	if (flags & BPF_TRAMP_F_CALL_ORIG)
 		stack_size += 8; /* room for return value of orig_call */
 
+	if (flags & BPF_TRAMP_F_IP_ARG)
+		stack_size += 8; /* room for IP address argument */
+
 	if (flags & BPF_TRAMP_F_SKIP_FRAME)
 		/* skip patched call instruction and point orig_call to actual
 		 * body of the kernel function.
@@ -1964,6 +1967,22 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
 	EMIT4(0x48, 0x83, 0xEC, stack_size); /* sub rsp, stack_size */
 	EMIT1(0x53);		 /* push rbx */
 
+	if (flags & BPF_TRAMP_F_IP_ARG) {
+		/* Store IP address of the traced function:
+		 * mov rax, QWORD PTR [rbp + 8]
+		 * sub rax, X86_PATCH_SIZE
+		 * mov QWORD PTR [rbp - stack_size], rax
+		 */
+		emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, 8);
+		EMIT4(0x48, 0x83, 0xe8, X86_PATCH_SIZE);
+		emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -stack_size);
+
+		/* Continue with stack_size for regs storage, stack will
+		 * be correctly restored with 'leave' instruction.
+		 */
+		stack_size -= 8;
+	}
+
 	save_regs(m, &prog, nr_args, stack_size);
 
 	if (flags & BPF_TRAMP_F_CALL_ORIG) {
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index f309fc1509f2..6b3da9bc3d16 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -554,6 +554,11 @@ struct btf_func_model {
  */
 #define BPF_TRAMP_F_SKIP_FRAME		BIT(2)
 
+/* Store IP address of the caller on the trampoline stack,
+ * so it's available for trampoline's programs.
+ */
+#define BPF_TRAMP_F_IP_ARG		BIT(3)
+
 /* Each call __bpf_prog_enter + call bpf_func + call __bpf_prog_exit is ~50
  * bytes on x86.  Pick a number to fit into BPF_IMAGE_SIZE / 2
  */
-- 
2.31.1



* [PATCHv3 bpf-next 2/7] bpf: Enable BPF_TRAMP_F_IP_ARG for trampolines with call_get_func_ip
  2021-07-07 21:47 [PATCHv3 bpf-next 0/7] bpf, x86: Add bpf_get_func_ip helper Jiri Olsa
  2021-07-07 21:47 ` [PATCHv3 bpf-next 1/7] bpf, x86: Store caller's ip in trampoline stack Jiri Olsa
@ 2021-07-07 21:47 ` Jiri Olsa
  2021-07-07 21:47 ` [PATCHv3 bpf-next 3/7] bpf: Add bpf_get_func_ip helper for tracing programs Jiri Olsa
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 22+ messages in thread
From: Jiri Olsa @ 2021-07-07 21:47 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Masami Hiramatsu, Alan Maguire

Enabling BPF_TRAMP_F_IP_ARG only for trampolines that actually need it.

BPF_TRAMP_F_IP_ARG adds 3 extra instructions to the trampoline code
and is used only by programs that call the bpf_get_func_ip helper,
which is added in the following patch and sets the call_get_func_ip
bit.

This patch ensures that the BPF_TRAMP_F_IP_ARG flag is used only for
trampolines that have programs with call_get_func_ip set.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 include/linux/filter.h  |  3 ++-
 kernel/bpf/trampoline.c | 12 +++++++++---
 2 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/include/linux/filter.h b/include/linux/filter.h
index 472f97074da0..ba36989f711a 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -559,7 +559,8 @@ struct bpf_prog {
 				kprobe_override:1, /* Do we override a kprobe? */
 				has_callchain_buf:1, /* callchain buffer allocated? */
 				enforce_expected_attach_type:1, /* Enforce expected_attach_type checking at attach time */
-				call_get_stack:1; /* Do we call bpf_get_stack() or bpf_get_stackid() */
+				call_get_stack:1, /* Do we call bpf_get_stack() or bpf_get_stackid() */
+				call_get_func_ip:1; /* Do we call get_func_ip() */
 	enum bpf_prog_type	type;		/* Type of BPF program */
 	enum bpf_attach_type	expected_attach_type; /* For some prog types */
 	u32			len;		/* Number of filter blocks */
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index 28a3630c48ee..b2535acfe9db 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -172,7 +172,7 @@ static int register_fentry(struct bpf_trampoline *tr, void *new_addr)
 }
 
 static struct bpf_tramp_progs *
-bpf_trampoline_get_progs(const struct bpf_trampoline *tr, int *total)
+bpf_trampoline_get_progs(const struct bpf_trampoline *tr, int *total, bool *ip_arg)
 {
 	const struct bpf_prog_aux *aux;
 	struct bpf_tramp_progs *tprogs;
@@ -189,8 +189,10 @@ bpf_trampoline_get_progs(const struct bpf_trampoline *tr, int *total)
 		*total += tr->progs_cnt[kind];
 		progs = tprogs[kind].progs;
 
-		hlist_for_each_entry(aux, &tr->progs_hlist[kind], tramp_hlist)
+		hlist_for_each_entry(aux, &tr->progs_hlist[kind], tramp_hlist) {
+			*ip_arg |= aux->prog->call_get_func_ip;
 			*progs++ = aux->prog;
+		}
 	}
 	return tprogs;
 }
@@ -333,9 +335,10 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr)
 	struct bpf_tramp_image *im;
 	struct bpf_tramp_progs *tprogs;
 	u32 flags = BPF_TRAMP_F_RESTORE_REGS;
+	bool ip_arg = false;
 	int err, total;
 
-	tprogs = bpf_trampoline_get_progs(tr, &total);
+	tprogs = bpf_trampoline_get_progs(tr, &total, &ip_arg);
 	if (IS_ERR(tprogs))
 		return PTR_ERR(tprogs);
 
@@ -357,6 +360,9 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr)
 	    tprogs[BPF_TRAMP_MODIFY_RETURN].nr_progs)
 		flags = BPF_TRAMP_F_CALL_ORIG | BPF_TRAMP_F_SKIP_FRAME;
 
+	if (ip_arg)
+		flags |= BPF_TRAMP_F_IP_ARG;
+
 	err = arch_prepare_bpf_trampoline(im, im->image, im->image + PAGE_SIZE,
 					  &tr->func.model, flags, tprogs,
 					  tr->func.addr);
-- 
2.31.1



* [PATCHv3 bpf-next 3/7] bpf: Add bpf_get_func_ip helper for tracing programs
  2021-07-07 21:47 [PATCHv3 bpf-next 0/7] bpf, x86: Add bpf_get_func_ip helper Jiri Olsa
  2021-07-07 21:47 ` [PATCHv3 bpf-next 1/7] bpf, x86: Store caller's ip in trampoline stack Jiri Olsa
  2021-07-07 21:47 ` [PATCHv3 bpf-next 2/7] bpf: Enable BPF_TRAMP_F_IP_ARG for trampolines with call_get_func_ip Jiri Olsa
@ 2021-07-07 21:47 ` Jiri Olsa
  2021-07-08  0:06   ` Andrii Nakryiko
  2021-07-08  2:11   ` Alexei Starovoitov
  2021-07-07 21:47 ` [PATCHv3 bpf-next 4/7] bpf: Add bpf_get_func_ip helper for kprobe programs Jiri Olsa
                   ` (3 subsequent siblings)
  6 siblings, 2 replies; 22+ messages in thread
From: Jiri Olsa @ 2021-07-07 21:47 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: kernel test robot, Dan Carpenter, netdev, bpf, Martin KaFai Lau,
	Song Liu, Yonghong Song, John Fastabend, KP Singh,
	Masami Hiramatsu, Alan Maguire

Adding bpf_get_func_ip helper for BPF_PROG_TYPE_TRACING programs,
specifically for all trampoline attach types.

The trampoline's caller IP address is stored at address (ctx - 8),
so there's no need to actually call the helper; instead the verifier
fixes up the call instruction and returns the [ctx - 8] value
directly (suggested by Alexei).

[fixed has_get_func_ip wrong return type]
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 include/uapi/linux/bpf.h       |  7 +++++
 kernel/bpf/verifier.c          | 53 ++++++++++++++++++++++++++++++++++
 kernel/trace/bpf_trace.c       | 15 ++++++++++
 tools/include/uapi/linux/bpf.h |  7 +++++
 4 files changed, 82 insertions(+)

diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index bf9252c7381e..83e87ffdbb6e 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -4780,6 +4780,12 @@ union bpf_attr {
  * 		Execute close syscall for given FD.
  * 	Return
  * 		A syscall result.
+ *
+ * u64 bpf_get_func_ip(void *ctx)
+ * 	Description
+ * 		Get address of the traced function (for tracing programs).
+ * 	Return
+ * 		Address of the traced function.
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -4951,6 +4957,7 @@ union bpf_attr {
 	FN(sys_bpf),			\
 	FN(btf_find_by_name_kind),	\
 	FN(sys_close),			\
+	FN(get_func_ip),		\
 	/* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index be38bb930bf1..f975a3aa9368 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -5149,6 +5149,11 @@ static bool allow_tail_call_in_subprogs(struct bpf_verifier_env *env)
 	return env->prog->jit_requested && IS_ENABLED(CONFIG_X86_64);
 }
 
+static bool allow_get_func_ip_tracing(struct bpf_verifier_env *env)
+{
+	return env->prog->jit_requested && IS_ENABLED(CONFIG_X86_64);
+}
+
 static int check_map_func_compatibility(struct bpf_verifier_env *env,
 					struct bpf_map *map, int func_id)
 {
@@ -5955,6 +5960,32 @@ static int check_bpf_snprintf_call(struct bpf_verifier_env *env,
 	return err;
 }
 
+static int has_get_func_ip(struct bpf_verifier_env *env)
+{
+	enum bpf_attach_type eatype = env->prog->expected_attach_type;
+	enum bpf_prog_type type = resolve_prog_type(env->prog);
+	int func_id = BPF_FUNC_get_func_ip;
+
+	if (type == BPF_PROG_TYPE_TRACING) {
+		if (eatype != BPF_TRACE_FENTRY && eatype != BPF_TRACE_FEXIT &&
+		    eatype != BPF_MODIFY_RETURN) {
+			verbose(env, "func %s#%d supported only for fentry/fexit/fmod_ret programs\n",
+				func_id_name(func_id), func_id);
+			return -ENOTSUPP;
+		}
+		if (!allow_get_func_ip_tracing(env)) {
+			verbose(env, "func %s#%d for tracing programs supported only for JITed x86_64\n",
+				func_id_name(func_id), func_id);
+			return -ENOTSUPP;
+		}
+		return 0;
+	}
+
+	verbose(env, "func %s#%d not supported for program type %d\n",
+		func_id_name(func_id), func_id, type);
+	return -ENOTSUPP;
+}
+
 static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 			     int *insn_idx_p)
 {
@@ -6225,6 +6256,12 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
 	if (func_id == BPF_FUNC_get_stackid || func_id == BPF_FUNC_get_stack)
 		env->prog->call_get_stack = true;
 
+	if (func_id == BPF_FUNC_get_func_ip) {
+		if (has_get_func_ip(env))
+			return -ENOTSUPP;
+		env->prog->call_get_func_ip = true;
+	}
+
 	if (changes_data)
 		clear_all_pkt_pointers(env);
 	return 0;
@@ -12369,6 +12406,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 {
 	struct bpf_prog *prog = env->prog;
 	bool expect_blinding = bpf_jit_blinding_enabled(prog);
+	enum bpf_prog_type prog_type = resolve_prog_type(prog);
 	struct bpf_insn *insn = prog->insnsi;
 	const struct bpf_func_proto *fn;
 	const int insn_cnt = prog->len;
@@ -12702,6 +12740,21 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			continue;
 		}
 
+		/* Implement bpf_get_func_ip inline. */
+		if (prog_type == BPF_PROG_TYPE_TRACING &&
+		    insn->imm == BPF_FUNC_get_func_ip) {
+			/* Load IP address from ctx - 8 */
+			insn_buf[0] = BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8);
+
+			new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, 1);
+			if (!new_prog)
+				return -ENOMEM;
+
+			env->prog = prog = new_prog;
+			insn      = new_prog->insnsi + i + delta;
+			continue;
+		}
+
 patch_call_imm:
 		fn = env->ops->get_func_proto(insn->imm, env->prog);
 		/* all functions that have prototype and verifier allowed
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 64bd2d84367f..9edd3b1a00ad 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -948,6 +948,19 @@ const struct bpf_func_proto bpf_snprintf_btf_proto = {
 	.arg5_type	= ARG_ANYTHING,
 };
 
+BPF_CALL_1(bpf_get_func_ip_tracing, void *, ctx)
+{
+	/* Stub, the helper call is inlined in the program. */
+	return 0;
+}
+
+static const struct bpf_func_proto bpf_get_func_ip_proto_tracing = {
+	.func		= bpf_get_func_ip_tracing,
+	.gpl_only	= true,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_PTR_TO_CTX,
+};
+
 const struct bpf_func_proto *
 bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 {
@@ -1058,6 +1071,8 @@ bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 		return &bpf_for_each_map_elem_proto;
 	case BPF_FUNC_snprintf:
 		return &bpf_snprintf_proto;
+	case BPF_FUNC_get_func_ip:
+		return &bpf_get_func_ip_proto_tracing;
 	default:
 		return NULL;
 	}
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index bf9252c7381e..83e87ffdbb6e 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -4780,6 +4780,12 @@ union bpf_attr {
  * 		Execute close syscall for given FD.
  * 	Return
  * 		A syscall result.
+ *
+ * u64 bpf_get_func_ip(void *ctx)
+ * 	Description
+ * 		Get address of the traced function (for tracing programs).
+ * 	Return
+ * 		Address of the traced function.
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -4951,6 +4957,7 @@ union bpf_attr {
 	FN(sys_bpf),			\
 	FN(btf_find_by_name_kind),	\
 	FN(sys_close),			\
+	FN(get_func_ip),		\
 	/* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
-- 
2.31.1



* [PATCHv3 bpf-next 4/7] bpf: Add bpf_get_func_ip helper for kprobe programs
  2021-07-07 21:47 [PATCHv3 bpf-next 0/7] bpf, x86: Add bpf_get_func_ip helper Jiri Olsa
                   ` (2 preceding siblings ...)
  2021-07-07 21:47 ` [PATCHv3 bpf-next 3/7] bpf: Add bpf_get_func_ip helper for tracing programs Jiri Olsa
@ 2021-07-07 21:47 ` Jiri Olsa
  2021-07-10  7:55   ` Masami Hiramatsu
  2021-07-07 21:47 ` [PATCHv3 bpf-next 5/7] selftests/bpf: Add test for bpf_get_func_ip helper Jiri Olsa
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 22+ messages in thread
From: Jiri Olsa @ 2021-07-07 21:47 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Masami Hiramatsu, Alan Maguire

Adding bpf_get_func_ip helper for BPF_PROG_TYPE_KPROBE programs,
so it's now possible to call bpf_get_func_ip from both kprobe and
kretprobe programs.

The caller's address is taken from 'struct kprobe::addr', which is
defined for both kprobe and kretprobe.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 include/uapi/linux/bpf.h       |  2 +-
 kernel/bpf/verifier.c          |  2 ++
 kernel/trace/bpf_trace.c       | 17 +++++++++++++++++
 tools/include/uapi/linux/bpf.h |  2 +-
 4 files changed, 21 insertions(+), 2 deletions(-)

diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 83e87ffdbb6e..4894f99a1993 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -4783,7 +4783,7 @@ union bpf_attr {
  *
  * u64 bpf_get_func_ip(void *ctx)
  * 	Description
- * 		Get address of the traced function (for tracing programs).
+ * 		Get address of the traced function (for tracing and kprobe programs).
  * 	Return
  * 		Address of the traced function.
  */
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index f975a3aa9368..79eb9d81a198 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -5979,6 +5979,8 @@ static int has_get_func_ip(struct bpf_verifier_env *env)
 			return -ENOTSUPP;
 		}
 		return 0;
+	} else if (type == BPF_PROG_TYPE_KPROBE) {
+		return 0;
 	}
 
 	verbose(env, "func %s#%d not supported for program type %d\n",
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 9edd3b1a00ad..55acf56b0c3a 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -17,6 +17,7 @@
 #include <linux/error-injection.h>
 #include <linux/btf_ids.h>
 #include <linux/bpf_lsm.h>
+#include <linux/kprobes.h>
 
 #include <net/bpf_sk_storage.h>
 
@@ -961,6 +962,20 @@ static const struct bpf_func_proto bpf_get_func_ip_proto_tracing = {
 	.arg1_type	= ARG_PTR_TO_CTX,
 };
 
+BPF_CALL_1(bpf_get_func_ip_kprobe, struct pt_regs *, regs)
+{
+	struct kprobe *kp = kprobe_running();
+
+	return kp ? (u64) kp->addr : 0;
+}
+
+static const struct bpf_func_proto bpf_get_func_ip_proto_kprobe = {
+	.func		= bpf_get_func_ip_kprobe,
+	.gpl_only	= true,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_PTR_TO_CTX,
+};
+
 const struct bpf_func_proto *
 bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 {
@@ -1092,6 +1107,8 @@ kprobe_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 	case BPF_FUNC_override_return:
 		return &bpf_override_return_proto;
 #endif
+	case BPF_FUNC_get_func_ip:
+		return &bpf_get_func_ip_proto_kprobe;
 	default:
 		return bpf_tracing_func_proto(func_id, prog);
 	}
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 83e87ffdbb6e..4894f99a1993 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -4783,7 +4783,7 @@ union bpf_attr {
  *
  * u64 bpf_get_func_ip(void *ctx)
  * 	Description
- * 		Get address of the traced function (for tracing programs).
+ * 		Get address of the traced function (for tracing and kprobe programs).
  * 	Return
  * 		Address of the traced function.
  */
-- 
2.31.1



* [PATCHv3 bpf-next 5/7] selftests/bpf: Add test for bpf_get_func_ip helper
  2021-07-07 21:47 [PATCHv3 bpf-next 0/7] bpf, x86: Add bpf_get_func_ip helper Jiri Olsa
                   ` (3 preceding siblings ...)
  2021-07-07 21:47 ` [PATCHv3 bpf-next 4/7] bpf: Add bpf_get_func_ip helper for kprobe programs Jiri Olsa
@ 2021-07-07 21:47 ` Jiri Olsa
  2021-07-08  0:12   ` Andrii Nakryiko
  2021-07-07 21:47 ` [PATCHv3 bpf-next 6/7] libbpf: allow specification of "kprobe/function+offset" Jiri Olsa
  2021-07-07 21:47 ` [PATCHv3 bpf-next 7/7] selftests/bpf: Add test for bpf_get_func_ip in kprobe+offset probe Jiri Olsa
  6 siblings, 1 reply; 22+ messages in thread
From: Jiri Olsa @ 2021-07-07 21:47 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Masami Hiramatsu, Alan Maguire

Adding test for bpf_get_func_ip helper for fentry, fexit,
kprobe, kretprobe and fmod_ret programs.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 .../bpf/prog_tests/get_func_ip_test.c         | 42 +++++++++++++
 .../selftests/bpf/progs/get_func_ip_test.c    | 62 +++++++++++++++++++
 2 files changed, 104 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
 create mode 100644 tools/testing/selftests/bpf/progs/get_func_ip_test.c

diff --git a/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c b/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
new file mode 100644
index 000000000000..06d34f566bbb
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
@@ -0,0 +1,42 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <test_progs.h>
+#include "get_func_ip_test.skel.h"
+
+void test_get_func_ip_test(void)
+{
+	struct get_func_ip_test *skel = NULL;
+	__u32 duration = 0, retval;
+	int err, prog_fd, i;
+	__u64 *result;
+
+	skel = get_func_ip_test__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "get_func_ip_test__open_and_load"))
+		goto cleanup;
+
+	err = get_func_ip_test__attach(skel);
+	if (!ASSERT_OK(err, "get_func_ip_test__attach"))
+		goto cleanup;
+
+	prog_fd = bpf_program__fd(skel->progs.test1);
+	err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
+				NULL, NULL, &retval, &duration);
+	ASSERT_OK(err, "test_run");
+	ASSERT_EQ(retval, 0, "test_run");
+
+	prog_fd = bpf_program__fd(skel->progs.fmod_ret_test);
+	err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
+				NULL, NULL, &retval, &duration);
+
+	ASSERT_OK(err, "test_run");
+
+	result = (__u64 *)skel->bss;
+	for (i = 0; i < sizeof(*skel->bss) / sizeof(__u64); i++) {
+		if (!ASSERT_EQ(result[i], 1, "fentry_result"))
+			break;
+	}
+
+	get_func_ip_test__detach(skel);
+
+cleanup:
+	get_func_ip_test__destroy(skel);
+}
diff --git a/tools/testing/selftests/bpf/progs/get_func_ip_test.c b/tools/testing/selftests/bpf/progs/get_func_ip_test.c
new file mode 100644
index 000000000000..8ca54390d2b1
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/get_func_ip_test.c
@@ -0,0 +1,62 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+extern const void bpf_fentry_test1 __ksym;
+extern const void bpf_fentry_test2 __ksym;
+extern const void bpf_fentry_test3 __ksym;
+extern const void bpf_fentry_test4 __ksym;
+extern const void bpf_modify_return_test __ksym;
+
+__u64 test1_result = 0;
+SEC("fentry/bpf_fentry_test1")
+int BPF_PROG(test1, int a)
+{
+	__u64 addr = bpf_get_func_ip(ctx);
+
+	test1_result = (const void *) addr == &bpf_fentry_test1;
+	return 0;
+}
+
+__u64 test2_result = 0;
+SEC("fexit/bpf_fentry_test2")
+int BPF_PROG(test2, int a)
+{
+	__u64 addr = bpf_get_func_ip(ctx);
+
+	test2_result = (const void *) addr == &bpf_fentry_test2;
+	return 0;
+}
+
+__u64 test3_result = 0;
+SEC("kprobe/bpf_fentry_test3")
+int test3(struct pt_regs *ctx)
+{
+	__u64 addr = bpf_get_func_ip(ctx);
+
+	test3_result = (const void *) addr == &bpf_fentry_test3;
+	return 0;
+}
+
+__u64 test4_result = 0;
+SEC("kretprobe/bpf_fentry_test4")
+int BPF_KRETPROBE(test4)
+{
+	__u64 addr = bpf_get_func_ip(ctx);
+
+	test4_result = (const void *) addr == &bpf_fentry_test4;
+	return 0;
+}
+
+__u64 test5_result = 0;
+SEC("fmod_ret/bpf_modify_return_test")
+int BPF_PROG(fmod_ret_test, int a, int *b, int ret)
+{
+	__u64 addr = bpf_get_func_ip(ctx);
+
+	test5_result = (const void *) addr == &bpf_modify_return_test;
+	return ret;
+}
-- 
2.31.1



* [PATCHv3 bpf-next 6/7] libbpf: allow specification of "kprobe/function+offset"
  2021-07-07 21:47 [PATCHv3 bpf-next 0/7] bpf, x86: Add bpf_get_func_ip helper Jiri Olsa
                   ` (4 preceding siblings ...)
  2021-07-07 21:47 ` [PATCHv3 bpf-next 5/7] selftests/bpf: Add test for bpf_get_func_ip helper Jiri Olsa
@ 2021-07-07 21:47 ` Jiri Olsa
  2021-07-08  0:14   ` Andrii Nakryiko
  2021-07-07 21:47 ` [PATCHv3 bpf-next 7/7] selftests/bpf: Add test for bpf_get_func_ip in kprobe+offset probe Jiri Olsa
  6 siblings, 1 reply; 22+ messages in thread
From: Jiri Olsa @ 2021-07-07 21:47 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Masami Hiramatsu, Alan Maguire

From: Alan Maguire <alan.maguire@oracle.com>

kprobes can be placed on most instructions in a function, not
just the entry, and ftrace and bpftrace support the function+offset
notation for probe placement.  Adding parsing of func_name
into func+offset in bpf_program__attach_kprobe() allows the
user to specify

SEC("kprobe/bpf_fentry_test5+0x6")

...for example; the offset is then passed to perf_event_open_probe()
to support kprobe attachment at that offset.

Signed-off-by: Alan Maguire <alan.maguire@oracle.com>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 tools/lib/bpf/libbpf.c | 20 +++++++++++++++++---
 1 file changed, 17 insertions(+), 3 deletions(-)

diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 1e04ce724240..60c9e3e77684 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -10309,11 +10309,25 @@ struct bpf_link *bpf_program__attach_kprobe(struct bpf_program *prog,
 					    const char *func_name)
 {
 	char errmsg[STRERR_BUFSIZE];
+	char func[BPF_OBJ_NAME_LEN];
+	unsigned long offset = 0;
 	struct bpf_link *link;
-	int pfd, err;
+	int pfd, err, n;
+
+	n = sscanf(func_name, "%[a-zA-Z0-9_.]+%lx", func, &offset);
+	if (n < 1) {
+		err = -EINVAL;
+		pr_warn("kprobe name is invalid: %s\n", func_name);
+		return libbpf_err_ptr(err);
+	}
+	if (retprobe && offset != 0) {
+		err = -EINVAL;
+		pr_warn("kretprobes do not support offset specification\n");
+		return libbpf_err_ptr(err);
+	}
 
-	pfd = perf_event_open_probe(false /* uprobe */, retprobe, func_name,
-				    0 /* offset */, -1 /* pid */);
+	pfd = perf_event_open_probe(false /* uprobe */, retprobe, func,
+				    offset, -1 /* pid */);
 	if (pfd < 0) {
 		pr_warn("prog '%s': failed to create %s '%s' perf event: %s\n",
 			prog->name, retprobe ? "kretprobe" : "kprobe", func_name,
-- 
2.31.1



* [PATCHv3 bpf-next 7/7] selftests/bpf: Add test for bpf_get_func_ip in kprobe+offset probe
  2021-07-07 21:47 [PATCHv3 bpf-next 0/7] bpf, x86: Add bpf_get_func_ip helper Jiri Olsa
                   ` (5 preceding siblings ...)
  2021-07-07 21:47 ` [PATCHv3 bpf-next 6/7] libbpf: allow specification of "kprobe/function+offset" Jiri Olsa
@ 2021-07-07 21:47 ` Jiri Olsa
  2021-07-08  0:18   ` Andrii Nakryiko
  6 siblings, 1 reply; 22+ messages in thread
From: Jiri Olsa @ 2021-07-07 21:47 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Masami Hiramatsu, Alan Maguire

Adding test for bpf_get_func_ip in a kprobe+offset probe.
Because the offset value is arch-specific, the test is added
only for the x86_64 architecture.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 .../testing/selftests/bpf/progs/get_func_ip_test.c  | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/tools/testing/selftests/bpf/progs/get_func_ip_test.c b/tools/testing/selftests/bpf/progs/get_func_ip_test.c
index 8ca54390d2b1..e8a9428a0ea3 100644
--- a/tools/testing/selftests/bpf/progs/get_func_ip_test.c
+++ b/tools/testing/selftests/bpf/progs/get_func_ip_test.c
@@ -10,6 +10,7 @@ extern const void bpf_fentry_test2 __ksym;
 extern const void bpf_fentry_test3 __ksym;
 extern const void bpf_fentry_test4 __ksym;
 extern const void bpf_modify_return_test __ksym;
+extern const void bpf_fentry_test6 __ksym;
 
 __u64 test1_result = 0;
 SEC("fentry/bpf_fentry_test1")
@@ -60,3 +61,15 @@ int BPF_PROG(fmod_ret_test, int a, int *b, int ret)
 	test5_result = (const void *) addr == &bpf_modify_return_test;
 	return ret;
 }
+
+#ifdef __x86_64__
+__u64 test6_result = 0;
+SEC("kprobe/bpf_fentry_test6+0x5")
+int test6(struct pt_regs *ctx)
+{
+	__u64 addr = bpf_get_func_ip(ctx);
+
+	test6_result = (const void *) addr == &bpf_fentry_test6 + 5;
+	return 0;
+}
+#endif
-- 
2.31.1



* Re: [PATCHv3 bpf-next 3/7] bpf: Add bpf_get_func_ip helper for tracing programs
  2021-07-07 21:47 ` [PATCHv3 bpf-next 3/7] bpf: Add bpf_get_func_ip helper for tracing programs Jiri Olsa
@ 2021-07-08  0:06   ` Andrii Nakryiko
  2021-07-11 14:48     ` Jiri Olsa
  2021-07-08  2:11   ` Alexei Starovoitov
  1 sibling, 1 reply; 22+ messages in thread
From: Andrii Nakryiko @ 2021-07-08  0:06 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	kernel test robot, Dan Carpenter, Networking, bpf,
	Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Masami Hiramatsu, Alan Maguire

On Wed, Jul 7, 2021 at 2:53 PM Jiri Olsa <jolsa@redhat.com> wrote:
>
> Adding bpf_get_func_ip helper for BPF_PROG_TYPE_TRACING programs,
> specifically for all trampoline attach types.
>
> The trampoline's caller IP address is stored at address (ctx - 8),
> so there's no need to actually call the helper; instead the verifier
> fixes up the call instruction and returns the [ctx - 8] value
> directly (suggested by Alexei).
>
> [fixed has_get_func_ip wrong return type]
> Reported-by: kernel test robot <lkp@intel.com>
> Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
>  include/uapi/linux/bpf.h       |  7 +++++
>  kernel/bpf/verifier.c          | 53 ++++++++++++++++++++++++++++++++++
>  kernel/trace/bpf_trace.c       | 15 ++++++++++
>  tools/include/uapi/linux/bpf.h |  7 +++++
>  4 files changed, 82 insertions(+)
>

[...]

>  static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
>                              int *insn_idx_p)
>  {
> @@ -6225,6 +6256,12 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
>         if (func_id == BPF_FUNC_get_stackid || func_id == BPF_FUNC_get_stack)
>                 env->prog->call_get_stack = true;
>
> +       if (func_id == BPF_FUNC_get_func_ip) {
> +               if (has_get_func_ip(env))

from has_xxx name I'd expect it returns true/false, so this reads
super confusing. check_get_func_ip would be a bit more consistent with
other cases like this (still reads confusing to me, but that's ok)

> +                       return -ENOTSUPP;
> +               env->prog->call_get_func_ip = true;
> +       }
> +
>         if (changes_data)
>                 clear_all_pkt_pointers(env);
>         return 0;

[...]


* Re: [PATCHv3 bpf-next 5/7] selftests/bpf: Add test for bpf_get_func_ip helper
  2021-07-07 21:47 ` [PATCHv3 bpf-next 5/7] selftests/bpf: Add test for bpf_get_func_ip helper Jiri Olsa
@ 2021-07-08  0:12   ` Andrii Nakryiko
  2021-07-11 14:48     ` Jiri Olsa
  0 siblings, 1 reply; 22+ messages in thread
From: Andrii Nakryiko @ 2021-07-08  0:12 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Networking,
	bpf, Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Masami Hiramatsu, Alan Maguire

On Wed, Jul 7, 2021 at 2:54 PM Jiri Olsa <jolsa@redhat.com> wrote:
>
> Adding test for bpf_get_func_ip helper for fentry, fexit,
> kprobe, kretprobe and fmod_ret programs.
>
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
>  .../bpf/prog_tests/get_func_ip_test.c         | 42 +++++++++++++
>  .../selftests/bpf/progs/get_func_ip_test.c    | 62 +++++++++++++++++++
>  2 files changed, 104 insertions(+)
>  create mode 100644 tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
>  create mode 100644 tools/testing/selftests/bpf/progs/get_func_ip_test.c
>

[...]

> +       ASSERT_OK(err, "test_run");
> +
> +       result = (__u64 *)skel->bss;
> +       for (i = 0; i < sizeof(*skel->bss) / sizeof(__u64); i++) {
> +               if (!ASSERT_EQ(result[i], 1, "fentry_result"))
> +                       break;
> +       }

I dislike such generic loop over results. It's super error prone and
takes the same 5 lines of code that you'd write for explicit

ASSERT_EQ(testX_result, 1, "testX_result"); /* 5 times */

> +
> +       get_func_ip_test__detach(skel);

no need to explicitly detach, __destroy does that automatically

> +
> +cleanup:
> +       get_func_ip_test__destroy(skel);
> +}

[...]


* Re: [PATCHv3 bpf-next 6/7] libbpf: allow specification of "kprobe/function+offset"
  2021-07-07 21:47 ` [PATCHv3 bpf-next 6/7] libbpf: allow specification of "kprobe/function+offset" Jiri Olsa
@ 2021-07-08  0:14   ` Andrii Nakryiko
  2021-07-11 14:48     ` Jiri Olsa
  0 siblings, 1 reply; 22+ messages in thread
From: Andrii Nakryiko @ 2021-07-08  0:14 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Networking,
	bpf, Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Masami Hiramatsu, Alan Maguire

On Wed, Jul 7, 2021 at 2:54 PM Jiri Olsa <jolsa@redhat.com> wrote:
>
> From: Alan Maguire <alan.maguire@oracle.com>
>
> kprobes can be placed on most instructions in a function, not
> just entry, and ftrace and bpftrace support the function+offset
> notation for probe placement.  Adding parsing of func_name
> into func+offset to bpf_program__attach_kprobe() allows the
> user to specify
>
> SEC("kprobe/bpf_fentry_test5+0x6")
>
> ...for example, and the offset can be passed to perf_event_open_probe()
> to support kprobe attachment.
>
> Signed-off-by: Alan Maguire <alan.maguire@oracle.com>
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
>  tools/lib/bpf/libbpf.c | 20 +++++++++++++++++---
>  1 file changed, 17 insertions(+), 3 deletions(-)
>
> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> index 1e04ce724240..60c9e3e77684 100644
> --- a/tools/lib/bpf/libbpf.c
> +++ b/tools/lib/bpf/libbpf.c
> @@ -10309,11 +10309,25 @@ struct bpf_link *bpf_program__attach_kprobe(struct bpf_program *prog,
>                                             const char *func_name)

I think we should add bpf_program__attach_kprobe_opts instead for the
programmatic API instead of parsing it here from func_name. It's a
cumbersome API.

Parsing SEC() is fine, of course, but then it has to call into
bpf_program__attach_kprobe_opts() internally.

>  {
>         char errmsg[STRERR_BUFSIZE];
> +       char func[BPF_OBJ_NAME_LEN];
> +       unsigned long offset = 0;
>         struct bpf_link *link;
> -       int pfd, err;
> +       int pfd, err, n;
> +
> +       n = sscanf(func_name, "%[a-zA-Z0-9_.]+%lx", func, &offset);
> +       if (n < 1) {
> +               err = -EINVAL;
> +               pr_warn("kprobe name is invalid: %s\n", func_name);
> +               return libbpf_err_ptr(err);
> +       }
> +       if (retprobe && offset != 0) {
> +               err = -EINVAL;
> +               pr_warn("kretprobes do not support offset specification\n");
> +               return libbpf_err_ptr(err);
> +       }
>
> -       pfd = perf_event_open_probe(false /* uprobe */, retprobe, func_name,
> -                                   0 /* offset */, -1 /* pid */);
> +       pfd = perf_event_open_probe(false /* uprobe */, retprobe, func,
> +                                   offset, -1 /* pid */);
>         if (pfd < 0) {
>                 pr_warn("prog '%s': failed to create %s '%s' perf event: %s\n",
>                         prog->name, retprobe ? "kretprobe" : "kprobe", func_name,
> --
> 2.31.1
>


* Re: [PATCHv3 bpf-next 7/7] selftests/bpf: Add test for bpf_get_func_ip in kprobe+offset probe
  2021-07-07 21:47 ` [PATCHv3 bpf-next 7/7] selftests/bpf: Add test for bpf_get_func_ip in kprobe+offset probe Jiri Olsa
@ 2021-07-08  0:18   ` Andrii Nakryiko
  2021-07-11 14:48     ` Jiri Olsa
  0 siblings, 1 reply; 22+ messages in thread
From: Andrii Nakryiko @ 2021-07-08  0:18 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Networking,
	bpf, Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Masami Hiramatsu, Alan Maguire

On Wed, Jul 7, 2021 at 2:54 PM Jiri Olsa <jolsa@redhat.com> wrote:
>
> Adding test for bpf_get_func_ip in kprobe+ofset probe.

typo: offset

> Because the offset value is arch specific, adding
> it only for x86_64 architecture.

I'm not following, you specified +0x5 offset explicitly, why is this
arch-specific?

>
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
>  .../testing/selftests/bpf/progs/get_func_ip_test.c  | 13 +++++++++++++
>  1 file changed, 13 insertions(+)
>
> diff --git a/tools/testing/selftests/bpf/progs/get_func_ip_test.c b/tools/testing/selftests/bpf/progs/get_func_ip_test.c
> index 8ca54390d2b1..e8a9428a0ea3 100644
> --- a/tools/testing/selftests/bpf/progs/get_func_ip_test.c
> +++ b/tools/testing/selftests/bpf/progs/get_func_ip_test.c
> @@ -10,6 +10,7 @@ extern const void bpf_fentry_test2 __ksym;
>  extern const void bpf_fentry_test3 __ksym;
>  extern const void bpf_fentry_test4 __ksym;
>  extern const void bpf_modify_return_test __ksym;
> +extern const void bpf_fentry_test6 __ksym;
>
>  __u64 test1_result = 0;
>  SEC("fentry/bpf_fentry_test1")
> @@ -60,3 +61,15 @@ int BPF_PROG(fmod_ret_test, int a, int *b, int ret)
>         test5_result = (const void *) addr == &bpf_modify_return_test;
>         return ret;
>  }
> +
> +#ifdef __x86_64__
> +__u64 test6_result = 0;

see, and you just forgot to update the user-space part of the test to
even check test6_result...

please group variables together and do explicit ASSERT_EQ

> +SEC("kprobe/bpf_fentry_test6+0x5")
> +int test6(struct pt_regs *ctx)
> +{
> +       __u64 addr = bpf_get_func_ip(ctx);
> +
> +       test6_result = (const void *) addr == &bpf_fentry_test6 + 5;
> +       return 0;
> +}
> +#endif
> --
> 2.31.1
>


* Re: [PATCHv3 bpf-next 3/7] bpf: Add bpf_get_func_ip helper for tracing programs
  2021-07-07 21:47 ` [PATCHv3 bpf-next 3/7] bpf: Add bpf_get_func_ip helper for tracing programs Jiri Olsa
  2021-07-08  0:06   ` Andrii Nakryiko
@ 2021-07-08  2:11   ` Alexei Starovoitov
  2021-07-11 14:48     ` Jiri Olsa
  1 sibling, 1 reply; 22+ messages in thread
From: Alexei Starovoitov @ 2021-07-08  2:11 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	kernel test robot, Dan Carpenter, netdev, bpf, Martin KaFai Lau,
	Song Liu, Yonghong Song, John Fastabend, KP Singh,
	Masami Hiramatsu, Alan Maguire

On Wed, Jul 07, 2021 at 11:47:47PM +0200, Jiri Olsa wrote:
>  
> +static bool allow_get_func_ip_tracing(struct bpf_verifier_env *env)
> +{
> +	return env->prog->jit_requested && IS_ENABLED(CONFIG_X86_64);

Why does it have to be gated by 'jited && x86_64' ?
It's gated by bpf trampoline and it's only implemented on x86_64 so far.
The trampoline has plenty of features. I would expect bpf trampoline
for arm64 to implement all of them. If not the func_ip would be just
one of the trampoline features that couldn't be implemented and at that
time we'd need a flag mask of a sort, but I'd rather push for feature
equivalence between trampoline implementations.

Then the jited part also doesn't seem to be necessary.
The trampoline passed a pointer to the stack in R1.
Interpreter should deal with BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8) insn
the same way and it should work, since trampoline prepared it.
What did I miss?

> +static int has_get_func_ip(struct bpf_verifier_env *env)
> +{
> +	enum bpf_attach_type eatype = env->prog->expected_attach_type;
> +	enum bpf_prog_type type = resolve_prog_type(env->prog);
> +	int func_id = BPF_FUNC_get_func_ip;
> +
> +	if (type == BPF_PROG_TYPE_TRACING) {
> +		if (eatype != BPF_TRACE_FENTRY && eatype != BPF_TRACE_FEXIT &&
> +		    eatype != BPF_MODIFY_RETURN) {
> +			verbose(env, "func %s#%d supported only for fentry/fexit/fmod_ret programs\n",
> +				func_id_name(func_id), func_id);
> +			return -ENOTSUPP;
> +		}
> +		if (!allow_get_func_ip_tracing(env)) {
> +			verbose(env, "func %s#%d for tracing programs supported only for JITed x86_64\n",
> +				func_id_name(func_id), func_id);
> +			return -ENOTSUPP;
> +		}
> +		return 0;
> +	}
> +
> +	verbose(env, "func %s#%d not supported for program type %d\n",
> +		func_id_name(func_id), func_id, type);
> +	return -ENOTSUPP;
> +}
> +
>  static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
>  			     int *insn_idx_p)
>  {
> @@ -6225,6 +6256,12 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
>  	if (func_id == BPF_FUNC_get_stackid || func_id == BPF_FUNC_get_stack)
>  		env->prog->call_get_stack = true;
>  
> +	if (func_id == BPF_FUNC_get_func_ip) {
> +		if (has_get_func_ip(env))
> +			return -ENOTSUPP;
> +		env->prog->call_get_func_ip = true;
> +	}
> +
>  	if (changes_data)
>  		clear_all_pkt_pointers(env);
>  	return 0;
> @@ -12369,6 +12406,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
>  {
>  	struct bpf_prog *prog = env->prog;
>  	bool expect_blinding = bpf_jit_blinding_enabled(prog);
> +	enum bpf_prog_type prog_type = resolve_prog_type(prog);
>  	struct bpf_insn *insn = prog->insnsi;
>  	const struct bpf_func_proto *fn;
>  	const int insn_cnt = prog->len;
> @@ -12702,6 +12740,21 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
>  			continue;
>  		}
>  
> +		/* Implement bpf_get_func_ip inline. */
> +		if (prog_type == BPF_PROG_TYPE_TRACING &&
> +		    insn->imm == BPF_FUNC_get_func_ip) {
> +			/* Load IP address from ctx - 8 */
> +			insn_buf[0] = BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8);
> +
> +			new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, 1);
> +			if (!new_prog)
> +				return -ENOMEM;
> +
> +			env->prog = prog = new_prog;
> +			insn      = new_prog->insnsi + i + delta;
> +			continue;
> +		}
> +
>  patch_call_imm:
>  		fn = env->ops->get_func_proto(insn->imm, env->prog);
>  		/* all functions that have prototype and verifier allowed
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index 64bd2d84367f..9edd3b1a00ad 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -948,6 +948,19 @@ const struct bpf_func_proto bpf_snprintf_btf_proto = {
>  	.arg5_type	= ARG_ANYTHING,
>  };
>  
> +BPF_CALL_1(bpf_get_func_ip_tracing, void *, ctx)
> +{
> +	/* Stub, the helper call is inlined in the program. */
> +	return 0;
> +}

may be add a WARN in here that it should never be executed ?
Or may be add an actual implementation:
 return ((u64 *)ctx)[-1];
and check that it works without inlining by the verifier?


* Re: [PATCHv3 bpf-next 4/7] bpf: Add bpf_get_func_ip helper for kprobe programs
  2021-07-07 21:47 ` [PATCHv3 bpf-next 4/7] bpf: Add bpf_get_func_ip helper for kprobe programs Jiri Olsa
@ 2021-07-10  7:55   ` Masami Hiramatsu
  2021-07-11 14:48     ` Jiri Olsa
  0 siblings, 1 reply; 22+ messages in thread
From: Masami Hiramatsu @ 2021-07-10  7:55 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, netdev,
	bpf, Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Masami Hiramatsu, Alan Maguire

On Wed,  7 Jul 2021 23:47:48 +0200
Jiri Olsa <jolsa@redhat.com> wrote:

> Adding bpf_get_func_ip helper for BPF_PROG_TYPE_KPROBE programs,
> so it's now possible to call bpf_get_func_ip from both kprobe and
> kretprobe programs.
> 
> Taking the caller's address from 'struct kprobe::addr', which is
> defined for both kprobe and kretprobe.
> 

Note that the kp->addr of kretprobe will be the callee function
address, even if the handler is called at the end of the callee.

Anyway, this looks good to me.

Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>

Thank you,

> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
>  include/uapi/linux/bpf.h       |  2 +-
>  kernel/bpf/verifier.c          |  2 ++
>  kernel/trace/bpf_trace.c       | 17 +++++++++++++++++
>  tools/include/uapi/linux/bpf.h |  2 +-
>  4 files changed, 21 insertions(+), 2 deletions(-)
> 
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index 83e87ffdbb6e..4894f99a1993 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -4783,7 +4783,7 @@ union bpf_attr {
>   *
>   * u64 bpf_get_func_ip(void *ctx)
>   * 	Description
> - * 		Get address of the traced function (for tracing programs).
> + * 		Get address of the traced function (for tracing and kprobe programs).
>   * 	Return
>   * 		Address of the traced function.
>   */
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index f975a3aa9368..79eb9d81a198 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -5979,6 +5979,8 @@ static int has_get_func_ip(struct bpf_verifier_env *env)
>  			return -ENOTSUPP;
>  		}
>  		return 0;
> +	} else if (type == BPF_PROG_TYPE_KPROBE) {
> +		return 0;
>  	}
>  
>  	verbose(env, "func %s#%d not supported for program type %d\n",
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index 9edd3b1a00ad..55acf56b0c3a 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -17,6 +17,7 @@
>  #include <linux/error-injection.h>
>  #include <linux/btf_ids.h>
>  #include <linux/bpf_lsm.h>
> +#include <linux/kprobes.h>
>  
>  #include <net/bpf_sk_storage.h>
>  
> @@ -961,6 +962,20 @@ static const struct bpf_func_proto bpf_get_func_ip_proto_tracing = {
>  	.arg1_type	= ARG_PTR_TO_CTX,
>  };
>  
> +BPF_CALL_1(bpf_get_func_ip_kprobe, struct pt_regs *, regs)
> +{
> +	struct kprobe *kp = kprobe_running();
> +
> +	return kp ? (u64) kp->addr : 0;
> +}
> +
> +static const struct bpf_func_proto bpf_get_func_ip_proto_kprobe = {
> +	.func		= bpf_get_func_ip_kprobe,
> +	.gpl_only	= true,
> +	.ret_type	= RET_INTEGER,
> +	.arg1_type	= ARG_PTR_TO_CTX,
> +};
> +
>  const struct bpf_func_proto *
>  bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
>  {
> @@ -1092,6 +1107,8 @@ kprobe_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
>  	case BPF_FUNC_override_return:
>  		return &bpf_override_return_proto;
>  #endif
> +	case BPF_FUNC_get_func_ip:
> +		return &bpf_get_func_ip_proto_kprobe;
>  	default:
>  		return bpf_tracing_func_proto(func_id, prog);
>  	}
> diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
> index 83e87ffdbb6e..4894f99a1993 100644
> --- a/tools/include/uapi/linux/bpf.h
> +++ b/tools/include/uapi/linux/bpf.h
> @@ -4783,7 +4783,7 @@ union bpf_attr {
>   *
>   * u64 bpf_get_func_ip(void *ctx)
>   * 	Description
> - * 		Get address of the traced function (for tracing programs).
> + * 		Get address of the traced function (for tracing and kprobe programs).
>   * 	Return
>   * 		Address of the traced function.
>   */
> -- 
> 2.31.1
> 


-- 
Masami Hiramatsu <mhiramat@kernel.org>


* Re: [PATCHv3 bpf-next 3/7] bpf: Add bpf_get_func_ip helper for tracing programs
  2021-07-08  2:11   ` Alexei Starovoitov
@ 2021-07-11 14:48     ` Jiri Olsa
  0 siblings, 0 replies; 22+ messages in thread
From: Jiri Olsa @ 2021-07-11 14:48 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	kernel test robot, Dan Carpenter, netdev, bpf, Martin KaFai Lau,
	Song Liu, Yonghong Song, John Fastabend, KP Singh,
	Masami Hiramatsu, Alan Maguire

On Wed, Jul 07, 2021 at 07:11:23PM -0700, Alexei Starovoitov wrote:
> On Wed, Jul 07, 2021 at 11:47:47PM +0200, Jiri Olsa wrote:
> >  
> > +static bool allow_get_func_ip_tracing(struct bpf_verifier_env *env)
> > +{
> > +	return env->prog->jit_requested && IS_ENABLED(CONFIG_X86_64);
> 
> Why does it have to be gated by 'jited && x86_64' ?
> It's gated by bpf trampoline and it's only implemented on x86_64 so far.
> The trampoline has plenty of features. I would expect bpf trampoline
> for arm64 to implement all of them. If not the func_ip would be just
> one of the trampoline features that couldn't be implemented and at that
> time we'd need a flag mask of a sort, but I'd rather push for feature
> equivalence between trampoline implementations.

ok, check for trampoline's prog types should be enough

> 
> Then the jited part also doesn't seem to be necessary.
> The trampoline passed a pointer to the stack in R1.
> Interpreter should deal with BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8) insn
> the same way and it should work, since trampoline prepared it.
> What did I miss?

ah right.. will remove that

SNIP

> > diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> > index 64bd2d84367f..9edd3b1a00ad 100644
> > --- a/kernel/trace/bpf_trace.c
> > +++ b/kernel/trace/bpf_trace.c
> > @@ -948,6 +948,19 @@ const struct bpf_func_proto bpf_snprintf_btf_proto = {
> >  	.arg5_type	= ARG_ANYTHING,
> >  };
> >  
> > +BPF_CALL_1(bpf_get_func_ip_tracing, void *, ctx)
> > +{
> > +	/* Stub, the helper call is inlined in the program. */
> > +	return 0;
> > +}
> 
> may be add a WARN in here that it should never be executed ?
> Or may be add an actual implementation:
>  return ((u64 *)ctx)[-1];
> and check that it works without inlining by the verifier?
> 

sure, but for a tracing program with this helper, it will always
be inlined, right? I can't see how it could be skipped

thanks,
jirka



* Re: [PATCHv3 bpf-next 4/7] bpf: Add bpf_get_func_ip helper for kprobe programs
  2021-07-10  7:55   ` Masami Hiramatsu
@ 2021-07-11 14:48     ` Jiri Olsa
  0 siblings, 0 replies; 22+ messages in thread
From: Jiri Olsa @ 2021-07-11 14:48 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, netdev,
	bpf, Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Alan Maguire

On Sat, Jul 10, 2021 at 04:55:12PM +0900, Masami Hiramatsu wrote:
> On Wed,  7 Jul 2021 23:47:48 +0200
> Jiri Olsa <jolsa@redhat.com> wrote:
> 
> > Adding bpf_get_func_ip helper for BPF_PROG_TYPE_KPROBE programs,
> > so it's now possible to call bpf_get_func_ip from both kprobe and
> > kretprobe programs.
> > 
> > Taking the caller's address from 'struct kprobe::addr', which is
> > defined for both kprobe and kretprobe.
> > 
> 
> Note that the kp->addr of kretprobe will be the callee function
> address, even if the handler is called at the end of the callee.

yes, that's what we need

> 
> Anyway, this looks good to me.
> 
> Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>

thanks,
jirka

> 
> Thank you,
> 
> > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > ---
> >  include/uapi/linux/bpf.h       |  2 +-
> >  kernel/bpf/verifier.c          |  2 ++
> >  kernel/trace/bpf_trace.c       | 17 +++++++++++++++++
> >  tools/include/uapi/linux/bpf.h |  2 +-
> >  4 files changed, 21 insertions(+), 2 deletions(-)
> > 
> > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > index 83e87ffdbb6e..4894f99a1993 100644
> > --- a/include/uapi/linux/bpf.h
> > +++ b/include/uapi/linux/bpf.h
> > @@ -4783,7 +4783,7 @@ union bpf_attr {
> >   *
> >   * u64 bpf_get_func_ip(void *ctx)
> >   * 	Description
> > - * 		Get address of the traced function (for tracing programs).
> > + * 		Get address of the traced function (for tracing and kprobe programs).
> >   * 	Return
> >   * 		Address of the traced function.
> >   */
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index f975a3aa9368..79eb9d81a198 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -5979,6 +5979,8 @@ static int has_get_func_ip(struct bpf_verifier_env *env)
> >  			return -ENOTSUPP;
> >  		}
> >  		return 0;
> > +	} else if (type == BPF_PROG_TYPE_KPROBE) {
> > +		return 0;
> >  	}
> >  
> >  	verbose(env, "func %s#%d not supported for program type %d\n",
> > diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> > index 9edd3b1a00ad..55acf56b0c3a 100644
> > --- a/kernel/trace/bpf_trace.c
> > +++ b/kernel/trace/bpf_trace.c
> > @@ -17,6 +17,7 @@
> >  #include <linux/error-injection.h>
> >  #include <linux/btf_ids.h>
> >  #include <linux/bpf_lsm.h>
> > +#include <linux/kprobes.h>
> >  
> >  #include <net/bpf_sk_storage.h>
> >  
> > @@ -961,6 +962,20 @@ static const struct bpf_func_proto bpf_get_func_ip_proto_tracing = {
> >  	.arg1_type	= ARG_PTR_TO_CTX,
> >  };
> >  
> > +BPF_CALL_1(bpf_get_func_ip_kprobe, struct pt_regs *, regs)
> > +{
> > +	struct kprobe *kp = kprobe_running();
> > +
> > +	return kp ? (u64) kp->addr : 0;
> > +}
> > +
> > +static const struct bpf_func_proto bpf_get_func_ip_proto_kprobe = {
> > +	.func		= bpf_get_func_ip_kprobe,
> > +	.gpl_only	= true,
> > +	.ret_type	= RET_INTEGER,
> > +	.arg1_type	= ARG_PTR_TO_CTX,
> > +};
> > +
> >  const struct bpf_func_proto *
> >  bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
> >  {
> > @@ -1092,6 +1107,8 @@ kprobe_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
> >  	case BPF_FUNC_override_return:
> >  		return &bpf_override_return_proto;
> >  #endif
> > +	case BPF_FUNC_get_func_ip:
> > +		return &bpf_get_func_ip_proto_kprobe;
> >  	default:
> >  		return bpf_tracing_func_proto(func_id, prog);
> >  	}
> > diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
> > index 83e87ffdbb6e..4894f99a1993 100644
> > --- a/tools/include/uapi/linux/bpf.h
> > +++ b/tools/include/uapi/linux/bpf.h
> > @@ -4783,7 +4783,7 @@ union bpf_attr {
> >   *
> >   * u64 bpf_get_func_ip(void *ctx)
> >   * 	Description
> > - * 		Get address of the traced function (for tracing programs).
> > + * 		Get address of the traced function (for tracing and kprobe programs).
> >   * 	Return
> >   * 		Address of the traced function.
> >   */
> > -- 
> > 2.31.1
> > 
> 
> 
> -- 
> Masami Hiramatsu <mhiramat@kernel.org>
> 



* Re: [PATCHv3 bpf-next 7/7] selftests/bpf: Add test for bpf_get_func_ip in kprobe+offset probe
  2021-07-08  0:18   ` Andrii Nakryiko
@ 2021-07-11 14:48     ` Jiri Olsa
  2021-07-12 23:32       ` Andrii Nakryiko
  0 siblings, 1 reply; 22+ messages in thread
From: Jiri Olsa @ 2021-07-11 14:48 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Networking,
	bpf, Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Masami Hiramatsu, Alan Maguire

On Wed, Jul 07, 2021 at 05:18:49PM -0700, Andrii Nakryiko wrote:
> On Wed, Jul 7, 2021 at 2:54 PM Jiri Olsa <jolsa@redhat.com> wrote:
> >
> > Adding test for bpf_get_func_ip in kprobe+ofset probe.
> 
> typo: offset
> 
> > Because the offset value is arch specific, adding
> > it only for x86_64 architecture.
> 
> I'm not following, you specified +0x5 offset explicitly, why is this
> arch-specific?

I need some instruction at offset != 0 in the traced function;
x86_64's fentry jump is 5 bytes, and other archs will be different

> 
> >
> > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > ---
> >  .../testing/selftests/bpf/progs/get_func_ip_test.c  | 13 +++++++++++++
> >  1 file changed, 13 insertions(+)
> >
> > diff --git a/tools/testing/selftests/bpf/progs/get_func_ip_test.c b/tools/testing/selftests/bpf/progs/get_func_ip_test.c
> > index 8ca54390d2b1..e8a9428a0ea3 100644
> > --- a/tools/testing/selftests/bpf/progs/get_func_ip_test.c
> > +++ b/tools/testing/selftests/bpf/progs/get_func_ip_test.c
> > @@ -10,6 +10,7 @@ extern const void bpf_fentry_test2 __ksym;
> >  extern const void bpf_fentry_test3 __ksym;
> >  extern const void bpf_fentry_test4 __ksym;
> >  extern const void bpf_modify_return_test __ksym;
> > +extern const void bpf_fentry_test6 __ksym;
> >
> >  __u64 test1_result = 0;
> >  SEC("fentry/bpf_fentry_test1")
> > @@ -60,3 +61,15 @@ int BPF_PROG(fmod_ret_test, int a, int *b, int ret)
> >         test5_result = (const void *) addr == &bpf_modify_return_test;
> >         return ret;
> >  }
> > +
> > +#ifdef __x86_64__
> > +__u64 test6_result = 0;
> 
> see, and you just forgot to update the user-space part of the test to
> even check test6_result...
> 
> please group variables together and do explicit ASSERT_EQ

right.. will change

thanks,
jirka

> 
> > +SEC("kprobe/bpf_fentry_test6+0x5")
> > +int test6(struct pt_regs *ctx)
> > +{
> > +       __u64 addr = bpf_get_func_ip(ctx);
> > +
> > +       test6_result = (const void *) addr == &bpf_fentry_test6 + 5;
> > +       return 0;
> > +}
> > +#endif
> > --
> > 2.31.1
> >
> 



* Re: [PATCHv3 bpf-next 3/7] bpf: Add bpf_get_func_ip helper for tracing programs
  2021-07-08  0:06   ` Andrii Nakryiko
@ 2021-07-11 14:48     ` Jiri Olsa
  0 siblings, 0 replies; 22+ messages in thread
From: Jiri Olsa @ 2021-07-11 14:48 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	kernel test robot, Dan Carpenter, Networking, bpf,
	Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Masami Hiramatsu, Alan Maguire

On Wed, Jul 07, 2021 at 05:06:17PM -0700, Andrii Nakryiko wrote:
> On Wed, Jul 7, 2021 at 2:53 PM Jiri Olsa <jolsa@redhat.com> wrote:
> >
> > Adding bpf_get_func_ip helper for BPF_PROG_TYPE_TRACING programs,
> > specifically for all trampoline attach types.
> >
> > > The trampoline's caller IP address is stored at the (ctx - 8) address,
> > so there's no reason to actually call the helper, but rather fixup
> > the call instruction and return [ctx - 8] value directly (suggested
> > by Alexei).
> >
> > [fixed has_get_func_ip wrong return type]
> > Reported-by: kernel test robot <lkp@intel.com>
> > Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
> > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > ---
> >  include/uapi/linux/bpf.h       |  7 +++++
> >  kernel/bpf/verifier.c          | 53 ++++++++++++++++++++++++++++++++++
> >  kernel/trace/bpf_trace.c       | 15 ++++++++++
> >  tools/include/uapi/linux/bpf.h |  7 +++++
> >  4 files changed, 82 insertions(+)
> >
> 
> [...]
> 
> >  static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
> >                              int *insn_idx_p)
> >  {
> > @@ -6225,6 +6256,12 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
> >         if (func_id == BPF_FUNC_get_stackid || func_id == BPF_FUNC_get_stack)
> >                 env->prog->call_get_stack = true;
> >
> > +       if (func_id == BPF_FUNC_get_func_ip) {
> > +               if (has_get_func_ip(env))
> 
> from has_xxx name I'd expect it returns true/false, so this reads
> super confusing. check_get_func_ip would be a bit more consistent with
> other cases like this (still reads confusing to me, but that's ok)

ok, will change

jirka

> 
> > +                       return -ENOTSUPP;
> > +               env->prog->call_get_func_ip = true;
> > +       }
> > +
> >         if (changes_data)
> >                 clear_all_pkt_pointers(env);
> >         return 0;
> 
> [...]
> 



* Re: [PATCHv3 bpf-next 5/7] selftests/bpf: Add test for bpf_get_func_ip helper
  2021-07-08  0:12   ` Andrii Nakryiko
@ 2021-07-11 14:48     ` Jiri Olsa
  0 siblings, 0 replies; 22+ messages in thread
From: Jiri Olsa @ 2021-07-11 14:48 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Networking,
	bpf, Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Masami Hiramatsu, Alan Maguire

On Wed, Jul 07, 2021 at 05:12:07PM -0700, Andrii Nakryiko wrote:
> On Wed, Jul 7, 2021 at 2:54 PM Jiri Olsa <jolsa@redhat.com> wrote:
> >
> > Adding test for bpf_get_func_ip helper for fentry, fexit,
> > kprobe, kretprobe and fmod_ret programs.
> >
> > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > ---
> >  .../bpf/prog_tests/get_func_ip_test.c         | 42 +++++++++++++
> >  .../selftests/bpf/progs/get_func_ip_test.c    | 62 +++++++++++++++++++
> >  2 files changed, 104 insertions(+)
> >  create mode 100644 tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/get_func_ip_test.c
> >
> 
> [...]
> 
> > +       ASSERT_OK(err, "test_run");
> > +
> > +       result = (__u64 *)skel->bss;
> > +       for (i = 0; i < sizeof(*skel->bss) / sizeof(__u64); i++) {
> > +               if (!ASSERT_EQ(result[i], 1, "fentry_result"))
> > +                       break;
> > +       }
> 
> I dislike such generic loop over results. It's super error prone and
> takes the same 5 lines of code that you'd write for explicit
> 
> ASSERT_EQ(testX_result, 1, "testX_result"); /* 5 times */

ok

> 
> > +
> > +       get_func_ip_test__detach(skel);
> 
> no need to explicitly detach, __destroy does that automatically

I see, will remove that

thanks,
jirka

> 
> > +
> > +cleanup:
> > +       get_func_ip_test__destroy(skel);
> > +}
> 
> [...]
> 



* Re: [PATCHv3 bpf-next 6/7] libbpf: allow specification of "kprobe/function+offset"
  2021-07-08  0:14   ` Andrii Nakryiko
@ 2021-07-11 14:48     ` Jiri Olsa
  0 siblings, 0 replies; 22+ messages in thread
From: Jiri Olsa @ 2021-07-11 14:48 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Networking,
	bpf, Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Masami Hiramatsu, Alan Maguire

On Wed, Jul 07, 2021 at 05:14:20PM -0700, Andrii Nakryiko wrote:
> On Wed, Jul 7, 2021 at 2:54 PM Jiri Olsa <jolsa@redhat.com> wrote:
> >
> > From: Alan Maguire <alan.maguire@oracle.com>
> >
> > kprobes can be placed on most instructions in a function, not
> > just entry, and ftrace and bpftrace support the function+offset
> > notification for probe placement.  Adding parsing of func_name
> > into func+offset to bpf_program__attach_kprobe() allows the
> > user to specify
> >
> > SEC("kprobe/bpf_fentry_test5+0x6")
> >
> > ...for example, and the offset can be passed to perf_event_open_probe()
> > to support kprobe attachment.
> >
> > Signed-off-by: Alan Maguire <alan.maguire@oracle.com>
> > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > ---
> >  tools/lib/bpf/libbpf.c | 20 +++++++++++++++++---
> >  1 file changed, 17 insertions(+), 3 deletions(-)
> >
> > diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> > index 1e04ce724240..60c9e3e77684 100644
> > --- a/tools/lib/bpf/libbpf.c
> > +++ b/tools/lib/bpf/libbpf.c
> > @@ -10309,11 +10309,25 @@ struct bpf_link *bpf_program__attach_kprobe(struct bpf_program *prog,
> >                                             const char *func_name)
> 
> I think we should add bpf_program__attach_kprobe_opts instead for the
> programmatic API instead of parsing it here from func_name. It's a
> cumbersome API.
> 
> Parsing SEC() is fine, of course, but then it has to call into
> bpf_program__attach_kprobe_opts() internally.

ok, Alan, will you make the change, or should I do that?

thanks,
jirka

> 
> >  {
> >         char errmsg[STRERR_BUFSIZE];
> > +       char func[BPF_OBJ_NAME_LEN];
> > +       unsigned long offset = 0;
> >         struct bpf_link *link;
> > -       int pfd, err;
> > +       int pfd, err, n;
> > +
> > +       n = sscanf(func_name, "%[a-zA-Z0-9_.]+%lx", func, &offset);
> > +       if (n < 1) {
> > +               err = -EINVAL;
> > +               pr_warn("kprobe name is invalid: %s\n", func_name);
> > +               return libbpf_err_ptr(err);
> > +       }
> > +       if (retprobe && offset != 0) {
> > +               err = -EINVAL;
> > +               pr_warn("kretprobes do not support offset specification\n");
> > +               return libbpf_err_ptr(err);
> > +       }
> >
> > -       pfd = perf_event_open_probe(false /* uprobe */, retprobe, func_name,
> > -                                   0 /* offset */, -1 /* pid */);
> > +       pfd = perf_event_open_probe(false /* uprobe */, retprobe, func,
> > +                                   offset, -1 /* pid */);
> >         if (pfd < 0) {
> >                 pr_warn("prog '%s': failed to create %s '%s' perf event: %s\n",
> >                         prog->name, retprobe ? "kretprobe" : "kprobe", func_name,
> > --
> > 2.31.1
> >
> 



* Re: [PATCHv3 bpf-next 7/7] selftests/bpf: Add test for bpf_get_func_ip in kprobe+offset probe
  2021-07-11 14:48     ` Jiri Olsa
@ 2021-07-12 23:32       ` Andrii Nakryiko
  2021-07-13 14:15         ` Jiri Olsa
  0 siblings, 1 reply; 22+ messages in thread
From: Andrii Nakryiko @ 2021-07-12 23:32 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Networking,
	bpf, Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Masami Hiramatsu, Alan Maguire

On Sun, Jul 11, 2021 at 7:48 AM Jiri Olsa <jolsa@redhat.com> wrote:
>
> On Wed, Jul 07, 2021 at 05:18:49PM -0700, Andrii Nakryiko wrote:
> > On Wed, Jul 7, 2021 at 2:54 PM Jiri Olsa <jolsa@redhat.com> wrote:
> > >
> > > Adding test for bpf_get_func_ip in kprobe+ofset probe.
> >
> > typo: offset
> >
> > > Because of the offset value it's arch specific, adding
> > > it only for x86_64 architecture.
> >
> > I'm not following, you specified +0x5 offset explicitly, why is this
> > arch-specific?
>
I need an instruction at offset != 0 in the traced function;
x86_64's fentry jump is 5 bytes, other archs will be different

Right, ok. I don't see an easy way to detect this offset, but the
#ifdef __x86_64__ detection doesn't work because we are compiling with
-target bpf. Please double-check that it actually worked in the first
place.

I think a better way would be to have test6 defined unconditionally in
BPF code, but then disable loading test6 program on anything but
x86_64 platform at runtime with bpf_program__set_autoload(false).

>
> >
> > >
> > > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > > ---
> > >  .../testing/selftests/bpf/progs/get_func_ip_test.c  | 13 +++++++++++++
> > >  1 file changed, 13 insertions(+)
> > >
> > > diff --git a/tools/testing/selftests/bpf/progs/get_func_ip_test.c b/tools/testing/selftests/bpf/progs/get_func_ip_test.c
> > > index 8ca54390d2b1..e8a9428a0ea3 100644
> > > --- a/tools/testing/selftests/bpf/progs/get_func_ip_test.c
> > > +++ b/tools/testing/selftests/bpf/progs/get_func_ip_test.c
> > > @@ -10,6 +10,7 @@ extern const void bpf_fentry_test2 __ksym;
> > >  extern const void bpf_fentry_test3 __ksym;
> > >  extern const void bpf_fentry_test4 __ksym;
> > >  extern const void bpf_modify_return_test __ksym;
> > > +extern const void bpf_fentry_test6 __ksym;
> > >
> > >  __u64 test1_result = 0;
> > >  SEC("fentry/bpf_fentry_test1")
> > > @@ -60,3 +61,15 @@ int BPF_PROG(fmod_ret_test, int a, int *b, int ret)
> > >         test5_result = (const void *) addr == &bpf_modify_return_test;
> > >         return ret;
> > >  }
> > > +
> > > +#ifdef __x86_64__
> > > +__u64 test6_result = 0;
> >
> > see, and you just forgot to update the user-space part of the test to
> > even check test6_result...
> >
> > please group variables together and do explicit ASSERT_EQ
>
> right.. will change
>
> thanks,
> jirka
>
> >
> > > +SEC("kprobe/bpf_fentry_test6+0x5")
> > > +int test6(struct pt_regs *ctx)
> > > +{
> > > +       __u64 addr = bpf_get_func_ip(ctx);
> > > +
> > > +       test6_result = (const void *) addr == &bpf_fentry_test6 + 5;
> > > +       return 0;
> > > +}
> > > +#endif
> > > --
> > > 2.31.1
> > >
> >
>


* Re: [PATCHv3 bpf-next 7/7] selftests/bpf: Add test for bpf_get_func_ip in kprobe+offset probe
  2021-07-12 23:32       ` Andrii Nakryiko
@ 2021-07-13 14:15         ` Jiri Olsa
  0 siblings, 0 replies; 22+ messages in thread
From: Jiri Olsa @ 2021-07-13 14:15 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Networking,
	bpf, Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Masami Hiramatsu, Alan Maguire

On Mon, Jul 12, 2021 at 04:32:25PM -0700, Andrii Nakryiko wrote:
> On Sun, Jul 11, 2021 at 7:48 AM Jiri Olsa <jolsa@redhat.com> wrote:
> >
> > On Wed, Jul 07, 2021 at 05:18:49PM -0700, Andrii Nakryiko wrote:
> > > On Wed, Jul 7, 2021 at 2:54 PM Jiri Olsa <jolsa@redhat.com> wrote:
> > > >
> > > > Adding test for bpf_get_func_ip in kprobe+ofset probe.
> > >
> > > typo: offset
> > >
> > > > Because of the offset value it's arch specific, adding
> > > > it only for x86_64 architecture.
> > >
> > > I'm not following, you specified +0x5 offset explicitly, why is this
> > > arch-specific?
> >
> > I need an instruction at offset != 0 in the traced function;
> > x86_64's fentry jump is 5 bytes, other archs will be different
> 
> Right, ok. I don't see an easy way to detect this offset, but the
> #ifdef __x86_64__ detection doesn't work because we are compiling with
> -target bpf. Please double-check that it actually worked in the first
> place.

ugh, right

> 
> I think a better way would be to have test6 defined unconditionally in
> BPF code, but then disable loading test6 program on anything but
> x86_64 platform at runtime with bpf_program__set_autoload(false).

great, I did not know about this function; that will be easier

thanks,
jirka

> 
> >
> > >
> > > >
> > > > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > > > ---
> > > >  .../testing/selftests/bpf/progs/get_func_ip_test.c  | 13 +++++++++++++
> > > >  1 file changed, 13 insertions(+)
> > > >
> > > > diff --git a/tools/testing/selftests/bpf/progs/get_func_ip_test.c b/tools/testing/selftests/bpf/progs/get_func_ip_test.c
> > > > index 8ca54390d2b1..e8a9428a0ea3 100644
> > > > --- a/tools/testing/selftests/bpf/progs/get_func_ip_test.c
> > > > +++ b/tools/testing/selftests/bpf/progs/get_func_ip_test.c
> > > > @@ -10,6 +10,7 @@ extern const void bpf_fentry_test2 __ksym;
> > > >  extern const void bpf_fentry_test3 __ksym;
> > > >  extern const void bpf_fentry_test4 __ksym;
> > > >  extern const void bpf_modify_return_test __ksym;
> > > > +extern const void bpf_fentry_test6 __ksym;
> > > >
> > > >  __u64 test1_result = 0;
> > > >  SEC("fentry/bpf_fentry_test1")
> > > > @@ -60,3 +61,15 @@ int BPF_PROG(fmod_ret_test, int a, int *b, int ret)
> > > >         test5_result = (const void *) addr == &bpf_modify_return_test;
> > > >         return ret;
> > > >  }
> > > > +
> > > > +#ifdef __x86_64__
> > > > +__u64 test6_result = 0;
> > >
> > > see, and you just forgot to update the user-space part of the test to
> > > even check test6_result...
> > >
> > > please group variables together and do explicit ASSERT_EQ
> >
> > right.. will change
> >
> > thanks,
> > jirka
> >
> > >
> > > > +SEC("kprobe/bpf_fentry_test6+0x5")
> > > > +int test6(struct pt_regs *ctx)
> > > > +{
> > > > +       __u64 addr = bpf_get_func_ip(ctx);
> > > > +
> > > > +       test6_result = (const void *) addr == &bpf_fentry_test6 + 5;
> > > > +       return 0;
> > > > +}
> > > > +#endif
> > > > --
> > > > 2.31.1
> > > >
> > >
> >
> 



end of thread, other threads:[~2021-07-13 14:15 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-07-07 21:47 [PATCHv3 bpf-next 0/7] bpf, x86: Add bpf_get_func_ip helper Jiri Olsa
2021-07-07 21:47 ` [PATCHv3 bpf-next 1/7] bpf, x86: Store caller's ip in trampoline stack Jiri Olsa
2021-07-07 21:47 ` [PATCHv3 bpf-next 2/7] bpf: Enable BPF_TRAMP_F_IP_ARG for trampolines with call_get_func_ip Jiri Olsa
2021-07-07 21:47 ` [PATCHv3 bpf-next 3/7] bpf: Add bpf_get_func_ip helper for tracing programs Jiri Olsa
2021-07-08  0:06   ` Andrii Nakryiko
2021-07-11 14:48     ` Jiri Olsa
2021-07-08  2:11   ` Alexei Starovoitov
2021-07-11 14:48     ` Jiri Olsa
2021-07-07 21:47 ` [PATCHv3 bpf-next 4/7] bpf: Add bpf_get_func_ip helper for kprobe programs Jiri Olsa
2021-07-10  7:55   ` Masami Hiramatsu
2021-07-11 14:48     ` Jiri Olsa
2021-07-07 21:47 ` [PATCHv3 bpf-next 5/7] selftests/bpf: Add test for bpf_get_func_ip helper Jiri Olsa
2021-07-08  0:12   ` Andrii Nakryiko
2021-07-11 14:48     ` Jiri Olsa
2021-07-07 21:47 ` [PATCHv3 bpf-next 6/7] libbpf: allow specification of "kprobe/function+offset" Jiri Olsa
2021-07-08  0:14   ` Andrii Nakryiko
2021-07-11 14:48     ` Jiri Olsa
2021-07-07 21:47 ` [PATCHv3 bpf-next 7/7] selftests/bpf: Add test for bpf_get_func_ip in kprobe+offset probe Jiri Olsa
2021-07-08  0:18   ` Andrii Nakryiko
2021-07-11 14:48     ` Jiri Olsa
2021-07-12 23:32       ` Andrii Nakryiko
2021-07-13 14:15         ` Jiri Olsa

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).