LKML Archive on lore.kernel.org
* [PATCH v2 0/7] Documentation: KUnit: Rework KUnit documentation
@ 2021-12-07  5:40 Harinder Singh
  2021-12-07  5:40 ` [PATCH v2 1/7] Documentation: KUnit: Rewrite main page Harinder Singh
                   ` (6 more replies)
  0 siblings, 7 replies; 22+ messages in thread
From: Harinder Singh @ 2021-12-07  5:40 UTC (permalink / raw)
  To: davidgow, brendanhiggins, shuah, corbet
  Cc: linux-kselftest, kunit-dev, linux-doc, linux-kernel, tim.bird,
	Harinder Singh

The KUnit documentation was not well organized. There was little
information about the KUnit architecture and the importance of unit
testing.

Add some new pages, expand and reorganize the existing documentation.
Reword pages to make information and style more consistent.


Changes since v1:
https://lore.kernel.org/linux-kselftest/20211203042437.740255-1-sharinder@google.com/

- Fixed spelling mistakes
- Restored paragraph about the kunit_tool introduction
- Added a note about CONFIG_KUNIT_ALL_TESTS (thanks to Tim Bird for the
  review comments)
- Miscellaneous changes


Harinder Singh (7):
  Documentation: KUnit: Rewrite main page
  Documentation: KUnit: Rewrite getting started
  Documentation: KUnit: Added KUnit Architecture
  Documentation: kunit: Reorganize documentation related to running
    tests
  Documentation: KUnit: Rework writing page to focus on writing tests
  Documentation: KUnit: Restyle Test Style and Nomenclature page
  Documentation: KUnit: Restyled Frequently Asked Questions

 .../dev-tools/kunit/architecture.rst          | 206 +++++++
 Documentation/dev-tools/kunit/faq.rst         |  73 ++-
 Documentation/dev-tools/kunit/index.rst       | 172 +++---
 .../kunit/kunit_suitememorydiagram.png        | Bin 0 -> 24174 bytes
 Documentation/dev-tools/kunit/run_manual.rst  |  57 ++
 Documentation/dev-tools/kunit/run_wrapper.rst | 247 ++++++++
 Documentation/dev-tools/kunit/start.rst       | 198 +++---
 Documentation/dev-tools/kunit/style.rst       | 101 ++--
 Documentation/dev-tools/kunit/usage.rst       | 570 ++++++++----------
 9 files changed, 1039 insertions(+), 585 deletions(-)
 create mode 100644 Documentation/dev-tools/kunit/architecture.rst
 create mode 100644 Documentation/dev-tools/kunit/kunit_suitememorydiagram.png
 create mode 100644 Documentation/dev-tools/kunit/run_manual.rst
 create mode 100644 Documentation/dev-tools/kunit/run_wrapper.rst


base-commit: 4c388a8e740d3235a194f330c8ef327deef710f6
-- 
2.34.1.400.ga245620fadb-goog


^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH v2 1/7] Documentation: KUnit: Rewrite main page
  2021-12-07  5:40 [PATCH v2 0/7] Documentation: KUnit: Rework KUnit documentation Harinder Singh
@ 2021-12-07  5:40 ` Harinder Singh
  2021-12-07 17:11   ` Tim.Bird
  2021-12-07  5:40 ` [PATCH v2 2/7] Documentation: KUnit: Rewrite getting started Harinder Singh
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 22+ messages in thread
From: Harinder Singh @ 2021-12-07  5:40 UTC (permalink / raw)
  To: davidgow, brendanhiggins, shuah, corbet
  Cc: linux-kselftest, kunit-dev, linux-doc, linux-kernel, tim.bird,
	Harinder Singh

Add sections on the advantages of unit testing, how to write unit
tests, KUnit features, and prerequisites.

Signed-off-by: Harinder Singh <sharinder@google.com>
---
 Documentation/dev-tools/kunit/index.rst | 166 +++++++++++++-----------
 1 file changed, 88 insertions(+), 78 deletions(-)

diff --git a/Documentation/dev-tools/kunit/index.rst b/Documentation/dev-tools/kunit/index.rst
index cacb35ec658d..ebf4bffaa1ca 100644
--- a/Documentation/dev-tools/kunit/index.rst
+++ b/Documentation/dev-tools/kunit/index.rst
@@ -1,11 +1,12 @@
 .. SPDX-License-Identifier: GPL-2.0
 
-=========================================
-KUnit - Unit Testing for the Linux Kernel
-=========================================
+=================================
+KUnit - Linux Kernel Unit Testing
+=================================
 
 .. toctree::
 	:maxdepth: 2
+	:caption: Contents:
 
 	start
 	usage
@@ -16,82 +17,91 @@ KUnit - Unit Testing for the Linux Kernel
 	tips
 	running_tips
 
-What is KUnit?
-==============
-
-KUnit is a lightweight unit testing and mocking framework for the Linux kernel.
-
-KUnit is heavily inspired by JUnit, Python's unittest.mock, and
-Googletest/Googlemock for C++. KUnit provides facilities for defining unit test
-cases, grouping related test cases into test suites, providing common
-infrastructure for running tests, and much more.
-
-KUnit consists of a kernel component, which provides a set of macros for easily
-writing unit tests. Tests written against KUnit will run on kernel boot if
-built-in, or when loaded if built as a module. These tests write out results to
-the kernel log in `TAP <https://testanything.org/>`_ format.
-
-To make running these tests (and reading the results) easier, KUnit offers
-:doc:`kunit_tool <kunit-tool>`, which builds a `User Mode Linux
-<http://user-mode-linux.sourceforge.net>`_ kernel, runs it, and parses the test
-results. This provides a quick way of running KUnit tests during development,
-without requiring a virtual machine or separate hardware.
-
-Get started now: Documentation/dev-tools/kunit/start.rst
-
-Why KUnit?
-==========
-
-A unit test is supposed to test a single unit of code in isolation, hence the
-name. A unit test should be the finest granularity of testing and as such should
-allow all possible code paths to be tested in the code under test; this is only
-possible if the code under test is very small and does not have any external
-dependencies outside of the test's control like hardware.
-
-KUnit provides a common framework for unit tests within the kernel.
-
-KUnit tests can be run on most architectures, and most tests are architecture
-independent. All built-in KUnit tests run on kernel startup.  Alternatively,
-KUnit and KUnit tests can be built as modules and tests will run when the test
-module is loaded.
-
-.. note::
-
-        KUnit can also run tests without needing a virtual machine or actual
-        hardware under User Mode Linux. User Mode Linux is a Linux architecture,
-        like ARM or x86, which compiles the kernel as a Linux executable. KUnit
-        can be used with UML either by building with ``ARCH=um`` (like any other
-        architecture), or by using :doc:`kunit_tool <kunit-tool>`.
-
-KUnit is fast. Excluding build time, from invocation to completion KUnit can run
-several dozen tests in only 10 to 20 seconds; this might not sound like a big
-deal to some people, but having such fast and easy to run tests fundamentally
-changes the way you go about testing and even writing code in the first place.
-Linus himself said in his `git talk at Google
-<https://gist.github.com/lorn/1272686/revisions#diff-53c65572127855f1b003db4064a94573R874>`_:
-
-	"... a lot of people seem to think that performance is about doing the
-	same thing, just doing it faster, and that is not true. That is not what
-	performance is all about. If you can do something really fast, really
-	well, people will start using it differently."
-
-In this context Linus was talking about branching and merging,
-but this point also applies to testing. If your tests are slow, unreliable, are
-difficult to write, and require a special setup or special hardware to run,
-then you wait a lot longer to write tests, and you wait a lot longer to run
-tests; this means that tests are likely to break, unlikely to test a lot of
-things, and are unlikely to be rerun once they pass. If your tests are really
-fast, you run them all the time, every time you make a change, and every time
-someone sends you some code. Why trust that someone ran all their tests
-correctly on every change when you can just run them yourself in less time than
-it takes to read their test log?
+This section details the kernel unit testing framework.
+
+Introduction
+============
+
+KUnit (Kernel unit testing framework) provides a common framework for
+unit tests within the Linux kernel. Using KUnit, you can define groups
+of test cases called test suites. The tests either run on kernel boot
+if built-in, or when loaded as a module. KUnit automatically flags and
+reports failed test cases in the kernel log. The results appear in `TAP
+(Test Anything Protocol) format <https://testanything.org/>`_. It is inspired by
+JUnit, Python’s unittest.mock, and GoogleTest/GoogleMock (C++ unit testing
+framework).
+
+KUnit tests are part of the kernel, written in the C programming
+language, and test parts of the kernel implementation (example: a C
+language function). Excluding build time, from invocation to
+completion, KUnit can run around 100 tests in less than 10 seconds.
+KUnit can test any kernel component, for example: file system, system
+calls, memory management, device drivers and so on.
+
+KUnit follows the white-box testing approach. The test has access to
+internal system functionality. KUnit runs in kernel space and is not
+restricted to things exposed to user-space.
+
+In addition, KUnit has kunit_tool, a script (``tools/testing/kunit/kunit.py``)
+that configures the Linux kernel, runs KUnit tests under QEMU or UML (`User Mode
+Linux <http://user-mode-linux.sourceforge.net/>`_), parses the test results and
+displays them in a user friendly manner.
+
+Features
+--------
+
+- Provides a framework for writing unit tests.
+- Runs tests on any kernel architecture.
+- Runs a test in milliseconds.
+
+Prerequisites
+-------------
+
+- Any Linux kernel-compatible hardware.
+- For the kernel under test, Linux kernel version 5.5 or later.
+
+Unit Testing
+============
+
+A unit test tests a single unit of code in isolation. A unit test is the finest
+granularity of testing and allows all possible code paths to be tested in the
+code under test. This is possible if the code under test is small and does not
+have any external dependencies outside of the test's control like hardware.
+
+
+Write Unit Tests
+----------------
+
+To write good unit tests, there is a simple but powerful pattern:
+Arrange-Act-Assert. This is a great way to structure test cases and
+defines an order of operations.
+
+- Arrange inputs and targets: At the start of the test, arrange the data
+  that allows a function to work. Example: initialize a variable or
+  object.
+- Act on the target behavior: Call your function/code under test.
+- Assert expected outcome: Verify that the result (or resulting state)
+  is as expected.
+
+Unit Testing Advantages
+-----------------------
+
+- Increases testing speed and development velocity in the long run.
+- Detects bugs at an early stage and therefore decreases the cost of
+  fixing bugs compared to acceptance testing.
+- Improves code quality.
+- Encourages writing testable code.
 
 How do I use it?
 ================
 
-*   Documentation/dev-tools/kunit/start.rst - for new users of KUnit
-*   Documentation/dev-tools/kunit/tips.rst - for short examples of best practices
-*   Documentation/dev-tools/kunit/usage.rst - for a more detailed explanation of KUnit features
-*   Documentation/dev-tools/kunit/api/index.rst - for the list of KUnit APIs used for testing
-*   Documentation/dev-tools/kunit/kunit-tool.rst - for more information on the kunit_tool helper script
-*   Documentation/dev-tools/kunit/faq.rst - for answers to some common questions about KUnit
+*   Documentation/dev-tools/kunit/start.rst - for new KUnit users.
+*   Documentation/dev-tools/kunit/usage.rst - KUnit features.
+*   Documentation/dev-tools/kunit/tips.rst - best practices with
+    examples.
+*   Documentation/dev-tools/kunit/api/index.rst - KUnit APIs
+    used for testing.
+*   Documentation/dev-tools/kunit/kunit-tool.rst - kunit_tool helper
+    script.
+*   Documentation/dev-tools/kunit/faq.rst - KUnit common questions and
+    answers.
-- 
2.34.1.400.ga245620fadb-goog



* [PATCH v2 2/7] Documentation: KUnit: Rewrite getting started
  2021-12-07  5:40 [PATCH v2 0/7] Documentation: KUnit: Rework KUnit documentation Harinder Singh
  2021-12-07  5:40 ` [PATCH v2 1/7] Documentation: KUnit: Rewrite main page Harinder Singh
@ 2021-12-07  5:40 ` Harinder Singh
  2021-12-07  5:40 ` [PATCH v2 3/7] Documentation: KUnit: Added KUnit Architecture Harinder Singh
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 22+ messages in thread
From: Harinder Singh @ 2021-12-07  5:40 UTC (permalink / raw)
  To: davidgow, brendanhiggins, shuah, corbet
  Cc: linux-kselftest, kunit-dev, linux-doc, linux-kernel, tim.bird,
	Harinder Singh

Clarify the purpose of kunit_tool and fix consistency issues

Signed-off-by: Harinder Singh <sharinder@google.com>
---
 Documentation/dev-tools/kunit/start.rst | 195 +++++++++++++-----------
 1 file changed, 102 insertions(+), 93 deletions(-)

diff --git a/Documentation/dev-tools/kunit/start.rst b/Documentation/dev-tools/kunit/start.rst
index 1e00f9226f74..55f8df1abd40 100644
--- a/Documentation/dev-tools/kunit/start.rst
+++ b/Documentation/dev-tools/kunit/start.rst
@@ -4,132 +4,136 @@
 Getting Started
 ===============
 
-Installing dependencies
+Installing Dependencies
 =======================
-KUnit has the same dependencies as the Linux kernel. As long as you can build
-the kernel, you can run KUnit.
+KUnit has the same dependencies as the Linux kernel. As long as you can
+build the kernel, you can run KUnit.
 
-Running tests with the KUnit Wrapper
-====================================
-Included with KUnit is a simple Python wrapper which runs tests under User Mode
-Linux, and formats the test results.
-
-The wrapper can be run with:
+Running tests with kunit_tool
+=============================
+kunit_tool is a Python script that configures and builds a kernel, runs
+tests, and formats the test results. From the kernel repository, you
+can run kunit_tool:
 
 .. code-block:: bash
 
 	./tools/testing/kunit/kunit.py run
 
-For more information on this wrapper (also called kunit_tool) check out the
-Documentation/dev-tools/kunit/kunit-tool.rst page.
+For more information on this wrapper, see:
+Documentation/dev-tools/kunit/kunit-tool.rst.
+
+Creating a ``.kunitconfig``
+---------------------------
+
+By default, kunit_tool runs a selection of tests. However, you can specify which
+unit tests to run by creating a ``.kunitconfig`` file with kernel config options
+that enable only a specific set of tests and their dependencies.
+The ``.kunitconfig`` file contains a list of kconfig options which are required
+to run the desired targets. The ``.kunitconfig`` also contains any other test
+specific config options, such as test dependencies. For example: the
+``FAT_FS`` test, ``FAT_KUNIT_TEST``, depends on ``FAT_FS``.
+``FAT_FS`` can be enabled by selecting either ``MSDOS_FS`` or
+``VFAT_FS``. To run ``FAT_KUNIT_TEST``, the ``.kunitconfig`` has:
 
-Creating a .kunitconfig
------------------------
-If you want to run a specific set of tests (rather than those listed in the
-KUnit defconfig), you can provide Kconfig options in the ``.kunitconfig`` file.
-This file essentially contains the regular Kernel config, with the specific
-test targets as well. The ``.kunitconfig`` should also contain any other config
-options required by the tests.
+.. code-block:: none
+
+	CONFIG_KUNIT=y
+	CONFIG_MSDOS_FS=y
+	CONFIG_FAT_KUNIT_TEST=y
 
-A good starting point for a ``.kunitconfig`` is the KUnit defconfig:
+1. A good starting point for the ``.kunitconfig`` is the KUnit default
+   config. Run the command:
 
 .. code-block:: bash
 
 	cd $PATH_TO_LINUX_REPO
 	cp tools/testing/kunit/configs/default.config .kunitconfig
 
-You can then add any other Kconfig options you wish, e.g.:
+.. note::
+   You may want to remove ``CONFIG_KUNIT_ALL_TESTS`` from the ``.kunitconfig``
+   as it will enable a number of additional tests that you may not want.
+
+2. You can then add any other Kconfig options, for example:
 
 .. code-block:: none
 
 	CONFIG_LIST_KUNIT_TEST=y
 
-:doc:`kunit_tool <kunit-tool>` will ensure that all config options set in
-``.kunitconfig`` are set in the kernel ``.config`` before running the tests.
-It'll warn you if you haven't included the dependencies of the options you're
-using.
+Before running the tests, kunit_tool ensures that all config options
+set in ``.kunitconfig`` are set in the kernel ``.config``. It will warn
+you if you have not included dependencies for the options used.
 
-.. note::
-   Note that removing something from the ``.kunitconfig`` will not trigger a
-   rebuild of the ``.config`` file: the configuration is only updated if the
-   ``.kunitconfig`` is not a subset of ``.config``. This means that you can use
-   other tools (such as make menuconfig) to adjust other config options.
+.. note::
+   The configuration is only updated if the ``.kunitconfig`` is not a
+   subset of ``.config``. You can use tools (for example:
+   ``make menuconfig``) to adjust other config options.
 
-
-Running the tests (KUnit Wrapper)
----------------------------------
-
-To make sure that everything is set up correctly, simply invoke the Python
-wrapper from your kernel repo:
+Running Tests (KUnit Wrapper)
+-----------------------------
+1. To make sure that everything is set up correctly, invoke the Python
+   wrapper from your kernel repository:
 
 .. code-block:: bash
 
 	./tools/testing/kunit/kunit.py run
 
-.. note::
-   You may want to run ``make mrproper`` first.
-
 If everything worked correctly, you should see the following:
 
-.. code-block:: bash
+.. code-block::
 
 	Generating .config ...
 	Building KUnit Kernel ...
 	Starting KUnit Kernel ...
 
-followed by a list of tests that are run. All of them should be passing.
+This is followed by the list of tests run, each marked passed or failed.
 
-.. note::
-	Because it is building a lot of sources for the first time, the
-	``Building KUnit kernel`` step may take a while.
+.. note::
+   Because it is building a lot of sources for the first time, the
+   ``Building KUnit kernel`` step may take a while.
 
-Running tests without the KUnit Wrapper
+Running Tests without the KUnit Wrapper
 =======================================
-
-If you'd rather not use the KUnit Wrapper (if, for example, you need to
-integrate with other systems, or use an architecture other than UML), KUnit can
-be included in any kernel, and the results read out and parsed manually.
-
-.. note::
-   KUnit is not designed for use in a production system, and it's possible that
-   tests may reduce the stability or security of the system.
-
-
-
-Configuring the kernel
+If you do not want to use the KUnit Wrapper (for example: you want code
+under test to integrate with other systems, or use an architecture or
+configuration unsupported by the wrapper), KUnit can be included in
+any kernel, and the results read out and parsed manually.
+
+.. note::
+   ``CONFIG_KUNIT`` should not be enabled in a production environment.
+   Enabling KUnit disables Kernel Address-Space Layout Randomization
+   (KASLR), and tests may affect the state of the kernel in ways not
+   suitable for production.
+
+Configuring the Kernel
 ----------------------
+To enable KUnit itself, you need to enable the ``CONFIG_KUNIT`` Kconfig
+option (under Kernel Hacking/Kernel Testing and Coverage in
+``menuconfig``). From there, you can enable any KUnit tests. They
+usually have config options ending in ``_KUNIT_TEST``.
 
-In order to enable KUnit itself, you simply need to enable the ``CONFIG_KUNIT``
-Kconfig option (it's under Kernel Hacking/Kernel Testing and Coverage in
-menuconfig). From there, you can enable any KUnit tests you want: they usually
-have config options ending in ``_KUNIT_TEST``.
-
-KUnit and KUnit tests can be compiled as modules: in this case the tests in a
-module will be run when the module is loaded.
+KUnit and KUnit tests can be compiled as modules. The tests in a module
+will run when the module is loaded.
 
-
-Running the tests (w/o KUnit Wrapper)
+Running Tests (without KUnit Wrapper)
 -------------------------------------
+Build and run your kernel. In the kernel log, the test output is
+printed in TAP format. This happens by default only if KUnit/tests are
+built-in. Otherwise the module will need to be loaded.
 
-Build and run your kernel as usual. Test output will be written to the kernel
-log in `TAP <https://testanything.org/>`_ format.
-
-.. note::
-   It's possible that there will be other lines and/or data interspersed in the
-   TAP output.
-
+.. note::
+   Some lines and/or data may get interspersed in the TAP output.
 
-Writing your first test
+Writing Your First Test
 =======================
+In your kernel repository, let's add some code that we can test.
 
-In your kernel repo let's add some code that we can test. Create a file
-``drivers/misc/example.h`` with the contents:
+1. Create a file ``drivers/misc/example.h``, which includes:
 
 .. code-block:: c
 
 	int misc_example_add(int left, int right);
 
-create a file ``drivers/misc/example.c``:
+2. Create a file ``drivers/misc/example.c``, which includes:
 
 .. code-block:: c
 
@@ -142,21 +146,22 @@ create a file ``drivers/misc/example.c``:
 		return left + right;
 	}
 
-Now add the following lines to ``drivers/misc/Kconfig``:
+3. Add the following lines to ``drivers/misc/Kconfig``:
 
 .. code-block:: kconfig
 
 	config MISC_EXAMPLE
 		bool "My example"
 
-and the following lines to ``drivers/misc/Makefile``:
+4. Add the following lines to ``drivers/misc/Makefile``:
 
 .. code-block:: make
 
 	obj-$(CONFIG_MISC_EXAMPLE) += example.o
 
-Now we are ready to write the test. The test will be in
-``drivers/misc/example-test.c``:
+Now we are ready to write the test cases.
+
+1. Add the following test case in ``drivers/misc/example_test.c``:
 
 .. code-block:: c
 
@@ -191,7 +196,7 @@ Now we are ready to write the test. The test will be in
 	};
 	kunit_test_suite(misc_example_test_suite);
 
-Now add the following to ``drivers/misc/Kconfig``:
+2. Add the following lines to ``drivers/misc/Kconfig``:
 
 .. code-block:: kconfig
 
@@ -200,20 +205,20 @@ Now add the following to ``drivers/misc/Kconfig``:
 		depends on MISC_EXAMPLE && KUNIT=y
 		default KUNIT_ALL_TESTS
 
-and the following to ``drivers/misc/Makefile``:
+3. Add the following lines to ``drivers/misc/Makefile``:
 
 .. code-block:: make
 
-	obj-$(CONFIG_MISC_EXAMPLE_TEST) += example-test.o
+	obj-$(CONFIG_MISC_EXAMPLE_TEST) += example_test.o
 
-Now add it to your ``.kunitconfig``:
+4. Add the following lines to ``.kunitconfig``:
 
 .. code-block:: none
 
 	CONFIG_MISC_EXAMPLE=y
 	CONFIG_MISC_EXAMPLE_TEST=y
 
-Now you can run the test:
+5. Run the test:
 
 .. code-block:: bash
 
@@ -227,16 +232,20 @@ You should see the following failure:
 	[16:08:57] [PASSED] misc-example:misc_example_add_test_basic
 	[16:08:57] [FAILED] misc-example:misc_example_test_failure
 	[16:08:57] EXPECTATION FAILED at drivers/misc/example-test.c:17
-	[16:08:57] 	This test never passes.
+	[16:08:57]      This test never passes.
 	...
 
-Congrats! You just wrote your first KUnit test!
+Congrats! You just wrote your first KUnit test.
 
 Next Steps
 ==========
-*   Check out the Documentation/dev-tools/kunit/tips.rst page for tips on
-    writing idiomatic KUnit tests.
-*   Check out the :doc:`running_tips` page for tips on
-    how to make running KUnit tests easier.
-*   Optional: see the :doc:`usage` page for a more
-    in-depth explanation of KUnit.
+
+*   Documentation/dev-tools/kunit/usage.rst - KUnit features.
+*   Documentation/dev-tools/kunit/tips.rst - best practices with
+    examples.
+*   Documentation/dev-tools/kunit/api/index.rst - KUnit APIs
+    used for testing.
+*   Documentation/dev-tools/kunit/kunit-tool.rst - kunit_tool helper
+    script.
+*   Documentation/dev-tools/kunit/faq.rst - KUnit common questions and
+    answers.
-- 
2.34.1.400.ga245620fadb-goog



* [PATCH v2 3/7] Documentation: KUnit: Added KUnit Architecture
  2021-12-07  5:40 [PATCH v2 0/7] Documentation: KUnit: Rework KUnit documentation Harinder Singh
  2021-12-07  5:40 ` [PATCH v2 1/7] Documentation: KUnit: Rewrite main page Harinder Singh
  2021-12-07  5:40 ` [PATCH v2 2/7] Documentation: KUnit: Rewrite getting started Harinder Singh
@ 2021-12-07  5:40 ` Harinder Singh
  2021-12-07 17:24   ` Tim.Bird
  2021-12-10 23:08   ` Marco Elver
  2021-12-07  5:40 ` [PATCH v2 4/7] Documentation: kunit: Reorganize documentation related to running tests Harinder Singh
                   ` (3 subsequent siblings)
  6 siblings, 2 replies; 22+ messages in thread
From: Harinder Singh @ 2021-12-07  5:40 UTC (permalink / raw)
  To: davidgow, brendanhiggins, shuah, corbet
  Cc: linux-kselftest, kunit-dev, linux-doc, linux-kernel, tim.bird,
	Harinder Singh

Describe the components of KUnit and how the kernel mode parts
interact with kunit_tool.

Signed-off-by: Harinder Singh <sharinder@google.com>
---
 .../dev-tools/kunit/architecture.rst          | 206 ++++++++++++++++++
 Documentation/dev-tools/kunit/index.rst       |   2 +
 .../kunit/kunit_suitememorydiagram.png        | Bin 0 -> 24174 bytes
 Documentation/dev-tools/kunit/start.rst       |   1 +
 4 files changed, 209 insertions(+)
 create mode 100644 Documentation/dev-tools/kunit/architecture.rst
 create mode 100644 Documentation/dev-tools/kunit/kunit_suitememorydiagram.png

diff --git a/Documentation/dev-tools/kunit/architecture.rst b/Documentation/dev-tools/kunit/architecture.rst
new file mode 100644
index 000000000000..bb0fb3e3ed01
--- /dev/null
+++ b/Documentation/dev-tools/kunit/architecture.rst
@@ -0,0 +1,206 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+==================
+KUnit Architecture
+==================
+
+The KUnit architecture can be divided into two parts:
+
+- Kernel testing library
+- kunit_tool (Command line test harness)
+
+In-Kernel Testing Framework
+===========================
+
+The kernel testing library supports KUnit tests written in C.
+KUnit tests are kernel code. KUnit does several things:
+
+- Organizes tests
+- Reports test results
+- Provides test utilities
+
+Test Cases
+----------
+
+The fundamental unit in KUnit is the test case. The KUnit test cases are
+grouped into KUnit suites. A KUnit test case is a function with type
+signature ``void (*)(struct kunit *test)``.
+These test case functions are wrapped in a struct called
+``struct kunit_case``. For code, see:
+https://elixir.bootlin.com/linux/latest/source/include/kunit/test.h#L145
+
+It includes:
+
+- ``run_case``: the function implementing the actual test case.
+- ``name``: the test case name.
+- ``generate_params``: the parameterized tests generator function. This
+  is optional for non-parameterized tests.
+
+Each KUnit test case gets a ``struct kunit`` context
+object passed to it that tracks a running test. The KUnit assertion
+macros and other KUnit utilities use the ``struct kunit`` context
+object. Two fields, however, are for direct use by tests:
+
+- ``->priv``: The setup functions can use it to store arbitrary test
+  user data.
+
+- ``->param_value``: It contains the parameter value which can be
+  retrieved in the parameterized tests.
+
+Test Suites
+-----------
+
+A KUnit suite includes a collection of test cases. The KUnit suites
+are represented by the ``struct kunit_suite``. For example:
+
+.. code-block:: c
+
+	static struct kunit_case example_test_cases[] = {
+		KUNIT_CASE(example_test_foo),
+		KUNIT_CASE(example_test_bar),
+		KUNIT_CASE(example_test_baz),
+		{}
+	};
+
+	static struct kunit_suite example_test_suite = {
+		.name = "example",
+		.init = example_test_init,
+		.exit = example_test_exit,
+		.test_cases = example_test_cases,
+	};
+	kunit_test_suite(example_test_suite);
+
+In the above example, the test suite ``example_test_suite`` runs the
+test cases ``example_test_foo``, ``example_test_bar``, and
+``example_test_baz``. Before each test case, ``example_test_init``
+is called, and after each test case, ``example_test_exit`` is called.
+The ``kunit_test_suite(example_test_suite)`` registers the test suite
+with the KUnit test framework.
+
+Executor
+--------
+
+The KUnit executor can list and run built-in KUnit tests on boot.
+Test suites are stored in a linker section
+called ``.kunit_test_suites``. For code, see:
+https://elixir.bootlin.com/linux/v5.12/source/include/asm-generic/vmlinux.lds.h#L918.
+The linker section consists of an array of pointers to
+``struct kunit_suite``, and is populated by the ``kunit_test_suites()``
+macro. To run all tests compiled into the kernel, the KUnit executor
+iterates over the linker section array.
+
+.. kernel-figure:: kunit_suitememorydiagram.png
+	:alt:	KUnit Suite Memory
+
+	KUnit Suite Memory Diagram
+
+On kernel boot, the KUnit executor uses the start and end addresses
+of this section to iterate over and run all tests. For code, see:
+https://elixir.bootlin.com/linux/latest/source/lib/kunit/executor.c
+
+When built as a module, the ``kunit_test_suites()`` macro defines a
+``module_init()`` function, which runs all the tests in the compilation
+unit instead of utilizing the executor.
+
+So that certain classes of errors do not affect other tests
+or parts of the kernel, each KUnit case executes in a separate thread
+context. For code, see:
+https://elixir.bootlin.com/linux/latest/source/lib/kunit/try-catch.c#L58
+
+Assertion Macros
+----------------
+
+KUnit tests verify state using expectations/assertions.
+All expectations/assertions are formatted as:
+``KUNIT_{EXPECT|ASSERT}_<op>[_MSG](kunit, property[, message])``
+
+- ``{EXPECT|ASSERT}`` determines whether the check is an assertion or an
+  expectation.
+
+	- For an expectation, if the check fails, KUnit marks the test
+	  as failed and logs the failure.
+
+	- An assertion, on failure, causes the test case to terminate
+	  immediately.
+
+		- Assertions call function:
+		  ``void __noreturn kunit_abort(struct kunit *)``.
+
+		- ``kunit_abort`` calls function:
+		  ``void __noreturn kunit_try_catch_throw(struct kunit_try_catch *try_catch)``.
+
+		- ``kunit_try_catch_throw`` calls function:
+		  ``void complete_and_exit(struct completion *, long) __noreturn;``
+		  and terminates the special thread context.
+
+- ``<op>`` denotes a check with options: ``TRUE`` (supplied property
+  has the boolean value “true”), ``EQ`` (two supplied properties are
+  equal), ``NOT_ERR_OR_NULL`` (supplied pointer is not null and does not
+  contain an “err” value).
+
+- ``[_MSG]`` prints a custom message on failure.
+
+Test Result Reporting
+---------------------
+KUnit prints test results in KTAP format. KTAP is based on TAP14, see:
+https://github.com/isaacs/testanything.github.io/blob/tap14/tap-version-14-specification.md.
+KTAP (a format yet to be standardized) works with KUnit and Kselftest.
+The KUnit executor prints KTAP results to dmesg, and debugfs
+(if configured).
+
+Parameterized Tests
+-------------------
+
+Each KUnit parameterized test is associated with a collection of
+parameters. The test is invoked multiple times, once for each parameter
+value and the parameter is stored in the ``param_value`` field.
+The test case includes a ``KUNIT_CASE_PARAM()`` macro that accepts a
+generator function.
+The generator function returns the next parameter given the
+previous parameter. KUnit also provides a macro to
+generate common-case generators based on arrays.
+
+For code, see:
+https://elixir.bootlin.com/linux/v5.12/source/include/kunit/test.h#L1783
+
+
+
+
+kunit_tool (Command Line Test Harness)
+======================================
+
+kunit_tool is a Python script (``tools/testing/kunit/kunit.py``) with
+subcommands to configure, build, exec, and parse test results; the
+``run`` subcommand runs the other commands in order. You can either run
+KUnit tests using kunit_tool or include KUnit in the kernel and parse results manually.
+
+- ``config`` command generates the kernel ``.config`` from a
+  ``.kunitconfig`` file (and any architecture-specific options).
+  For some architectures, additional config options are specified in the
+  ``qemu_config`` Python script
+  (For example: ``tools/testing/kunit/qemu_configs/powerpc.py``).
+  It parses both the existing ``.config`` and the ``.kunitconfig`` files
+  and ensures that ``.config`` is a superset of ``.kunitconfig``.
+  If this is not the case, it will combine the two and run
+  ``make olddefconfig`` to regenerate the ``.config`` file. It then
+  verifies that ``.config`` is now a superset. This checks if all
+  Kconfig dependencies are correctly specified in ``.kunitconfig``.
+  ``kunit_config.py`` contains the code for parsing Kconfigs. The code
+  which runs ``make olddefconfig`` is part of ``kunit_kernel.py``. You
+  can invoke this command via ``./tools/testing/kunit/kunit.py config``
+  to generate a ``.config`` file.
+- ``build`` runs ``make`` on the kernel tree with the required options
+  (these depend on the architecture and on options such as the build
+  directory) and reports any errors.
+  To build a KUnit kernel from the current ``.config``, you can use the
+  ``build`` argument: ``./tools/testing/kunit/kunit.py build``.
+- ``exec`` command runs the kernel, either directly (when using the
+  User-mode Linux configuration), or via an emulator such
+  as QEMU. It reads the test results from the kernel log via standard
+  output (stdout), and passes them to ``parse`` to be parsed.
+  If you have already built a kernel with built-in KUnit tests,
+  you can run the kernel and display the test results with the ``exec``
+  argument: ``./tools/testing/kunit/kunit.py exec``.
+- ``parse`` extracts the KTAP output from a kernel log, parses
+  the test results, and prints a summary. For failed tests, any
+  diagnostic output will be included.
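The "superset" check performed by the ``configure`` step can be sketched in a few lines of Python. This is a simplified illustration only, not the actual ``kunit_config.py`` code; the function names here are hypothetical:

```python
def parse_config(text):
    """Parse CONFIG_FOO=y style lines into a dict, skipping comments."""
    entries = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, value = line.partition("=")
        entries[name] = value
    return entries


def is_superset(dotconfig, kunitconfig):
    """True if every option in .kunitconfig appears, with the same
    value, in .config."""
    return all(dotconfig.get(k) == v for k, v in kunitconfig.items())


kunitconfig = parse_config("CONFIG_KUNIT=y\nCONFIG_KUNIT_EXAMPLE_TEST=y")
dotconfig = parse_config("CONFIG_KUNIT=y\n# comment\nCONFIG_NET=y")
print(is_superset(dotconfig, kunitconfig))  # missing option -> False
```

If the check fails, as in the example above, kunit_tool combines the two files and reruns ``make olddefconfig`` before checking again.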
diff --git a/Documentation/dev-tools/kunit/index.rst b/Documentation/dev-tools/kunit/index.rst
index ebf4bffaa1ca..75e4ae85adbb 100644
--- a/Documentation/dev-tools/kunit/index.rst
+++ b/Documentation/dev-tools/kunit/index.rst
@@ -9,6 +9,7 @@ KUnit - Linux Kernel Unit Testing
 	:caption: Contents:
 
 	start
+	architecture
 	usage
 	kunit-tool
 	api/index
@@ -96,6 +97,7 @@ How do I use it?
 ================
 
 *   Documentation/dev-tools/kunit/start.rst - for KUnit new users.
+*   Documentation/dev-tools/kunit/architecture.rst - KUnit architecture.
 *   Documentation/dev-tools/kunit/usage.rst - KUnit features.
 *   Documentation/dev-tools/kunit/tips.rst - best practices with
     examples.
diff --git a/Documentation/dev-tools/kunit/kunit_suitememorydiagram.png b/Documentation/dev-tools/kunit/kunit_suitememorydiagram.png
new file mode 100644
index 0000000000000000000000000000000000000000..a1aa7c3b0f63edfea83eb1cef3e2257b47b5ca7b
GIT binary patch
literal 24174
zcmd43cT`m0vNhTS0f_=48AJt<jDR2@D4-}ArO82Z&N*Ws3#dp=0+N~}IR^<MAR;0(
zIR_P*C?Jx)YJTUQd+s;h81IeuzCRuVk>2dRd+oJqRn3|;tHae)<jIH`h!F?`*@OFX
z4-p7lCj<hgiHHD>#6>A;z<+0*?(4WB5F}LCe>mSJ0}c_0i--qucQm}lRwo1P^!EJu
z5BKOp>~Gs=;83s#d)%?ui?1j9nk}4eM@sR4Fq1$(A!}VE%17|d)mM(+!eTVI9oB_7
z9?F(t+B96WCsaH*ZfPV>aJoN#H0>5IpqR?~So2&gZiu$>!!(?qpNmMs$Id!Ev6OcS
zb?{1GmJED3?MQy@;ky8}OF4Ta0(MCYH$$gobsPhmTh8~E-<E@~;`y2peC#izPS~Fv
z2>Jhh@&EdeM89Lj*)ZK{Hf3M@RwlhHW4{{GdHWl9l!C341Gx?2xBBjlm|cx#wY+>z
zebAVeixR%@n6N+L1nk#8FaCdeXux4_+zvM@ZN6i3-VXoLB?7TaHk9jl8)ftiW6g4M
z%CcoVk;T}_{VpR$5FG|zIHy19X0X5i>&1T_vSKg6lb3UUbbgT%5u@Srxx8rIVaYWt
z%qlK0rjDG6vY4&apSIN#h0kv;lp!l8H&jLv_uqG72sfPxv*MkaYFj=>sZPTLw}Q7e
z83|VS-`De>#r+>k{LhQ;TeUCw2gsf753=A0bj4fIP4l*UrS#p#8b#$#L$A%dH`+KP
zP7b%>aZQ3QXMG$Hj*7Z1S4rYTdb(==9~a7=dM#L?^33_WH_tq7!IZ9XA9K|Z2H8f1
zvi|zBl^|`-O^vk+b&T558K-<6w_9euJzCX29`52j4!V3gp7NKwFM0DP31_yI-IDe;
zRZi^)GyL9zxb=g{eOKcZE-S1p!^3J=K0e?ej;jidc!F`Yqq&5=+@_?}LpsP1F=b+9
z`uH+-J+nB-?XTkaIfphY)2<6+&2d15$iav|q&iz-1LIFTLygcuoKOT6HqM`JoMNVl
zkTO@4eG6Ym<zD0y40Uh$an?6#3hS}UjR*gnabcpF{9^Qy?v!b<F~(o-af6M{a93l+
z>7XP8YrSzAUkvMP$^NON`#weJ)K^|4yv~7ri~$ECM}9mn+}5cJbqh*@?|+ioonF(x
zTRTZHql`$uaMs=$VY*7ijve?|@oX_kYm8dGpkKxQ`Bpsy6{-E{>7<T9(My=?M~GIv
zc=Jy0no~QZn@W~w^e4V(p)(xgm3Mk5zpj1!dXOkjuXt|*m=DZaAM4aHPHPQMZ}OtS
zrPImjZyP^C1dKeN#fS$HUFAG?IzI{z5GR?VZ7JqDMvKL4m$ERw_3UtM?nPwzk@VG2
z`iIO$g@wTe;0modz&Z4eH5fKP<avAk^Pc7FI3t5*=--x1+xJf2ljgi=-nv71Z~YgM
zseewXcf2tx;%s^5biq*X#CD76ukyY7_YF9c9akCK87V16m|-jRy`xX~K{s!osp)=y
zu(?-r#NUCQU|j-MahOxZ@Q(?+yL_s*0`Hnc;Kuqfv(#Q)e;K>S6Qh&A=s$Tz$5aTq
z85nA{QvTV}pVU(qY}{4-k5;}4N0F70(1kqN?w9Tl8DWR5og7?qw)9%>xXkIGhx~Kk
zd=^KNQ|m*%<?Inn`Tb4%$v+mwrEiCOI(;_0du3;>^3={R7Di;@k5XZ+Qo`Vo<_u5o
zUy>;XT~n_}oWwIuNcA_;#*ydLvz_QEZ3uiv@GJ4!#`kbY5o#k0*h?Bd(2l|}(y?Pn
zu%wg6v2f~pa0^z?&fncH!Srh3q)uL2x4(Enwzi>xt&35SL-1Ur$U?pAzGNyN?YAUS
zrNQUuWm&$A>2K9~!lYjQ&F)RrH7%^M^<NzMP!Tj@V{5E_dlfYKozm^ocpH|T{6;wE
z%6NKBdNpn^#q>%CPPE=$bQ62oYhj)^{%eB+cjBI*Z^!&BVF=3POmy}5*&8@jPF@GD
zdR`5cgYDz*N_rKB-<7ogbcvJXXl4oPJV!)8NklN~R;Pe6%EA?mS2B5s7A#m1;r6TZ
zGi2%g>C%l#YwFQFt~%<^B3|0rUohRBk7AGA(r-FC+FHl^a8YTnXq3;i`<-7DdbvHz
z_+nc3<BJz!(QN7HsR8vJe=mQjeQmysuG>7CF7{YEuJZ5Tc*`)0IfGd4M8})os)%hL
z&oX|@`Kom!N6|@~$^BNK07K_o8!lp|uY9d_>OTIg8WDWQQd1YQtSB3VS|@)puSiTv
zZhKc^$`OY;7#nnT-SRB$y8LVquCmC@6%d!=i@6!Az;;VCv@`fjVs7}r<|Z=6KA9tg
z>MUv3JX~&t7AEa5+3=C1i|#UK7f9$&kaUE1eOHp8@e|j;Ab8oL0)_&A9AFIJNQvSy
zq2*FH=dFtx4xjJxUasb7Y$BQ&uWl)aHO?4!x^}`EbTU|8ByGF!z@?6<ZF2I*%FjCu
zxkEb)rCoTAWh7r$Htq5yC=s7?*x_yIUfffxzlZi?8k7@{{Qys`hNQuuB7&x4Qj$oc
z-Q95z_hKKUk_Ve@6n0gayk%-|E0~+#<ujlhhVduno<$;Y5b_fa0j}zkVAW&vwdZ?>
ztJed0``r-73HE<>T5!Fp2OOmm`m+rNH8lpRdmanx=70YwJ|-a!tBqzR>dRqA<QU|{
z)|0zhpT%imeKx=ehKT#&BAj8Pas`G1lP#c3jaNnoHX1?|9??%TiGl@RNuRiV<vp%^
zrO*UkP-)l68{V?+fb$1W?2J;-h~FX~7LQsul~S!ZOS_(xMbge<R(W3NkOXTIWQ%`{
zuovziyO*skOp2WWWe&kVs};7*1n$#Ph}O^FH9Bof9Irfu#m=?P+Doz&cynLFuVoP<
z^r+|tIXe}dB8T(mbtB3tm|CJx87J{HSbv#|pU5PzQ5<pTi&3<DUd$GMmXhsDIghRU
zo>e|7qB))A1xGBLUOw)~5%(00i8lzdMXLT_2<v=rMVFYG_%gHir(2JJhid?fVN9My
z@K<4FF=q0c<PI(-xJ%!t+EgW<+}5I9^Av;8eA=$G)$Lp9UEJRI>j!p0NgllMwpJ<>
zFEWIUJD|?5`D@54qVN6(zjTvx1~L<M85`8X!-hX`{UHcy;W>_bSbbq4`aF5?%id)Y
zgBJnPZJFnM4h{xjhwP1rHC$$)uy|OIY@Ayv_R{F{41N#{9UfrTOaBXQ@yDM|Zw&@v
z;a+_)TX@KUVP+0a%23H$&E2V=bP4(OdDX7F{OVBjExM?lwZW<505EVhBr+PutzMt#
z&r$Hkk8PSSE1S2GLnwm}CNG#QGarf&Cva!kA0fDUL~#z^pXJBk^_H{P#Cb@*vyMR|
z9vpw?K2)<WcO=XpKw$AlLyzU3OF_qT31@rD)9thRgLm2@`DOIvYqN$;=?K0^pGbmN
zuW=negGD-(IIyL&IU~-@&?D6Essp;sDt0+C?KT^_vp1`jZ*j&(C~VU>PK-HPq>sS<
zxszv&zHLeB4{>&g|Cel<&Uvb+zp;!kIxfiKY$@Ye@#EHSpsQ?2Yh+RFref^N9Ts<&
z8-D^gE9^$7|NC#-5m|><Z=5|vt0ICB>^=Q%q(8!WTaFCNS3ez&H``hZS425t#uKNp
z`vP0mJ>xUcR<w*a199*sml<3dx#trEU5;gBOAZ{4$)l_5Hm437YAyNZm6jh&^i5mG
zpWDkd*D2LWol}(~>mX*Lv`=>Bm$W3&X|Wmc-jyJq`HS(UUrJ_gq@=W`(VPh*LI@P9
zf`=`VipsWMd)?K6zP!vovl@QYdAB20Y;1|IARwH0B5t!caVSr)P!(OYG1)i$s!M!q
zDIO`|8&`a2dyvuZmX41|{V92kwSslM=a2c6{wcOfTdv+@Gd%jhnchOz70~shKK(?a
zl2*FOy@_rkSAiw4a#_$VI69l#jj5gbvdGi<^J__pwE?~?{6_v{6@P`N9!%yRj2X3T
z#vdNNJrKBO5{&pWsE6L%8bWWxZnBzcW-KXlWHARD<jXZoT-`S^p80$;^_Mr3{TQBm
z(-K@kUQa=Lyn6G+_(%R5Ze+cRy>u{%sz=sF570k!CxsHQGv{?|!sPqC$?6enYx}WE
zqv#1=y-b(q;?dEf^zmisg+??M=1TNU9x^kzv3@L)3YPa}8Q<ELrr}*B<gYgvuOxI@
z3$`qipR6glNQB^*qHd2gf8tu>gBjA=XS<3;ylL2>n9$DuIG&G%Ipo$vt<gMFzbrfP
zXh3S>hE&I={RAiT!|Ulh1QOn4OAq_zS0`&|5gYK7ze+gP%qFN~t(NEL#4fe5xbv&*
zNJewdOUE1GwMrB)!m5{AE9qMQKrtV7ybJSPDknAAyGi~@V7Efan5unMHP<%P_AESI
zu(Q*$QQIw7de3c7r`+u;hRQFtIH<9y5=PP>FM1vfX(}a<mf}ESfJQu;HY0r2HZ(d*
z`QTe}jOSN>y7hMs-&2GCTJ_H(f=Zdixh(6`VVdE-VtZ|_wbujc%<kFM9Yy0=4YIV3
zQmf82?Rz@&9j8i8C4Gsl?cY2zs;nqi`G`~B_I33RUOwk^hvhhhNGt_UuTMU5Z1ZLA
zE!@Xz2aifOWUIQHnf(|Lrzu}G0i_5@^pne3AwG`BlPF{@ynze-xl<IE#o_$bUz?8%
z&{N7Y?AAI3h}i%BGdKDlNz*?s{x1%BkUQkutrV&aSl$bul|KCFUERTPkT!i_a5(OG
zJalHp9tLund;YDNKICSRbV(#D>ps>zvAuzNnp()3;mpLyo+R-OBq9>b70;a}pvB=?
zoQ8-PgcD9}=^tB)ApZDUJ6Ge)gGWqGljX`=nV9oI(l|3`HulAhc2la-mtW@U*x6?l
zyY~v5homqojb<VQCsjw*YW#iS;GmsX7O%_PpG`8>XYhde-cR*tgVyo>^bnA5``SIo
zN{hiBz1cQigYcthopNtHL?ga+8cAE&Qel%~_4)R3G|pj-d*g3BM4+k8FhcLu=|olY
zHF~2dr=)NN>;T4(JpE^EJBA;}>Xicl8a}weII#AyNiE!??A11{z*qkgwyuWj0rcP?
zgcs`9m)2*jQW)3!S|WXyu77b$@0dJHi?=<QtF5V0(*8pZnEG_y#KB1D(&?^+6d8%E
zN**v1Ivwpw!qznPwytAu*BcNW-k4a4H$U<gYM~z#p557ph$D(gj10jk?bq5Yd!kJ`
zySAn*k#OW?+I@w<q%5TJ5no2=;G&Fp?5J7_06V98eX<4>>{h$*f5MerAbw~muG-VJ
zihb6)YvT>I_t$Ax{QepsURj)wCMhbWFign>jdQhcig8@!Kp<Ehw+`OBiZcO519Zk{
zQ(;?h-sdtU1hqrL7qBoM+2bEWy><w*v91Ol|Ci74qXEOi-@`j5ttnjIhXd(dUt!mk
ze0A1&Rmj4W=hUy?wllu%wg1Z&+{LEKuX~f|<G)cg3+sfFp@LN0_^laJ_0b8X%^?{t
zPUtFp9`QXqxN7UQW~39p<6NQdB_8_(z5Hx&@#)|q;7z=sqBq_EXqh$}A<_c0eE~>V
zY`0W-P!7=jLgG&)0=%-sBJE;}hiG$l+lOeybjK5)hiC}&Pv8RY5;qRKnF`_w?d#zW
z47%Qxfd60I^2UF9$D2o;tU_zFr=|L&uCcEt#?rBr=E<nAb4&Y&z*g92R-%8eoN$w-
z-VA+oD<)gn_iYIp>p}1A8Lz7E&AnX~aA0g1`6|N0&uhAHcxKhp4+rhoKW`kkTp#V)
zum90^dwmXO=Dqx6<P8jt<m&nxWFyDCLAxt;pWCc#IDR7wh07=pLAE+zGr)a6>*{1$
zQ&OIx=W!5#L-cP%mMJFK5WK(Je%r@Eg{8t^PykYo2&714$dgPtT+;-;4?6O11(m5O
zY#87^xB=23R5^X)jNuoVh}oEkyCd{P9Dj)`4a)JQbbkJRp2MA4s6gY$xVETI4xz8F
z4fyZ;y{<0v+Z<RB@Du#!So2|P1lO3RH`iEb$)hfal=I+2=df{f{EyS*opT67KMsxt
zHOn1qo|InYp{35XL7$rLzi*RJj=Nul<CU^RsqXNl1fx(i8J?flOIN(m53k`L1_10x
z?yBrvbRqKe%r4C%#3XC(?d;tnzV%IBGM&tGFj-NNF09k+E9v+BbnY{~ZXC~=@8hFT
zS%AMb4xB5rJGs55_x&<ocM&sT-MtFf5M={h*VZ?ep^}fjicw^@Rqq~{(Q*Yv%nw8O
z6^W_vG%^-BHLsPL<;7Cn$=dOORa-BI>QO#Jje3CdVg`KA8ybw0mp;mT6>q?UtZ+<(
z%+cXdoQ(hjug0$kgNi4QohF}r)#vj2h?#7Nt>-(L`m(Z(omGXaSdYM035yCwG2ini
zW)Y3pQ29l$megy@5Ua^hZ$l_8cx^7dj;FzC-IeL3@v3E#%--@pk*wfr&WhBFSKIuv
z={kvNKL5nA8+x$~;6PN9F>`6;gdh0~WMIXdB%|-PYio&S+)B(x)t7|h8O9Kr((<?I
z>!N(f7|9ZLo{J3x6C<)|J{AwTlKpD-M+7AmTw5sD*)M!3>Hr24O??Ht@K|TP3Ue2P
zek?3t+4al<q=Jp`XmaMeDZ4o$tm_cju_)yx$qv{m4#G&nP2-E(t)XqP-F#o?wcuq4
z>@M9eMqFUqf6cuWRB<)E18dm&hc~?&h0?A%XUvm$agCvSa7uo1IT>cCMh{WZT+AEP
z5h?;Bw(Z<Bj_oWnZ4+!^6)HIIvM*`z@EXj%PB%*93752kk!lR7O4f=yWD3^FTdB6>
ziSEx!z+FChd@k3zhfhCm@LKyw-@Z2@Ux!UtnHq)aadyhJzLe14zOr3Rmwi>Ha&UI!
zw@%A<@mgxU0X`2H??T2N4E$)<uiJFDt1|-=sD;g<0T*n($9NT<5g2f<zoc=&vRqTV
z`AE0d$Oz-5ywI$9JOR!wSkS05DQ#Bme)NG2S{VQ7hX`agqyrul)wa6biB(RV&r`Oy
zT{M>-3rBb;b3ZRL(ZbkJ(?eDPCiPB;9!&kYEZgthz22_jv10Fz`8$q*(vUk9s+j$V
zl>0h99kUxvzJGk+<MS4Uvdb$oqXKL`w~|IK{~#s&236NKB*ri9iJrf1ZTS$W2JWxB
z{s@}I6fD`4u=~VZhoVYBg5_JKJmRx|;IFvchU&eg>0;gB@pZ9_2{_i8)8tZQh-STz
z&0UUwHS`w6I<QuMBMAZniXi;H9i}28uqBYRQ32w)i??0vh6^bm=lo2FVhI|(VY&~*
zV+-mpqz=x$><zLD9n5^W>ab|i+&}?4Zs5Yh^PRn#R?H?%4Hj31QQ4E>v2<Z!)x()0
zQYTy~FR;wkukEubLCJT7_~sO`AsEXy{_`VBC|;?@edCwNxkzLh=>V!x%BWj~X?D$_
z?inyJ+&c9+P1&+iOKx=n7J92*JV|c?S&0aw$2p4HT*Gmq^DkfrnH5v#RU%9mWdbQ;
z_Wb;(MHg*-?c3mKYHFid)JE1m3-`Td=IAi2{Hc_?F*Y|{h>FNeFnoQ`<5H>xdjJ?s
zW`_X*NanOOf?W0+4}J4`**$NZL$Z`{I(t`jHD4_lu{Y*P>HuovA(~tBdk?qzE~d$)
zF11J(;!a(p3|~g6ZqwFZzb%nV=ep;IYLCmz%HM6Vqe6E5^)SoSM&r|3-s8x^5J<J}
z_xE)h2t@B6mA|gGqEkawnax_vE*(lXuJwE9CZ#4~`4*af<7u4dFl(r~PL7K<yw?-W
zf%r(<3ibaJeidwEqSOt7*wg!b8S>x>z}hn9vebALas>V-yW;%&-X)j)XaxyjrC;I~
z$nbvF-6Xx2?UHuGIepfPDUyT)AvJDjE1bk*mloFtVBnx2!nX8+9hX5MFhgpWo`7jf
zag-?I_n8K?=wHeK`U~&?FgY*3m<c?Z2vDk={Wr5?h;FW_^J77wE^t}bF(Hb6aL{Cq
z>+8P0^9X-Y;hU?g{(3Y{w3U0jdCn-YTyeXwGHi!bzLLlrHwB-7TjF6LfO(%N#-`Nd
z!3w+bn3bP62>kkbv;1?wx-4vY(|@w5i&tQq*=vWyTa@b6<kPtc!x;HBZ>}$wpT-i|
z0##skf;R26??Dx~9c7U`v~%WQsR+G15cPA%Yhl<TOTi*J*KV-+zMSS)h=dLn*G1aL
z)h@A>t}KV(0NpXF?DKg&;e$XzFIz%y_Zl*<UEO(IOE;WkpQ`w`I*H-7KVoCJ^W;kd
zx^*{)gWRdJ^^uf?0{#WcEX2&-9;UD^)T(v83zkAOaxw>nG7(@Xu5&7}POfd~spUNo
zc|#pn;l}Z3q+@D+)F}lXUSBcC?5?v&T#0g}wKp1LgSJSvFs2w)yHC)u)su-r-KUj>
zg%;B!ZM<zvF{ZAf3uL}-o+i_Y2P@)mI^$S%>J~w*!Gp;f3WOP2-&+8}JZH%BKto+!
z5v8+DuZVmuzmnnfUZN*urxL+SMWJld7cN>jSKBg#%sV$0tgzG6?u67hd5VH_N>#ZE
zi;Cr-hH`}-J)-F<jRVRPFozYoGI>a0;8-OT@1*g=UJqo6GSz2F_x%xGfA2cjG>t~2
zjJEX0TWuk<R(<>)p@Dq?PUieaKVF_s4t3q(XJJ`6+zmX|kwOJ4TOwJu1BO+Fnja*V
z_DtRUnEYQ6flUH^Rqk7vtj`yETPuMn??$YGeJ$?Qk8gMGUQ*k`=%J}NvXlu;1eH{j
zTJ?2h!2B1g;zMh03xs{J($Jh*#TokP^l8(Ub$PqoR!d{zFV=N(!Q!So^q(LD4mw|f
z?o!t9C7IyFqnjR^CTuU7HVKpl-#lBuO!^LiGxnNvxQXX#;h{<J@*lb<B9oJenC4^s
zzTXcFc0F|~e1>SlKUIB|LsZ4I>rn`j_pfYcf=h~S(pz!v^mSPO!TCAFU7TAJEmyOw
zc1gW_N>4oYn$m;->AokhP%(_;!A|1mU&qpY&dEUyNqmvv&C!nU)O?FM_!raD&9_3K
zYk!a>o4iXDRJX3$)J8GH!L2!i%@8q|KcDsDbL>7Wchy5#Fh$vRZfD=AK3bqrXF2h^
zyPsXk%Wpj@Ow!SMA!V8C_TUQ@{5zVihKKzUdUIdcEc9&I$25PX`938Ac{|&W7!Rm^
zjbk!78dy9ss3?bkiABFRjWkzVMHN4*(JRB1*Z39q{_JUrY2{wRQ6I0$+xz#d<JnWn
zzSu#P+JMB<=;CRWL)<SXyd-!sPmWiC)JGLPveXSeHfo{p7GM1Meaq|YBs1h4ChlZo
zdw>00T802+*9HD~S5i7MdRx-6=p#A)?z~;RO;_r7cQJ#KkckZ4y|;~hv}AaO8IH2c
ztco1c<IUU|q3GqpIz~b~CUu&KtW=w<-hC@kkBsQlmY^qVu4Ff-fUp6!Ba;p7FB(WF
zKR)5*cx*5sl;8}gX>DnJBgKZ6`q`8%FRm>gFOrP>Qf*k%9WbRA(J=|n>u0?h=Qwt`
zWj;-$>KnE<?<Q-{_cd1`Vd;4O>Y3h*jw&ImL3S0n2WmDD#*O(2l?*gXgbdCGq31vR
z_#+0j5^-khEL-o$8q@hNO&d%hSMNozt!R-+^apg|=(pU8dA8QBZK!MEmHnEy|6NIT
zQ#RJp*-3hEU&b+&_Sa<TS|)P<x^}LwHHg=?y8)~)Kh>*D6^)rjDOP|4!Dk!{N?!M*
z=C=knG`#9vhU!E&v7XVDhvn0GN2yQ);A2^Jt1}zQKNN%1M$*w&zAGX^C#SU2r0x_N
zX*O2du$6+Dt_|8asve-{`r)+30B8M!ZFrzXdxvBW0u#Y~A4)IG`eXp<!7bW#R|s%s
z&Z>IOF&%T=xz;_Vjvnr@IME*09b@&o^5V_OR(DeX^8X*E(E{=?GP%}svE`2q{!0sR
zM_HCH<t}(raL?SPb9-AhX@79|JX9bBs?#S2Ry@EkhxkqJqQL$~Ml9hh9;;JkeYnu^
zeBz#h#t3deOA!w*!#%c_=Svrr4fs<<vDwX=3EjeP&Or3PTV8r!`(CYrw?nD~BH*XR
zZ8~PwQm@Ynr8E)>-y1rS{}EG7?rJUCU=6d0g#z5B&Ijs)Z$7tJyvZr$EFWI`{O)17
zvfRb>C^-mO@NKs9{ge|XzEk71s(UbTgAr<N<1*@R4hAIWrwykud-MOI3PX%oKY*Du
zYISXgUcL8qcahL(w<GWa5N~-|{#%>CzQxki0->pOSxjX%z3w?ADUe{Dvs9G~2eSeO
zDt~=x(u)K#__-ZMN8^_HAkr7WI5K-~x3sk;xsFsg@&m!QA63&ErsndlHp7zlq_I+?
z9218^iJ-+Ln2A69bV88Jvk@)0i1cRzoETQH`{f{AT37iTp!KtX9}!hwSI0z(oz@t>
zZEwT0=gDTadE4SkH9){fRas7MiyE=K@pUK|V8{EY_(2`Ct!-7E7r%*U2teQgGb*8q
ziY=z>?kg&xG9v3~gR`7Jb?rrpS_eF=<D1l^#xO%Qsu66<8vXO`p}FuxVn3u_-1=q6
z^qc9U{X8vWeaK>t`3u<PN74LPCKnQ8eY7*APuTq}Ikrs)iN^qB8lgvN<q`6g41>Ze
z#BwQjCTmLPZW3k>$mHaYEYcGZh)NPDX|xV+PiY|ICbRooVi2F!9v?W6omN)zQ}NmA
z<F$Mpi-EqV?(OENi=LefxD0ONEvA>^v1Z0tJn)|wS#-fXmY}GTmB&)Wwf26sYyVK7
zu?G1--^TZu2Xv_$7@HS9ji{^C*h$UWx*>%8lg?BuDB)i9)|7&4#oH08Ho-V2$P(j1
zOP7dW=6or<`|POVrdMIBGWKZUQTHiQ^cRFYud4bRV?>7IQ^oxPa~<h1FqeNgRd6F8
ziwwPGxBn}X#d1rA&3_pX8ca1F{rV{H)TV=7Tm<$|$#i-`OGgKB)=+?#biOrNN}RQH
zhIKM5Atsr?(b-YedFqL_5_vQ2Bx-l9%O+Fyd1aCYG3>u&d!BSOCIp4@s?d%i#(&1H
znDm|ay8o?P5M=;;>D+9*DNTXcN=$L5usLHmV3?&SnbzI<(}m{a*$aj7S+@0!k#&bq
zn`GH@RPT%fuMr`n>_m2gm^qm#<{Zgh>nup`ayCEg+OkD>U3JIX`)uAATVGZ(KUJ8$
z#m;dTytX;*ntwFj^%qAxkA57d9(B7#x$@G~y8#{Ka?Cy<L{jkx82FCDaP=uOuIHI^
zGds5hYE1~tyc5TGOn4F#U)?NNf%0LcErSL!7(M?A{Y|UrN;sbk<y$W1yalHJsO^-Q
z@JvHozb@pyF=}bc@~B$TStl-OROhs`!(`gMP>&AR!?0*bNFyK-g`-e2Gj}5K^fVP<
zh+Cgt@#xiQ1Os=`T%n916e@U=8qCG<^0{ngd|EIHBKx}4SUp8*RwMaa-xPjR`rDt%
zi$>GFm4pgoC=ssH89wI9rPw-$r>m!cq~KFv6`MVNaGr*bA5V)uxd#&)qykBl(3pEj
zw=3NeTmJ-0RlhyhK+(K!*q89yOiEn6VGktq#>!>vycS9ETINL90{QsCAGQk=NW6=)
z#*va1(uT#`=q=35AY!J$oX*kl+g>ZSt~9m$A9y10t(x}6&b*)D@r)yWX@BSk%M9?l
z3CMR(w2LxR+8-5wu@LBO?oh_aPsQGoa6S}sp#K=sTth!&#oZ44EPY7L(pjetL>EG<
zM_3sMvlj;S8V@_icw+0Lp=b(B1=s!o#D2+qYZ2P*oG7T%5lL_21KKo0S3q*NUr&R}
zlt7tTOzPODmT~++g~YLM4Vd-9k~1~a&WO_K8HS$klD~As%`SKNm@W3;F^fbG&T2cy
z=#~AgTSIta<qI))+8dj6PR1RxHg;tZzBQZIJ<Jx1X|OyTFpPFYm$EE~oDj|1iv({H
zJR6LT=cVV2ju(v(kLB^j^cL+U+Q$tLMwVWuG6cjXt&tgtL;1*d|NMH5mFp}okz=Xp
z!BrkJL~1q9ifKSSsE+bxu3!2dHglthv*+qK(^NdpVUD?uVTkA@)H^>@kA|(jnO<H7
zf<~zfeLXv#Bz^aR5g)+PE=eS1#!a;<2X)UOH#ECdqp;dF$^)2yCSgLm_!hc%WxJbQ
z1(ik{G~s^-_A3?$kqDt(Dd=Xb^3K#Ozi%Shu|T-##Jk6P28W?2Qxee)4;m&ROKZOW
z?tOGhe<)B70KBZ$e`xRR09WBs;eNo9rV^U{p0VwZ*Mr;i%&ZA3b;!Z6CMnvQ_X(W0
zM6X&dX-cf$<_Q_tbfzBzEWY&ZTZrdSzB{~U{*dhWQAAeb#FY=2%LrB4LI>+9=B!Cm
z&R3iT_re9tlQ|$?)1MduA%cRAcc&HStC#&c_MuPQI4;vZp<$CfOdB~y9j0bQyy;yA
znw6~|3jf_b(fO^wWrIsm`c+@_2_QaHkj>9nJw(sCH!P}P5z>~v$<@%U`LHzNj!PdR
zL)5p9d&gxZrvpAi<N<OBsBt9iNnCH)`3%$2cX5H3o&9rbYa!_6*vRd8X~(MV2$2Ig
z?eC>mZRE648GcP8y;a|i5Ep6LJ?izZBZgfm8l|RlwM4N7EHnN-i_;o$f<MDJ&NLx)
z#_2;j4{goIOJiw}oVzeyd;5MUc0`8ju!Gv=<TK=`O27U_oP!LTCcsKn>9Mo1@sMki
z4yo$y?d2UWZ212}DLq0<dqF@*Bp8GZGBri*Pd-?khhn|bV8>TQeq%8fex1M~NL_oQ
z!Xi>fnr^$XMMuZXf0&bN10M{IWCBk^W0{M&OIR6}5197C5^sxBaRlTkJbXMC6Z&bV
z+B@)5$q7T|H<9Qf#<0$>%^~g>&^B;Pd*MOq2J4?#W`Bm@@z8d1ZaCyrFaT+O&7U3R
z7a{lex|NRFlWkNT;J_q@zz#3*eVLwO`{QYUl@qfVO0?1&hY4@y%#8Kx-oO9SL4{g{
z{(YmY##*+%2pCqB>H`oUpvM3?NkV_G`!H~#pa8*gxb^i5D~=Y|p|V{rv6DD$RMpFu
z&_ADO-x*Na_2?U!Z8FEV&Q2dB@@@m_(#+nx1;=IC4}Eiq#arkSD5THK`!$>KBO3rY
z0MbadO&+QkzwHyi0&&TW6Ca7R4bg1?sPT1e)NZo$CNnV&Dx6$C(^ev(@Sv({cufxz
z5LSK8w`*88Ms?zuMJq8AlFY7?Ykc~AgS=zb*SqzTvluj@B>R2txGbq-WxIbHN50Y9
zLdEF2Ldw)QEk>FPt3$cd(*PenF6}NCRg*lkPA=3cjbJPtZxFfaZg6}H!L;&AmlA*F
zEZ6RQr1BbGfHaWuH($0iWt+Ai_(beFm$+7VQs$<7u-TJEA|Y4(#AoMAdjQyx9%LDy
zWjY^p1Hp%&pb0$Dk2SC$xII>0gfk_03AK<o&ELg&<}e^1^;hSIpz>R~d&hNuo{g6n
z3@V_c3dw@BknTgHVgF)p3s85Qe&i$o=`Hx?{^Q&)#HmElbARge*D3^^uo*ukXlkY`
z(_c+ytemyqefzq18O<~&x9BRIROCapD==<VwH6AYaSyBN(Y~%DqKzhACw2NB_7a4H
zM6LdvMkRCWMt0v6k41v^aNgm(aK833c^A<t&Cv2#QTV-o&7F8<Bk6&tdMi3H15)z2
zb&#O6{khTJ3bs+2m$y&_>X+xS-E9km%6jfpP^NfgT8ug)DY$0yKa?9Vbx@yySPjAA
z`PA`oo6AL-j}b@|70Y0yjOg23n)vIRUIIWE997ve0192G8fwcWx*@u}JiFtiFjYY(
z{l}t=Tl^ep8>IexA9Z`JVSn8G!lS<$(iE@Iy@uK0>na;YC*@wVe5qN_ng>a~zZC2!
zkes;aXA%_;t>NJ5=QxsPP=K)^38f--v^zpY&XJmIMTy!R^BaTE>RjZ|YJ!n4DNwaF
z3sY9ew((Fz^LN~Yzv_aOP5mVUBCa^CF(%Ug%*uybH;%?&2MEL~#lP52_M0ag^(XMt
zEH2j?g#r%NQ0ASg_`Y9u5cRQjqag2^2Vx<iHzI7FpMd}}z(u<uY}ma>$3q`Oh>(|L
zfj6bKh`I1=Im`GH&Q8~10R1J5=`XoI=kD|dq;8>Jqfj7pS^TWxTcw0V0-slYX{f~7
zrTaDTC#x1kyA`@f<jr9Ls6^~S(eS!i$@0{&c<DtBCJ4`9nHo*3h=q4lf&+JyxC-3C
z9kMpE^j9mJsqyscqDd&Mlil5Ye8yXJ?j|bs_u1<PrF?e>xz0G%OoW7>-+H{UpC>im
z!~vrD+w+s*0JS24Mv9rPqVi(UGD{H5l(tCk5Hp&_`b+!~MjYnpg*e;CHnX;bw@tO~
zh_%q1Bw5D?_UnZc$=PCZ^-BR<gm9_TZ+Uo_SlHN1&VRQu-Fy5Hz3D8)-(na^-4!z*
zZ_LS9m!6x_adFs>a%CXwPVnN&kFYA69<<LjEqj9ueW@5H&%KB$qi;z{Yzs8~IOm7M
zC~arMY;6SlzLRZJ5>uahGk4{AEND9Er?d-3?_wI_ZKlTU{w`t3;_uZQ+Sw#DzzM$_
z*FqwH@BDrYyUR<c`gV@5lThpj%5j(LdYBsB%mJ6tBA&(gz=}7eJ0>+_^-^<;{0Tsq
z0yC*YPQ&fCytV&Y!~|9vLPUItTvK~MZ6IZ9WanrxKkZeua~O`D=)40F+IJ3y;CO#e
zOvIB~d5Sa9^bawHf7joA%HT->NJ$sX$I`&j8eYl$k+6teY!tntj;wlj5-7%!V0x}2
zXi%ZTMD8W^{Iw2u;lTha-XD>O=&bjM_a&S?x9+NcBXz4(oX}d{p0a#IsV*YnbHQf+
z_wv@A*sQFp)uZ|G?*@}dOK7zI@EI}J-}NI5=9*wRAAc4n*;Z*{SxwvM3*HPP1pA1#
z)j?sg8_^!=J`HO0z^kK}n_fEZYuFsT$OcIelaMG=jFv%?cv|--od+s;k2Ep5;pRH5
zdhzbV<wZiB&XyE9@i_=F+!;LthnhCiUN#4@n><b<VkRSEsLuGV`r2|04+gQrH<?{g
z5N6dySgT-=EI+Z<cI?!NtOEt+f{RO|9$$u$x(@YZfr<MXRQVtSDHeNTT=h5s@E3o?
zhqc1aCb>A1`~$`F7y6Ifsr6BW!3Gy!%#GBQZtErJzRKd<ImX+f>}s)3FEeu)Ji)AC
z)=O&#Z|{d}Fq%uJTPJV)^(*ANK0%twr`6rRU}KLgz1#kh`a(fBZtBRei9u;q@y=_Y
zx8s!pZU_xkm&9lv+=bc<yzUt-z!rW=sSOHL5=d}bTHX`A?GJ^5&CT?ZP7c8$k&&aj
zWhTp2JRpTo6a4D`zAvGgFt}%e8#KOIoIL_9!3CXOD}GaYcBno8Vca#pEQ9Dmc|Tf<
zx_ZGTgJMz}gVW@s%VJnm^^nT3_xKhL9~fDc8?fg;qU02$Di>Mjwz>d2FQA)V-S7Tu
z4?m4>Sl{5zAUuCTs`BK>Ns`YpTb!Bv6f5+}Ad&mxf+X(F#+gf(o=Wh1J$gs?g9Uek
z5W)9WEp1#SgK$IN8Mk9))iB3GYl3XiidpsOjo!!N82{4V+X6&`!X(_3KYWbkgVr3N
zdH@o{g>Pze(p$Ud9v)YDm55o?HZWpos35&4>trJQXInQ-l6aDqM7>9gwp_;(dZ_Nc
zwo$agdAZCPNQM|y$ztX@(+%W+Y3GsUxCmL(t4|TtJ&W7H2FyKEmisp~^~MS{N2t5_
z108p_BYbr*jd4<sOvHw4N+~lkiiyT1KBo^eEJ;=K*tVAQl=6PIUmmIJ$N6zD(>Lyp
zLlk7~|59l*{OoG@Z<*Mqs+fPcLPXHN1Z5SXd%f;b{2x`9#`@kp3}zE@b%jb%kaaSU
zK>W|cu!`3EC2t**bM+Z)hLnFQKU%w1U;92tU&?5N#l{2|i>B*OxK5H-p&b3PpfIuQ
ziFpMVvi+ahO5Gsb2$}E|mzesXktL>)U^5vg|MFDLhFms4ku?3jAZcvhtR+ocv}dm`
z{|76w*Njuy+e?EGx7x(IRJTKgtj_KN?5%WFCdKpi9q~3pU$|v-eFB;{4(BWtDj!s#
z!{)wf@|1jTQnQvdCT~RqJ(;Vm6ykiLHBRmuac51ME8_Kgo|Vu>Sv|*=c?6aaFfLJ_
zB|`v9naV46j2bO;41D(2B0yC;e&=$)>tRRXq>i6z`AaI|LmVY>___PufFcGvn<RVD
z)WtMcE|6uHd}SL+=%Xt26i-{R3AHVF!-RXKnjBB9922W8(`(49D@<A^TIki+x4$oA
z#l8Dxhh|U6K==hsp+wkAed|zSN`fv(1Xq_)D8Pp<-Tcmzh~cW{G_hu{m^t0mS=v8#
zffHN1T+*c@1WJuvgQwNfy6v`mC2=%jHn{?O*(S&}qH52gi2%fjvih67NCQocJggii
z-HS@=BY$cB=h2o6fh>?z0|yLhzK_5QbEkU(3ygq#xI3Aq8CHL>46EL;pKqr@R&SMd
zW|Y1#0;OA&j=!$d?wpXIx8%V}4O{JGDW?L%j(eJ64bKXxbodH2KrK*kmXz;CN0X7I
z(mp}7))!t=2{mm;zkg-h($eEj!A=z%@upvEOxforRVsPt#0SoaNwPs<<75jqrLlTa
z4Cc|E2+dW(u?F_5?#!TH)j;yv2^$r;+=L>xvfj!j_ay&76n8YQ%!<u2jlssQL88*&
z#2Z`lpll0Lm!EUmb$K`wlOGlLiGy_RsyBBwJj?){($`%7voX7pn&|EIrL|Lzj0V#V
z+z9X}iY$RgAN98OfLALd=+dIjJ{7Qm;t(W*bT^Goc5lOlA)v$oS)!?z2|mo=yd~7@
z7Yj>g^J;*R`L<Q<R(j#;C|!Ojm;YpWQR&vBT0s-i-%#rt6R|gt*1%-Sxs{e!N)AG`
z3tI<H*XMN-ehQ+FtByt0&*KMYu_a8%UxJ-yJ00#-X<wr@{m@xjwI=g+ZJZFnKHuXy
zx@wuy?kl-fHZ>>LUZwQ-To-?fkqJ-MpMx!C8dRFrs;};Gv*^4o{Y-uVw+rG}6d^tN
zv3EqT=;IOD`lnwY?VC;e9`)LxTUX^S@tyUs7HWjk!MgOsR}o;9p6K{9^c%gHhX=C%
zAqLVv5au92$oE6soe5Nac1Wx33!dlq@q=?5ZR}I3NM|HS9T~53Fov-<E>4;aq3+HR
z?khQ$?b=qe=QKKq0(+>acIN3FfMsaOTvEl%bHF0&EIvdcS&Uw^=sLO=$D0O7gKD8p
zDSmK~%k5`L`SW;4NKNY&@MZEI@pZFLh2|?1FaEu|*t4?vNkJ2YT_DK^Zu{!BFjq<K
z)#~$m@6+FNNAs_tA)AMQlBlRy&g}>3;%K`^)i%ra!Dg=g3M9Zaq)!VCxbc`U&(H0Z
znd@9tA319x!Hhi5-g}n35Q_XkI{rSr{S6>InGL7Zxh4TSp)h>Fopn9EPU9Q9gv)VT
z#IMJAPsgFL!VG6Z&N#BRAN|N3@2k1aaK`pvjWgtOhBbO$gj18!;!HE6y(AMK<S1BB
z5(8lhe}d#Zc(kBe&>#!fG$TnIbmGBiny!N48ze|15)e537*OPixz`d(xmcw&75B?^
zs>VR+KC`or6*fR3ib<~iMWt)9_2dT!8TbJS1;K(3Kp%uN?X2P1I&wR!LX066GE7qe
zObaT1_GN4V4q}=teNeqcX2hoTrN>bQjYN6vZ0lt|q;`BH|I`^5`AkjSm#eNJD#}W!
z8?&u;xqvd!8Ll_<x<7~&H5_W0#&)I$vIIKPO#=xwJ^5f=5gJGEHk$04Uj!mHSUu;4
z_5YKZYe;E%2AUN?VCFzj3rf3LAlTy$k3i9dm9c{G(*(B6-pyvUt;D0c-L8%dknUap
z#c@$4gIEN(8LV(u;2_p|qHprUMh8()+eo#!D?Lbmp-^>4ilhF1KNO3wjtlfAD{I6h
zi{l^ia9TZk1*Qxaw~wTLrM!Dj`k3W*W736Ri!IA?DP1eu#sH3of_&%xI8R(BHNU(J
zQt4{BvL1o5zJyJv1sfH0Acx(5iNz<xLv_Y<y}1bylwnRBOJNaMe$2YI_c<q-ZObY}
zPxEA~k`RG7U^Z$^_xE*%?E+dxX6#e-YbKGhq3mEpaD(pfb0EQndPJ=9b|8JgH!GnL
zv<vr1l>znDYMH9NMkEq~)aO~B!dS)+=5kitvJ>=o)mcy#4+lP-+pEbBl-)FFmoh8@
z716Sx7dcjD4O(e%HKqGr{|JOGa(fPOTwnEMY6WdDu>5KdAB_LQ4}(;h7X;SfAcFlP
zyJyU!^@IG@4=MfpFYX!Kn$*&7GK*m&!_F41-~Yz>!n3fdzhFdpAwP=~jU(q_ayOBo
zzgh01LwUTpPPJ@9uKj4!s2c4r68`2->siXXzhVNO2mW2;_td9F9k$ZM?<U?ge)_><
zOugcGcsy94YG^Q6cX$6-Jg79Q+1Y6iaX9Kxs`rUcBdfTI-XNb5)tVM~>^*fx7kbye
zfxMlDIC9M)FB`C3B+0Qu6x1V;=%*5(0$o9Eh^#C#2HT`+-&<hR_UHJ;a^6D0H-b7K
zER2D1=0-A;wblW7P-*5a6-m_YGWJ#t?-^8;WJb(Sbm-O2o0GN*+fXq6NzUR7Y+sZM
z%QsBzxIMo43}cE&;;4Od3-P+~^7l{uKF-D~EAEm<@bsk>Ls(64@@Syr>lHEGiLSW(
zS9jJ5^Bp7W90FLvv47ep98*R3ja^A_jrV+>{KNG|Yc95kYh%$@{_>}m_Rd@0KdOol
z_i_Tt#BiRT-tF6gvKwzI&i%gM85dM6E@IEtm3GLoz1hRekRL9!ZFH^Vnmo4CsxL>3
zac7XD{vGE!vHb_j<81K^7L!P9)0;@4YVr@=5&OG(nNv+cEGCcsIcUK^xOdk>Po4TG
zFBqhwSZz>;1Gi13ue;Q`>mkJ%O5`^5kB(W4ax|oC7W+uze~LmE2x1O)Z}=@yWO~Ps
zv%z34{WOd<P%R{=RYlCKY)UU)wTV6A+{GJhK9r<_w^n!ko0`R$7a?)CQa|WmZ!E!!
zka47W{4qsJdKlL%omf^}^!u~<4{D*e7al<);{e;_3%`BW)!w;yY?*bJ47ytst{@Q5
zL3Ox&4*+m#$<9f|<75s<m|gY{>tmoM3*A|ugkcCMsRHE>FqwLD+v<)I@rA#2k{{%N
zoB}ea<WE{7_}{Nc^>b+Zj`*lyy`d9;KoF!t!w93eh1hu+zJ}wo+gRomFK%!at64(N
zTmPxEGQVB?Tc`epgx+?5Kr5xa;98`@cOL*_YEN7mUZtnVLW>f#XIa>wOupK#|G0gT
z?m4JoPg}j-NDfa8;N<lNgMJJ$m0asK@UqJ|w`&{pG&7*+3q_56(7>7_&b8zT(L-^^
z+tAq$)J<>tYbvy&J2F78)TP~9hh+w&SfSAfI%0qjVLL|}Ul~sH46@|~vC|TBiH|hg
z4P=ts0cp?!Q-`r<A8QRrw|VkC-~~i6?-xU3U3_rWVN>lo_m@YD6>rirx}GI}gjqII
z0hNKSqIfKHW8FaxwNZG+5Q5ar#xuz}8HgdB(O;N#es4Q57LYfAS`>jmA}K-X$s6~&
zpK>|ryE~L#K}N-v&<A8<FJH~?iPXc%l}%`H<Hr9IeCD%~vay?=en<?22C0^KAPGP!
z3js(rTyEB9*okj59n`RpaldBs`gcp)oGT~;yl%08J`#KRfPg}Dg{ExybIB<HIY3lY
zgS3Nfycy(40dwII>$fH*qXs*F<%@;f($}(KZ7{9&J&O<(qK7h2&z-W>3bT`bT&xw&
zZjs6HE-i<!HB<r@1X5~H=xQBi@<50~1VMTW9ZFXa_=m?d{n8IfZL>_~-NcC5E}w4$
ztOZ!o59-`;MAtwlqXEomE{LDzml(RtaBdr^s?LfxfH+Efr7k+INgmqS;2Vhqf^0z$
z`tHh$$3&Ey`=g+(3#c~89?jI6Z%XrzSB&}N1pQO$n!1mFJ2RyLct%{E;@z{MGH@=~
z0$_h2qvg5q%M~)%4+1Y#lc1kmM!le>57SiwM%Fa*uV*{xz_$qB6J4pQKZ9jLO$0&C
zG?AK-FpJ=HJMz~jWn47A6M}+fHqL_G@R#pLZ*1#BVKBRM?PHY2l$jo|epvBctP0G?
zM?S;n*2&tCEX3a9nxu7~ZvIw^{I)Huhd{6pVLun3YUzkOFaWdwpI3_xNxzTZkQRMG
zGn)W@Bg`sF?y;_-7-bwS61gbGjYK{{a^Qrq{)JUrbvQq34-z9lE?p5VTzx?_nWh#t
z#27Lb=_;M&>&nzm9;<0iL1Mj<>~Bq>r4lOZF|P7U98{W6n6NY2)V_r?6yWC`wZ5&b
zMSBk@O>cYS#`F`O-!z4R<JexDIA}4?4}K?<<Q^qYHS})4?4d{Ku*<uVDyzWbE@&ae
zQuNMXPB%U5?zVDzC`Sky!4yDE=TW?b_4hhJkk}*(GM#y6VAtG{^?))B-hgiV_hF;o
zXD9`2dS7yrNK;{QZV-gR$$?46Z4=rbzd!@)LWOOx91!60cv8YKvv)b6|8iRQcLR0m
z`I)gAXIwn6I`>r>b{jVI!vc|hFdQ1-=WkY2rlVs1Wb(^A$7+!(G}<R8+2>2-zLMZS
zgPH;Gi6l60v$F*vzy0JXSU@HR>y}of+VXahhROg4gFXZ<{5vs((xu;&d4MH5U(fC@
z1#mb4e-&heKb7u%@c=*vxgE3yxX6GcpG&DeF;QD^`14Pv$3Xo$UO@k#re+-;ytV4y
zutL6$02dgbca@(g!d&O@OWwbQQ2(^N=~`Ug{2x-N1CI@B#Kz?H7d18WLXJA|<!3Tj
zSAPBeT~c2PWyS%s@-_O5N};6OvyPTmLI1VzJgnI16fC_U^LMPk#kTu5MC-4s4ps{{
z+FwvHxtQgCus?l7%6d5IOT+4Wn3OmCyU{1+ZY2nHzoL30&q?EfI<?Z&f5UTqb}L8A
zHl)NSU2DAzWKpp4|0P?M`>qYs47ciGWyat5o-0{8WlC8bH^O#@r=W};e5UieTInU5
zjxk#(|F2Bh-{E;D0eTh2skWY<F2+F0%w161F<gPQgdSwRl*#W0=hxQa^)yCBqpo6Q
z*=7{S1FUl&r2n=ndmA=&UD7R#(Hhk;ZUU-7p5grwvJ+8LdBrnAyA(PWQaaPfK{F(_
zv?n_BS+Oi}?cUQhhju=VOP9tdJ7g*#2!Oqpe3vd*fzlf|Y-v65tZY1$1jUlyjuhgH
zMv==ArXl~98Koh5+FyGq?@PU#^PBU!lo*a4nSQ3dX~eU-va?Stq4LKs_^x+kKSUkQ
zn~=%qIU7KB05o-Cu1>k#N1qZTSd`0?$7OLkIlC2s_R_!<=}B=+W!K8bt+6AZ6w0Xk
zBN4lS^TWZ9LyiGesCIC8P$mOq6;v_m^g9+h@PlL$7C_^!e?zZ~TOA6b&<_Dx#S63-
zh#Bi?JqgZ@Tf}LV!8K67gucv$^3m5rM}tW(CIoLwUH~8c8ZDJ?4N0g=3liZ{HVYx5
zNTmSVG{M~<2;v>YOFb>*udycZik`PeepE%c^78Stu51<&1;&M$sz3$D_X;a_vwYc#
zOpfo_^PWyKbG=y2a%BbV*Wk6<x;kXd3*zr(>*hWIhHE$|6c>!l#z9sjR^bj3Nx!Nx
z{`5PR4>`wmY+!Wd(%xB`j+Ksp!l98p3~Z`e9a3}HmQaw3_}~UR1Xw46w^90qNA6hh
z&7OL9m4<>tLJrENuWx3hDdAKs&C4~j2gSF$5Hv5+$@$8MPMq^^k_)!tr6z=4^?JvQ
z!!g65>&g}HMQzG#)!fbuR|p>3U}eU*q6I7Njy=t<Y^z#So_Ff5gU?97S3D48@aG4J
z5*C)M;dtdz!z6?Zk2@t-D@UJo>vEhEOH+Y7ls&UEpfPBgzd@?AG>Ux85iJa@xF9JF
zrZ03Ss68F=_3z01zoX-WKb_wA*-PKMyvC0GxC`}I>Svpv=L12$+M{dtN5a2?ANwGz
ztk11kj%Z!+fHGO5PSG*2p0CRc<D!4MH{je>xkOi`%po@C0sZ5zhpq^|ZJ8pTu<6Ij
zMR}6sTGx%-^`|Wi3KK8lBET5H#{wNOw0l0T4Olihw5Boc{absl0ZxWlg+i@oqka--
zNhpCNLOQ$B%nUUpP)AQpHqE*JJE3(kb@i=fguZW!hvZTRcC+pV27vlBKktIugkB00
zH^0XBOUd^7iJ<yB#Q~(SX;i}UX=>%fKqTXW+ZG|9S@fdV+&UKK0(4H#3r)T<Kx4Rt
z@9B<O{5-0#?qB8Gt&^Xj{(r03rwPcvF<SFX6)y<tER9Xf8Q#6+Z+QP4YHQ`?c~|I-
zKzkBY>fbp|<Tv2fOoEFs=Hu-i{jdWXHt^Dp`@sRK|5k$XEhxgLoV1;yRX>BFM$DTp
zW?v6J3zaig8w{E)mKcJ9#a;E=$WQEih3RS+7DX=Yc!4tReWt1K;2I^?1JdiaZ2PuE
zvIfjc>Y2nIeOW)^JI08f{KDP1q4x}n+pQiQLH{5P1<TFPAtPpig(o76Pq4@H*~$?c
zU707J<E@l}ddu0v@{{0W8o(P3aM`al(|4%gw87^NI2TZ5-FtlIG?D{|I0YX%L}9k6
z$`2~li=d}(AQ~FNT{g9#MXp=`16yKk#k%x02nayq@G^$1!vsOR5*0oV0#X_p$+Gu$
zpxna>Bp(<5@pa}^Nhmbq<~%Sp%qV>)^C{Q{dIiG5JZLCi1NKW*$s;wrZM>5AdxGg-
za&|C{gnm#rw4S;gZcl^vge}lz?lQU>vB-^WkxFO>1IQRp=YO8zg^NO6O6Y^46>)66
zA=S4IQ-9#&S>Cm8diC;<$c#r~!^HP3xJo?sRb1KD&=DSdcEOP_GdXOY1(%z3It@oR
zO=P{wF0Q?r3&>p5Ko&Dpf?o$Z-J=i`+Ti9&avl=7xzpg9IbVI^#Cc^;_p}i^Ym#Or
zJ|6M*SMG94i=$Czi>l@ITFnVB|Buz1fA6)wBl6zejzk&kn--Vv(1_4_WKvP8_dbfm
z>6>EK0kHjQeip=gxtP*xxJg7Vh>d<l>_;4^<hgd5?n+Z*yC^Y;h^&cz&3t}g?^`UR
z%s1cMe+wn<AvCNF=%D^7$FQ3&-GI)k88@+o#KybW;_rOoG(}!J4t&%DD>tjFJhquw
zB<la6NVig<Vp?~8TXiE=Hv=F#V#ckGIF^9VP=9A1hce$!s?yU9yfU=OE1%Vfb!^0e
z=1I3Y$gMzXz91KdZBW?FElF_8bh*Ga6p8(ypTSh$*_(Qw{vzAIRDgsCq*T2EXqj0u
z=scjgI~$sV0~KWxQLBA3yaV)NKP}t2+uit~K4E9i-!|qktm4x-Fg5u}T}JxA=t^4z
z(@-#t*d<)V=U;>6uOXYAw@PVP5Zv7Pv`-@DJNF><*5(!HO;^NzLWd`GH(Tsj!<q0A
z9ngWiPzHZ7ANpcw&V4d=ybPQY4&uK%E|WwPk`*8<mQc_V5dgv5wz7F1cX5>m+_l{m
zR5-k8oBh0O<KJiMq%BfUfe-O2jb5fueSy^`LfiVuQyqU|XlYxR2P&6?+i4f2zc?cn
z?7Kt-bbl9Ii5(gA?VNTb+G6Qou|aRD(rGUepkT}%AO0*}sZBYdA^4Ko4pb1dLr*Cw
zA72aV-;S-3h~sxQ(Q2MzQU|>zCB`M=yzCc{?yZn3LsK0zB-3F#qJR8v?dDic3ERy%
zGF?|P4-_;IISpRW_zq2wSwPtUv1CqDnv|TDA$e(x2sG**b)aO!iay~n;0dZ1mucf9
z1~6lw=>*kl<gH)L?q}{Gyf#l9*N;uJ#kM7)1Jk*v)J@7?aj~~r`#<c@zpsN^Qs%Su
zH!CraN%jiko%vhS>e&k>#9c8HRpcBxd~74(H{}&bnZ93{{lxw@__ax4=-EJ~w;k1E
zCamx5aul6W7FvV_ZVw&po>E=I3`%?4Dk7`)jH2!Lnda|qn@>yg-<7CXmG2ko{~Z5b
z_anYa4E>wRD}C;fVhT7f?ou$lT#o2fpqUyhn$0wJm#=qd__AXr&|?uM;hLj*pu*zP
zKEZ466v8q2WrA=?;q`8s<U1j)zY`h5p5sk)GvAQ^tC8!BYI51yp$IC)0*F!%LZk_Z
z^v(e(0i>uwIwA^Jf*eo;=}qt=J%T6*s6>I#g^*AJC|!C&5)56C9zd$}J7Ilyo%?<3
z`*Hu|U76Xl_ny6HKkqYZ%{z@&C&MAE<K$=KnT6CCjC$ybmcK1hTRp^@lI0TOyR(>D
z6*Tfj^LK*T3)~WWtv+sTk8SwM=Fn*Kiu+c13Csy0A&;`RMsEu0P@cHkP=4ITcy06H
ziB)3=Xv6C-HKzE1m6Xi7(xQS1#!3hSXQP?Dm3=YdS7FVd|DQ>48OjSwj9t|;OrY(@
zb1Vm%lQf?6wE0}3yLeXg&aEap7-9Z?+L9RsxF)sIO(HuE`R<MH=DIZ}DvdShvN6R~
zJ|Ge4s_Zkd)kF5!;DqVz1|^TkXkHsclB9f!h@femul$%^gL677i?13V+`?H{Sl~{~
zd4vqSu}d(#Ra1P9<CHRu3qd0|6ij~B9ruCFIMIURqC7J2mTN^~@O7iynY``6L(3cY
zGYXfh$jbMB{Vo@;t+~qf$iTqkp_jFz1j<IUWc9^K{|k-H*6`bfI(+zCHodE{47%2i
zi?vt0<c5O}83nb2E;^e&z8D3;n%{Vt&X*HN{H@oZGZ{C^h*Mxzh1kL!tmj9S7*uQ#
zW6G_H()fihTAuN0F2#<MZy>jA^26c`;cX?%Zp<@{42|I8g)cRQ3GzB?CQbI4nOs}^
zwAnjyj6hAhaK{=|yJK8ZBSHQ-zHjrB_tOCmD#Ph<$R9bL$kdML&G`iR^muulj1I%C
z_09SIDUQLT-clPxkQhF7g80ffu#@di^O}H%xZITE8r*+0GU$#z!>J%eWQq1p@yyk7
z@$%yF@)-YIyeRL=#&$6BY=z~5o{0YikX^_sv>AU<Qf#h5Qgh!0RtPnSm$#*{#I;P?
zIo$Zr5)uBb)63~jbspA#;A;5dkE<6U<5{OEG6h5DiRfm`m{9gtxB72x{B5<<_Py1Q
z7j?&FKd^E_>UgIziZE+pv}9kyxiLRB-RyVtR{p6BqfnLBRMWs41kI7UFCi(Y-8N0y
zXh9trJ@0Xn6HI#e{+ifabGGx1*vn-9m?noLRJNz*uK9Auar&a?WNba$+Y-)izRckb
z(G6#E%P~c+dzy--y>^ou&|Dz%Lh2%6><|b91Qr4T>L7F}@NGaN0|WvTbiIS79yNKD
z=74(yb;I>JVNQ-hN1~|QkTx$R(qG-*HWP(|ooO0=?jJ`$b-=63%y@TYXKNISo^*nH
zG4`Z1xOGRz&DmEX<(@2XRHsB$Em2>c{T!Nup}bjken&6XD@IS5lK+~Gh~j0x5&4E9
zIQJ;%bB(AY+QCM7zC7jUreG}wQ~|Ye{sCO}GCyfoiNNF!s$@RVw?KoS^x)JO<~Q8m
zK8>w&*r|5Fsj9rQprO>h-Vj*{Ds`)TfF`K(TU&z{q^~_vu5M7m)$C#YVz_kw*jyH6
zYoSG}J7d{7nY`1h=+(!%?~Y_BeLOH}p|ocytw*vugSK9%rM+ZsGk5fl@trowp##j4
zaXJ7HWm=W6cF@;-EUxGlZQ>ea@7-ZMz1Nj>?4~~|&f)@uQ6TvggE?WZP9k(~%fk)}
z$8aDpSIQ)cXSmbT&|uiH`f4f8SMw(8L=BIomA*2n{~|f~5KFfsRA{IDk>=nZ^Q3$^
zGTY<#`;)<s!B}GeDgHsf=-DyLF-^TQ?GWco`f7A;gb?S4)g<Dqq&*5rf^&3Kp~uTH
z_Q31gpO#2~KqDdp!0(tW7e7Vd*p+Vcz)Hup)%U+!?teVF{D8dDw}R78pgjM~AD;bS
zc$V|MNewLkLCJa#!zye=R3lL#;-}`i(zLhSKf5?U+LgNWTH-6np<eOZW(5D^-U_*m
zD!0M(p6rbW(SsHtcjg%#WW%hiGy2sY<mywVzLbI$;}{!?CO6TRPA_R5GOEPyS8YWC
zcqfGvA&?Y4LhxZ^bAY!-XbBME6XGLp?#S#oksfIgaq^`)?q86|uxthQ9{gU?OP%_I
zorTgsR+f5G=y7jdWyjS!;nMASf@L47?3OEOl$Lt7*tJ$YlE?S6=9=fwJx^^aK)hw_
zzk+l7>Mo}SZ*m|=pP@yhPAMo1pzN)oGHZ|wY7HWjo`YXe%Ls$~ovPj(utK9F4CtZ6
z)DM*ucMlmpEX9Ev@tW*cJNM)|BmdM|@6+G=>!o<2!9{MDa|&cTbod*c(+L6Q1b!g|
zS)5G$SZ&#GKcaCG*@LHUpJOE|2ECuQg`sDY40~r*cWurcA9R$FImyhpIG|aMuCd$i
z&wqcmf=kPLg&9H1Y<=w(un<{4+!xB{0HAgWGd{h`i%Vx<0M(kDshwI$HRZV4s_9-F
z^3qY^2}tFVBNqfhjIzyNIV;EVQG8Ga+zl#lg%j~G31y*4E_x?FJ+vrj7A-%0!qto;
z4U}|~C$`s>CYfCMawJmn8MX}N1rtw@4>K?T7Oo%wSijl|oUkUcJ!fwkDnvS^G_&gh
zxR^@d)g)&#!i)xG+8Z}hSE|PBc2A`;gA%LAz*N~2ScMSQedjBZ@KGJ-U7gHrgN+EE
zzORmEe03p_C>BGTiG~-YE=wWB91{)UPF5*ter-Q|c+T+@<Y8|oPUhEr&?|pBOvK2R
z=J!P;a%x-t(I_&rSByGbvB?bejNiUc%fj^xn}u{Tk*$Z$*>;i$ZE)U`lFF*HEI3ce
z^<5r*fm;DrB;5bezBa)D#=6@gbkf(kU`;{@s=1y(>xsWgjrr7+H%;txGMk3_jhks5
zCNvm-faksKe~5i(fkLTe0Tn0XUJPfK4QvDk29dZqYqjok#R8PKy*cSsC@J7~m|a5~
z_&G3U6uA9OT57oM1~#3#@Z?=?(|%BZfdN%SVFUkZ)8^IfTlppRPCt2gu$q2qd{2hT
zKYudp={qef1=3d`Qesb75b~s&yLP#Nvp!#kWMLXN5=Uez)apQ7Htg&(ITuF?^%}wm
z8-i{9xe?)aqoV9=;U6hfs-c#X5IY)E{hcP6*`8v2EpbvpSvCJ=R+c(n?CiiC@c|Y;
zGpx2XwtA<JBb?3*&=og#PYDh5Pa*2a<WJ;jw-y#wjOuA`N>)3F@W%Sa+%5M!*;DN8
zK5wmh<67>j?-V-Io51Wxs#KzzR=0y+aB$AEzE2itn}~`&;UmuSRI3?(H-1|IwjqSD
z!^^uBpssf;P`WD_VLQFEc>;>IMa!xa#Rj?Jqc*Nx!fx*p4#nq|i!rt|k;Im}=)zJ;
zYYdA_&v-qWpIn+Q79s~#yq>0@i2CGhGsMq7(x{T=V%6p|KpZM$3flTMv-Y68H!lqi
z-@##TOBbTWU=8LhMWE*srrSA3yST=(y}0|7o`}lv#LK)p`^d4J<3;7K@BYHc8s(~!
zzDr8VS86Ix0R}O3_n-Fe@o&Tn2(H)dFM=2IS`t`CoVz`#7h!=Kd+Mxa=N2>)a}v9K
zkq5pp)a^Pi&Pg}%#+FH`Q!G!!&id$oAF0Hb%b&?tS3lcS5fN^enfY^V`x7_w$L3^r
z^y(c;?HmiEv5?!P7)*hrfb~t)B8~-0ZK;lxgl1oVN<Z^6o6DDAxug1*CeAafHg<|E
zop9}XF2wHhIBizY;bHuCG>S=qdx`}*ckebYYp0?;+>u817kDoAa9#K2;qBO1E?c+|
z`+Q@zmXve7gOTVNQnVhHL;gFb4+z>5!2f{3eL+F$!QR>h?3b-ZCL{{13M_}&&oA%H
zk3}I<&tF}d?RyD_bHzyK^ybmtTN+OI-3v;bH0}_Y1Fof|{bFdwsj_ma@%-?89Fs%g
zHl|*Z^oUpUjF)WaEDg&Nz1KQl3nLu#KSwJmtUOM__^pjd#PrS(K{ja?n^zOo1+vsb
z>=Gw&O^I(zW}5dud8_&$8zymI^2*BkI9!FdHee=irBx;nYKg(xIYB~iCskudTV9;R
zlgZQO!KxX<J-MQ#N*!up7S`}NW3eCRPxHZA`IeCZTd);kuW9-OChhhu67t-Z_lJ|!
z-FA<`-0PPuKNu~!x|$`)e@eqEFC@Vb#rx$bXR*V3tqyEn)@Zh9?zlG6OHJn@S#Cm1
zcP0jhx@Xc9_Om0qq{HpKm)WMLjI5*&Wn{4L4-gbk9sbj^PfzZL%5XAQf{N9`<Cm%*
z#F?6$t<Yy<!{E!UqC*?1RFu&m!7f%Q%ATWd$(FR3jfD{!?>Qc5%q^m7SP7=u7C>29
zzOm886iJQ~q7CO|t)9Kr^yboX*iBVE)xA)kIv_fS{H)kqbNTdB2ZCCRWuEh+<CqUR
zd|ElJKGjA9V>z2l%w|AoUDExC@DUQ>17ybfC#if$IN^~7*7n8Y@#I3hkQVaQrRBsG
zipkr!Y5113&yv$e)Wg=-y@rfgn9b{((?#mC*z>MMC>bPQf_$sp&}(U$PG7(L(qbDZ
z8CvnTs@TB3z@9AA1p>4JZwLUq`XIWx<2+Xi_p7Z-s_`=#9XWxnUHnLGs}KHOmV|&L
zhgZxubuMA~T&u)*4|~q9rD6PxWGh3Ob~Y5QR`?cmlpHfk0Ph^PHgK{U1&FkC>g;xY
zS2?!qnxy`h*4J_`UA)8w9Lvc<t`;+?oG{U`RXIM_1CrgYMd~qN9;ej)MB2x(>Jc<>
zDl%i7u}H}8@S&dLwb<+$IiwD=#$Up5VXpDj@0)4k@nSv3tbnjy+FliyOWCbLaUioT
zz0CYxcOlD{MK3Id-dJrHO&?s#*5C~Dh`*4mXhb+8#R!-aYU|ieHR=-VdWTbWGU}tr
zdKZKL<ase110cTsD+biu<}M3p&@}l{)9?-GCGM%u0gEnlQe2_c6(ZY_wUEQ^TX=)f
zG3D&c`!j$vgF0@C@EX_{MCzAxA|t$<R>diif^>>D1<!x{+@TG;q4Sy-$ti^iveyqx
z8=2@-J1M3Xww#^J9>B-7(rRB%T)Exs^rw2y!L-T?r+PCH5W(<*m^uxY_<$?W;|ErR
zs9S<uZvgpeU|-)FGli~t<b)aUfUm9UcupTm%<Z*5T6NoO#2pbLp78IJ77~EMgK&bM
zzsWri|C`+Z57++>8AbuneI#Sdq*B4^mI5b{xEbgzkD8${o>Y;-jbo7BD}dqvxnz9+
z5=XlP@aON5e;fN>JBfeG966-$Eb}|l>#MF^lSrY45mz!bo#-h3asvVx?h<OuI)+P!
z;uXzjug;ac><ENFVl2T-j=XA@?TKLW7Vohjs=p~>g#;u$9tV0KvFqo9$A4q^NX9JO
z{sBWp<7eGFb0J$JTbo*2!%gmi86j)kDr3VqpGl+r{#huR0c7uvy4KUl)&wNg@}DMV
z6|O1&6w@+>(sOsuV0L#b@A&r<-jH$0w@l!u&(UQ6-%uXyrT*Sp9mxRQ{|?B}#QtmF
zkMtZ3IZ}J1=RZsnmxN3pT|yJQ&Zy&bmy$&fObI<<Nrna2Hs{$O*)BzxTOgQ!Joo0L
gr~3b_jz^?JrV(@=j@;LB4fH}_2IkjF^zKCd3(G2*J^%m!

literal 0
HcmV?d00001

diff --git a/Documentation/dev-tools/kunit/start.rst b/Documentation/dev-tools/kunit/start.rst
index 55f8df1abd40..5dd2c88fa2bd 100644
--- a/Documentation/dev-tools/kunit/start.rst
+++ b/Documentation/dev-tools/kunit/start.rst
@@ -240,6 +240,7 @@ Congrats! You just wrote your first KUnit test.
 Next Steps
 ==========
 
+*   Documentation/dev-tools/kunit/architecture.rst - KUnit architecture.
 *   Documentation/dev-tools/kunit/usage.rst - KUnit features.
 *   Documentation/dev-tools/kunit/tips.rst - best practices with
     examples.
-- 
2.34.1.400.ga245620fadb-goog


^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH v2 4/7] Documentation: kunit: Reorganize documentation related to running tests
  2021-12-07  5:40 [PATCH v2 0/7] Documentation: KUnit: Rework KUnit documentation Harinder Singh
                   ` (2 preceding siblings ...)
  2021-12-07  5:40 ` [PATCH v2 3/7] Documentation: KUnit: Added KUnit Architecture Harinder Singh
@ 2021-12-07  5:40 ` Harinder Singh
  2021-12-07 17:33   ` Tim.Bird
  2021-12-07  5:40 ` [PATCH v2 5/7] Documentation: KUnit: Rework writing page to focus on writing tests Harinder Singh
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 22+ messages in thread
From: Harinder Singh @ 2021-12-07  5:40 UTC (permalink / raw)
  To: davidgow, brendanhiggins, shuah, corbet
  Cc: linux-kselftest, kunit-dev, linux-doc, linux-kernel, tim.bird,
	Harinder Singh

Consolidate the documentation on running tests into two pages: "run tests
with kunit_tool" and "run tests without kunit_tool".

Signed-off-by: Harinder Singh <sharinder@google.com>
---
 Documentation/dev-tools/kunit/index.rst       |   4 +
 Documentation/dev-tools/kunit/run_manual.rst  |  57 ++++
 Documentation/dev-tools/kunit/run_wrapper.rst | 247 ++++++++++++++++++
 Documentation/dev-tools/kunit/start.rst       |   4 +-
 4 files changed, 311 insertions(+), 1 deletion(-)
 create mode 100644 Documentation/dev-tools/kunit/run_manual.rst
 create mode 100644 Documentation/dev-tools/kunit/run_wrapper.rst

diff --git a/Documentation/dev-tools/kunit/index.rst b/Documentation/dev-tools/kunit/index.rst
index 75e4ae85adbb..c0d1fd749cd2 100644
--- a/Documentation/dev-tools/kunit/index.rst
+++ b/Documentation/dev-tools/kunit/index.rst
@@ -10,6 +10,8 @@ KUnit - Linux Kernel Unit Testing
 
 	start
 	architecture
+	run_wrapper
+	run_manual
 	usage
 	kunit-tool
 	api/index
@@ -98,6 +100,8 @@ How do I use it?
 
 *   Documentation/dev-tools/kunit/start.rst - for KUnit new users.
 *   Documentation/dev-tools/kunit/architecture.rst - KUnit architecture.
+*   Documentation/dev-tools/kunit/run_wrapper.rst - run kunit_tool.
+*   Documentation/dev-tools/kunit/run_manual.rst - run tests without kunit_tool.
 *   Documentation/dev-tools/kunit/usage.rst - KUnit features.
 *   Documentation/dev-tools/kunit/tips.rst - best practices with
     examples.
diff --git a/Documentation/dev-tools/kunit/run_manual.rst b/Documentation/dev-tools/kunit/run_manual.rst
new file mode 100644
index 000000000000..71e6d6623f88
--- /dev/null
+++ b/Documentation/dev-tools/kunit/run_manual.rst
@@ -0,0 +1,57 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+============================
+Run Tests without kunit_tool
+============================
+
+If we do not want to use kunit_tool (for example, to integrate
+with other systems, or to run tests on real hardware), we can
+include KUnit in any kernel, read out the results, and parse them manually.
+
+.. note:: KUnit is not designed for use in a production system. It is
+          possible that tests may reduce the stability or security of
+          the system.
+
+Configure the Kernel
+====================
+
+KUnit tests can run without kunit_tool. This can be useful if:
+
+- We have an existing kernel configuration to test.
+- We need to run on real hardware (or using an emulator/VM kunit_tool
+  does not support).
+- We wish to integrate with some existing testing systems.
+
+KUnit is configured with the ``CONFIG_KUNIT`` option, and individual
+tests can also be built by enabling their config options in our
+``.config``. KUnit tests usually (but don't always) have config options
+ending in ``_KUNIT_TEST``. Most tests can either be built as a module,
+or be built into the kernel.
+
+.. note::
+
+	We can enable the ``KUNIT_ALL_TESTS`` config option to
+	automatically enable all tests with satisfied dependencies. This is
+	a good way of quickly testing everything applicable to the current
+	config.
+
+Once we have built our kernel (and/or modules), it is simple to run
+the tests. If the tests are built-in, they will run automatically on
+kernel boot. The results will be written to the kernel log (``dmesg``)
+in TAP format.
+
+If the tests are built as modules, they will run when the module is
+loaded.
+
+.. code-block:: bash
+
+	# modprobe example-test
+
+The results will appear in TAP format in ``dmesg``.
+
+.. note::
+
+	If ``CONFIG_KUNIT_DEBUGFS`` is enabled, KUnit test results will
+	be accessible from the ``debugfs`` filesystem (if mounted).
+	They will be in ``/sys/kernel/debug/kunit/<test_suite>/results``, in
+	TAP format.
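
The debugfs layout described in the note can be walked with a few lines of
scripting. The sketch below is illustrative only (it is not part of KUnit);
the base path is a parameter so that, on a real system, it can point at
``/sys/kernel/debug/kunit``:

```python
from pathlib import Path

def collect_results(base):
    """Map each KUnit suite name to the raw TAP text of its results.

    On a real system with CONFIG_KUNIT_DEBUGFS enabled, call with
    Path("/sys/kernel/debug/kunit"); each suite directory there
    contains a `results` file in TAP format.
    """
    return {p.parent.name: p.read_text()
            for p in Path(base).glob("*/results")}
```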
diff --git a/Documentation/dev-tools/kunit/run_wrapper.rst b/Documentation/dev-tools/kunit/run_wrapper.rst
new file mode 100644
index 000000000000..c5d2e86c6058
--- /dev/null
+++ b/Documentation/dev-tools/kunit/run_wrapper.rst
@@ -0,0 +1,247 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=========================
+Run Tests with kunit_tool
+=========================
+
+We can either run KUnit tests using kunit_tool, or run tests manually
+and then use kunit_tool to parse the results. To run tests
+manually, see: Documentation/dev-tools/kunit/run_manual.rst.
+As long as we can build the kernel, we can run KUnit.
+
+kunit_tool is a Python script which configures and builds a kernel, runs
+tests, and formats the test results.
+
+Run command:
+
+.. code-block::
+
+	./tools/testing/kunit/kunit.py run
+
+We should see the following:
+
+.. code-block::
+
+	Generating .config...
+	Building KUnit kernel...
+	Starting KUnit kernel...
+
+We may want to use the following options:
+
+.. code-block::
+
+	./tools/testing/kunit/kunit.py run --timeout=30 --jobs=`nproc --all`
+
+- ``--timeout`` sets a maximum amount of time for tests to run.
+- ``--jobs`` sets the number of threads to build the kernel.
+
+kunit_tool will generate a ``.kunitconfig`` with a default
+configuration if no other ``.kunitconfig`` file exists
+in the build directory. In addition, it verifies that the
+generated ``.config`` file contains the ``CONFIG`` options in the
+``.kunitconfig``.
+It is also possible to pass a separate ``.kunitconfig`` fragment to
+kunit_tool. This is useful if we have several different groups of
+tests we want to run independently, or if we want to use pre-defined
+test configs for certain subsystems.
+
+To use a different ``.kunitconfig`` file (such as one
+provided to test a particular subsystem), pass it as an option:
+
+.. code-block::
+
+	./tools/testing/kunit/kunit.py run --kunitconfig=fs/ext4/.kunitconfig
+
+To view kunit_tool flags (optional command-line arguments), run:
+
+.. code-block::
+
+	./tools/testing/kunit/kunit.py run --help
+
+Create a ``.kunitconfig`` File
+===============================
+
+If we want to run a specific set of tests (rather than those listed
+in the KUnit ``defconfig``), we can provide Kconfig options in the
+``.kunitconfig`` file. For the default ``.kunitconfig``, see:
+https://elixir.bootlin.com/linux/v5.14-rc3/source/tools/testing/kunit/configs/default.config
+A ``.kunitconfig`` is a ``minconfig`` (a ``.config``
+generated by running ``make savedefconfig``), used for running a
+specific set of tests. This file contains the regular kernel configs
+with specific test targets. The ``.kunitconfig`` also
+contains any other config options required by the tests (for example,
+dependencies for features under test, configs that enable/disable
+certain code blocks, arch configs and so on).
+
+To create a ``.kunitconfig``, using the KUnit ``defconfig``:
+
+.. code-block::
+
+	cd $PATH_TO_LINUX_REPO
+	cp tools/testing/kunit/configs/default.config .kunit/.kunitconfig
+
+We can then add any other Kconfig options. For example:
+
+.. code-block::
+
+	CONFIG_LIST_KUNIT_TEST=y
+
+kunit_tool ensures that all config options in ``.kunitconfig`` are
+set in the kernel ``.config`` before running the tests. It warns if we
+have not included the options' dependencies.
+
+.. note:: Removing something from the ``.kunitconfig`` will
+   not rebuild the ``.config`` file. The configuration is only
+   updated if the ``.kunitconfig`` is not a subset of ``.config``.
+   This means that we can use other tools
+   (for example, ``make menuconfig``) to adjust other config options.
+   The build dir needs to be set for ``make menuconfig`` to
+   work, therefore by default use ``make O=.kunit menuconfig``.
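
The subset rule in the note above can be sketched in a few lines of Python.
This is an illustration only, not kunit_tool's actual implementation: a
rebuild is needed exactly when some option requested in ``.kunitconfig``
is missing from ``.config``.

```python
def kunitconfig_is_subset(kunitconfig: str, dotconfig: str) -> bool:
    """Return True if every CONFIG_ line in the .kunitconfig text also
    appears verbatim in the .config text (so no rebuild is needed)."""
    wanted = {line.strip() for line in kunitconfig.splitlines()
              if line.strip().startswith("CONFIG_")}
    have = {line.strip() for line in dotconfig.splitlines()}
    return wanted <= have
```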
+
+Configure, Build, and Run Tests
+===============================
+
+If we want to make manual changes to the KUnit build process, we
+can run part of the KUnit build process independently.
+When running kunit_tool, we can generate a ``.config`` from a
+``.kunitconfig`` by using the ``config`` argument:
+
+.. code-block::
+
+	./tools/testing/kunit/kunit.py config
+
+To build a KUnit kernel from the current ``.config``, we can use the
+``build`` argument:
+
+.. code-block::
+
+	./tools/testing/kunit/kunit.py build
+
+If we already have a built UML kernel with built-in KUnit tests, we
+can run the kernel and display the test results with the ``exec``
+argument:
+
+.. code-block::
+
+	./tools/testing/kunit/kunit.py exec
+
+The ``run`` command discussed in the section **Run Tests with
+kunit_tool** is equivalent to running the above three commands in sequence.
+
+Parse Test Results
+==================
+
+KUnit tests output results in TAP (Test Anything Protocol)
+format. When running tests, kunit_tool parses this output and prints
+a summary. To see the raw test results in TAP format, we can pass the
+``--raw_output`` argument:
+
+.. code-block::
+
+	./tools/testing/kunit/kunit.py run --raw_output
+
+If we have KUnit results in the raw TAP format, we can parse them and
+print the human-readable summary with the ``parse`` command of
+kunit_tool. This accepts a filename as an argument, or reads from
+standard input.
+
+.. code-block:: bash
+
+	# Reading from a file
+	./tools/testing/kunit/kunit.py parse /var/log/dmesg
+	# Reading from stdin
+	dmesg | ./tools/testing/kunit/kunit.py parse
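
As a rough illustration of what ``parse`` extracts from such input, the
sketch below counts top-level TAP result lines. This is not kunit_tool's
parser, which additionally handles nested subtests, diagnostics, and
crashed tests.

```python
import re

def tap_summary(tap_text: str):
    """Count passing and failing TAP result lines (simplified)."""
    passed = failed = 0
    for line in tap_text.splitlines():
        line = line.strip()
        if re.match(r"not ok \d+", line):
            failed += 1
        elif re.match(r"ok \d+", line):
            passed += 1
    return passed, failed
```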
+
+Run Selected Test Suites
+========================
+
+By passing a bash-style glob filter to the ``exec`` or ``run``
+commands, we can run a subset of the tests built into a kernel. For
+example, if we only want to run KUnit resource tests, use:
+
+.. code-block::
+
+	./tools/testing/kunit/kunit.py run 'kunit-resource*'
+
+This uses the standard glob format with wildcard characters.
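
The matching behaves like shell globbing. The sketch below uses Python's
``fnmatch`` to show the effect on a set of hypothetical suite names; it is
an illustration, not kunit_tool's own filtering code.

```python
from fnmatch import fnmatch

# Hypothetical built-in suite names; the glob picks the subset to run.
suites = ["kunit-resource-test", "kunit-try-catch-test", "list-kunit-test"]
selected = [s for s in suites if fnmatch(s, "kunit-resource*")]
# selected now holds only the KUnit resource suite.
```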
+
+Run Tests on qemu
+=================
+
+kunit_tool supports running tests on qemu as well as
+via UML. To run tests on qemu, two flags are required by default:
+
+- ``--arch``: Selects a collection of configs (Kconfig, qemu config
+  options and so on) that allow KUnit tests to be run on the specified
+  architecture in a minimal way. The architecture argument is the same
+  as the value passed to the ``ARCH`` variable used by Kbuild.
+  Not all architectures currently support this flag, but we can use
+  ``--qemu_config`` to handle it. If ``um`` is passed (or this flag
+  is omitted), the tests will run via UML. Non-UML architectures,
+  for example i386, x86_64, arm and so on, run on qemu.
+
+- ``--cross_compile``: Specifies the Kbuild toolchain. It passes the
+  same argument as passed to the ``CROSS_COMPILE`` variable used by
+  Kbuild. As a reminder, this will be the prefix for the toolchain
+  binaries such as GCC. For example:
+
+  - ``sparc64-linux-gnu`` if we have the sparc toolchain installed on
+    our system.
+
+  - ``$HOME/toolchains/microblaze/gcc-9.2.0-nolibc/microblaze-linux/bin/microblaze-linux``
+    if we have downloaded the microblaze toolchain from the 0-day
+    website to a directory in our home directory called toolchains.
+
+If we want to run KUnit tests on an architecture not supported by
+the ``--arch`` flag, or want to run KUnit tests on qemu using a
+non-default configuration, then we can write our own ``QemuConfig``.
+These ``QemuConfigs`` are written in Python. They have an import line
+``from ..qemu_config import QemuArchParams`` at the top of the file.
+The file must contain a variable called ``QEMU_ARCH`` that has an
+instance of ``QemuArchParams`` assigned to it. See the example in
+``tools/testing/kunit/qemu_configs/x86_64.py``.
+
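
A ``QemuConfig`` might look roughly like the sketch below. The
``QemuArchParams`` class is defined locally here as a stand-in so the
example is self-contained; a real config instead imports it with
``from ..qemu_config import QemuArchParams``, and the exact field names
should be checked against ``tools/testing/kunit/qemu_config.py``.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class QemuArchParams:
    """Stand-in for kunit_tool's real QemuArchParams (illustrative)."""
    linux_arch: str
    kconfig: str
    qemu_arch: str
    kernel_path: str
    kernel_command_line: str
    extra_qemu_params: List[str] = field(default_factory=list)

# A qemu_configs/*.py file must expose a QEMU_ARCH variable like this:
QEMU_ARCH = QemuArchParams(
    linux_arch="x86_64",
    kconfig="CONFIG_SERIAL_8250=y\nCONFIG_SERIAL_8250_CONSOLE=y",
    qemu_arch="x86_64",
    kernel_path="arch/x86/boot/bzImage",
    kernel_command_line="console=ttyS0",
)
```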
+Once we have a ``QemuConfig``, we can pass it into kunit_tool,
+using the ``--qemu_config`` flag. When used, this flag replaces the
+``--arch`` flag. For example, using
+``tools/testing/kunit/qemu_configs/x86_64.py``, the invocation appears
+as:
+
+.. code-block:: bash
+
+	./tools/testing/kunit/kunit.py run \
+		--timeout=60 \
+		--jobs=12 \
+		--qemu_config=./tools/testing/kunit/qemu_configs/x86_64.py
+
+To run existing KUnit tests on non-UML architectures, see:
+Documentation/dev-tools/kunit/non_uml.rst.
+
+Command-Line Arguments
+======================
+
+kunit_tool has a number of other command-line arguments which can
+be useful for our test environment. Below are the most commonly used
+command-line arguments:
+
+- ``--help``: Lists all available options. To list common options,
+  place ``--help`` before the command. To list options specific to that
+  command, place ``--help`` after the command.
+
+  .. note:: Different commands (``config``, ``build``, ``run``, etc)
+            have different supported options.
+- ``--build_dir``: Specifies the kunit_tool build directory. It includes
+  the ``.kunitconfig`` and ``.config`` files and the compiled kernel.
+
+- ``--make_options``: Specifies additional options to pass to make, when
+  compiling a kernel (using ``build`` or ``run`` commands). For example:
+  to enable compiler warnings, we can pass ``--make_options W=1``.
+
+- ``--alltests``: Builds a UML kernel with all config options enabled
+  using ``make allyesconfig``. This allows us to run as many tests as
+  possible.
+
+  .. note:: It is slow and prone to breakage as new options are
+            added or modified. Instead, enable all tests
+            which have satisfied dependencies by adding
+            ``CONFIG_KUNIT_ALL_TESTS=y`` to your ``.kunitconfig``.
diff --git a/Documentation/dev-tools/kunit/start.rst b/Documentation/dev-tools/kunit/start.rst
index 5dd2c88fa2bd..af13f443c976 100644
--- a/Documentation/dev-tools/kunit/start.rst
+++ b/Documentation/dev-tools/kunit/start.rst
@@ -20,7 +20,7 @@ can run kunit_tool:
 	./tools/testing/kunit/kunit.py run
 
 For more information on this wrapper, see:
-Documentation/dev-tools/kunit/kunit-tool.rst.
+Documentation/dev-tools/kunit/run_wrapper.rst.
 
 Creating a ``.kunitconfig``
 ---------------------------
@@ -241,6 +241,8 @@ Next Steps
 ==========
 
 *   Documentation/dev-tools/kunit/architecture.rst - KUnit architecture.
+*   Documentation/dev-tools/kunit/run_wrapper.rst - run kunit_tool.
+*   Documentation/dev-tools/kunit/run_manual.rst - run tests without kunit_tool.
 *   Documentation/dev-tools/kunit/usage.rst - KUnit features.
 *   Documentation/dev-tools/kunit/tips.rst - best practices with
     examples.
-- 
2.34.1.400.ga245620fadb-goog


^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH v2 5/7] Documentation: KUnit: Rework writing page to focus on writing tests
  2021-12-07  5:40 [PATCH v2 0/7] Documentation: KUnit: Rework KUnit documentation Harinder Singh
                   ` (3 preceding siblings ...)
  2021-12-07  5:40 ` [PATCH v2 4/7] Documentation: kunit: Reorganize documentation related to running tests Harinder Singh
@ 2021-12-07  5:40 ` Harinder Singh
  2021-12-07 18:28   ` Tim.Bird
  2021-12-07  5:40 ` [PATCH v2 6/7] Documentation: KUnit: Restyle Test Style and Nomenclature page Harinder Singh
  2021-12-07  5:40 ` [PATCH v2 7/7] Documentation: KUnit: Restyled Frequently Asked Questions Harinder Singh
  6 siblings, 1 reply; 22+ messages in thread
From: Harinder Singh @ 2021-12-07  5:40 UTC (permalink / raw)
  To: davidgow, brendanhiggins, shuah, corbet
  Cc: linux-kselftest, kunit-dev, linux-doc, linux-kernel, tim.bird,
	Harinder Singh

We now have dedicated pages on running tests. Therefore, refocus the
usage page on writing tests, add content from the tips page, and add
information on other architectures.

Signed-off-by: Harinder Singh <sharinder@google.com>
---
 Documentation/dev-tools/kunit/index.rst |   2 +-
 Documentation/dev-tools/kunit/start.rst |   2 +-
 Documentation/dev-tools/kunit/usage.rst | 570 ++++++++++--------------
 3 files changed, 247 insertions(+), 327 deletions(-)

diff --git a/Documentation/dev-tools/kunit/index.rst b/Documentation/dev-tools/kunit/index.rst
index c0d1fd749cd2..76c9704d6a1a 100644
--- a/Documentation/dev-tools/kunit/index.rst
+++ b/Documentation/dev-tools/kunit/index.rst
@@ -102,7 +102,7 @@ How do I use it?
 *   Documentation/dev-tools/kunit/architecture.rst - KUnit architecture.
 *   Documentation/dev-tools/kunit/run_wrapper.rst - run kunit_tool.
 *   Documentation/dev-tools/kunit/run_manual.rst - run tests without kunit_tool.
-*   Documentation/dev-tools/kunit/usage.rst - KUnit features.
+*   Documentation/dev-tools/kunit/usage.rst - write tests.
 *   Documentation/dev-tools/kunit/tips.rst - best practices with
     examples.
 *   Documentation/dev-tools/kunit/api/index.rst - KUnit APIs
diff --git a/Documentation/dev-tools/kunit/start.rst b/Documentation/dev-tools/kunit/start.rst
index af13f443c976..a858ab009944 100644
--- a/Documentation/dev-tools/kunit/start.rst
+++ b/Documentation/dev-tools/kunit/start.rst
@@ -243,7 +243,7 @@ Next Steps
 *   Documentation/dev-tools/kunit/architecture.rst - KUnit architecture.
 *   Documentation/dev-tools/kunit/run_wrapper.rst - run kunit_tool.
 *   Documentation/dev-tools/kunit/run_manual.rst - run tests without kunit_tool.
-*   Documentation/dev-tools/kunit/usage.rst - KUnit features.
+*   Documentation/dev-tools/kunit/usage.rst - write tests.
 *   Documentation/dev-tools/kunit/tips.rst - best practices with
     examples.
 *   Documentation/dev-tools/kunit/api/index.rst - KUnit APIs
diff --git a/Documentation/dev-tools/kunit/usage.rst b/Documentation/dev-tools/kunit/usage.rst
index 63f1bb89ebf5..b321877797f0 100644
--- a/Documentation/dev-tools/kunit/usage.rst
+++ b/Documentation/dev-tools/kunit/usage.rst
@@ -1,57 +1,13 @@
 .. SPDX-License-Identifier: GPL-2.0
 
-===========
-Using KUnit
-===========
-
-The purpose of this document is to describe what KUnit is, how it works, how it
-is intended to be used, and all the concepts and terminology that are needed to
-understand it. This guide assumes a working knowledge of the Linux kernel and
-some basic knowledge of testing.
-
-For a high level introduction to KUnit, including setting up KUnit for your
-project, see Documentation/dev-tools/kunit/start.rst.
-
-Organization of this document
-=============================
-
-This document is organized into two main sections: Testing and Common Patterns.
-The first covers what unit tests are and how to use KUnit to write them. The
-second covers common testing patterns, e.g. how to isolate code and make it
-possible to unit test code that was otherwise un-unit-testable.
-
-Testing
-=======
-
-What is KUnit?
---------------
-
-"K" is short for "kernel" so "KUnit" is the "(Linux) Kernel Unit Testing
-Framework." KUnit is intended first and foremost for writing unit tests; it is
-general enough that it can be used to write integration tests; however, this is
-a secondary goal. KUnit has no ambition of being the only testing framework for
-the kernel; for example, it does not intend to be an end-to-end testing
-framework.
-
-What is Unit Testing?
----------------------
-
-A `unit test <https://martinfowler.com/bliki/UnitTest.html>`_ is a test that
-tests code at the smallest possible scope, a *unit* of code. In the C
-programming language that's a function.
-
-Unit tests should be written for all the publicly exposed functions in a
-compilation unit; so that is all the functions that are exported in either a
-*class* (defined below) or all functions which are **not** static.
-
 Writing Tests
--------------
+=============
 
 Test Cases
-~~~~~~~~~~
+----------
 
 The fundamental unit in KUnit is the test case. A test case is a function with
-the signature ``void (*)(struct kunit *test)``. It calls a function to be tested
+the signature ``void (*)(struct kunit *test)``. It calls the function under test
 and then sets *expectations* for what should happen. For example:
 
 .. code-block:: c
@@ -65,18 +21,19 @@ and then sets *expectations* for what should happen. For example:
 		KUNIT_FAIL(test, "This test never passes.");
 	}
 
-In the above example ``example_test_success`` always passes because it does
-nothing; no expectations are set, so all expectations pass. On the other hand
-``example_test_failure`` always fails because it calls ``KUNIT_FAIL``, which is
-a special expectation that logs a message and causes the test case to fail.
+In the above example, ``example_test_success`` always passes because it does
+nothing; no expectations are set, and therefore all expectations pass. On the
+other hand, ``example_test_failure`` always fails because it calls ``KUNIT_FAIL``,
+which is a special expectation that logs a message and causes the test case to
+fail.
 
 Expectations
 ~~~~~~~~~~~~
-An *expectation* is a way to specify that you expect a piece of code to do
-something in a test. An expectation is called like a function. A test is made
-by setting expectations about the behavior of a piece of code under test; when
-one or more of the expectations fail, the test case fails and information about
-the failure is logged. For example:
+An *expectation* specifies that we expect a piece of code to do something in a
+test. An expectation is called like a function. A test is made by setting
+expectations about the behavior of a piece of code under test. When one or more
+expectations fail, the test case fails and information about the failure is
+logged. For example:
 
 .. code-block:: c
 
@@ -86,29 +43,28 @@ the failure is logged. For example:
 		KUNIT_EXPECT_EQ(test, 2, add(1, 1));
 	}
 
-In the above example ``add_test_basic`` makes a number of assertions about the
-behavior of a function called ``add``; the first parameter is always of type
-``struct kunit *``, which contains information about the current test context;
-the second parameter, in this case, is what the value is expected to be; the
+In the above example, ``add_test_basic`` makes a number of assertions about the
+behavior of a function called ``add``. The first parameter is always of type
+``struct kunit *``, which contains information about the current test context.
+The second parameter, in this case, is what the value is expected to be. The
 last value is what the value actually is. If ``add`` passes all of these
 expectations, the test case, ``add_test_basic`` will pass; if any one of these
 expectations fails, the test case will fail.
 
-It is important to understand that a test case *fails* when any expectation is
-violated; however, the test will continue running, potentially trying other
-expectations until the test case ends or is otherwise terminated. This is as
-opposed to *assertions* which are discussed later.
+A test case *fails* when any expectation is violated; however, the test will
+continue to run, and try other expectations until the test case ends or is
+otherwise terminated. This is as opposed to *assertions* which are discussed
+later.
 
-To learn about more expectations supported by KUnit, see
-Documentation/dev-tools/kunit/api/test.rst.
+To learn about more KUnit expectations, see Documentation/dev-tools/kunit/api/test.rst.
 
 .. note::
-   A single test case should be pretty short, pretty easy to understand,
-   focused on a single behavior.
+   A single test case should be short, easy to understand, and focused on a
+   single behavior.
 
-For example, if we wanted to properly test the add function above, we would
-create additional tests cases which would each test a different property that an
-add function should have like this:
+For example, if we want to rigorously test the ``add`` function above, create
+additional test cases which would test each property that an ``add`` function
+should have, as shown below:
 
 .. code-block:: c
 
@@ -134,56 +90,43 @@ add function should have like this:
 		KUNIT_EXPECT_EQ(test, INT_MIN, add(INT_MAX, 1));
 	}
 
-Notice how it is immediately obvious what all the properties that we are testing
-for are.
-
 Assertions
 ~~~~~~~~~~
 
-KUnit also has the concept of an *assertion*. An assertion is just like an
-expectation except the assertion immediately terminates the test case if it is
-not satisfied.
-
-For example:
+An assertion is like an expectation, except that the assertion immediately
+terminates the test case if the condition is not satisfied. For example:
 
 .. code-block:: c
 
-	static void mock_test_do_expect_default_return(struct kunit *test)
+	static void test_sort(struct kunit *test)
 	{
-		struct mock_test_context *ctx = test->priv;
-		struct mock *mock = ctx->mock;
-		int param0 = 5, param1 = -5;
-		const char *two_param_types[] = {"int", "int"};
-		const void *two_params[] = {&param0, &param1};
-		const void *ret;
-
-		ret = mock->do_expect(mock,
-				      "test_printk", test_printk,
-				      two_param_types, two_params,
-				      ARRAY_SIZE(two_params));
-		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ret);
-		KUNIT_EXPECT_EQ(test, -4, *((int *) ret));
+		int *a, i, r = 1;
+		a = kunit_kmalloc_array(test, TEST_LEN, sizeof(*a), GFP_KERNEL);
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, a);
+		for (i = 0; i < TEST_LEN; i++) {
+			r = (r * 725861) % 6599;
+			a[i] = r;
+		}
+		sort(a, TEST_LEN, sizeof(*a), cmpint, NULL);
+		for (i = 0; i < TEST_LEN-1; i++)
+			KUNIT_EXPECT_LE(test, a[i], a[i + 1]);
 	}
 
-In this example, the method under test should return a pointer to a value, so
-if the pointer returned by the method is null or an errno, we don't want to
-bother continuing the test since the following expectation could crash the test
-case. `ASSERT_NOT_ERR_OR_NULL(...)` allows us to bail out of the test case if
-the appropriate conditions have not been satisfied to complete the test.
+In this example, the method under test should return a pointer to a value. If
+the returned pointer is null or an errno, we want to stop the test since the
+following expectation could crash the test case. ``KUNIT_ASSERT_NOT_ERR_OR_NULL``
+allows us to bail out of the test case if the appropriate conditions are not
+satisfied to complete the test.
 
 Test Suites
 ~~~~~~~~~~~
 
-Now obviously one unit test isn't very helpful; the power comes from having
-many test cases covering all of a unit's behaviors. Consequently it is common
-to have many *similar* tests; in order to reduce duplication in these closely
-related tests most unit testing frameworks - including KUnit - provide the
-concept of a *test suite*. A *test suite* is just a collection of test cases
-for a unit of code with a set up function that gets invoked before every test
-case and then a tear down function that gets invoked after every test case
-completes.
-
-Example:
+We need many test cases covering all the unit's behaviors. It is common to have
+many similar tests. In order to reduce duplication in these closely related
+tests, most unit testing frameworks (including KUnit) provide the concept of a
+*test suite*. A test suite is a collection of test cases for a unit of code
+with a setup function that gets invoked before every test case and then a tear
+down function that gets invoked after every test case completes. For example:
 
 .. code-block:: c
 
@@ -202,23 +145,48 @@ Example:
 	};
 	kunit_test_suite(example_test_suite);
 
-In the above example the test suite, ``example_test_suite``, would run the test
-cases ``example_test_foo``, ``example_test_bar``, and ``example_test_baz``;
-each would have ``example_test_init`` called immediately before it and would
-have ``example_test_exit`` called immediately after it.
+In the above example, the test suite ``example_test_suite`` would run the test
+cases ``example_test_foo``, ``example_test_bar``, and ``example_test_baz``. Each
+would have ``example_test_init`` called immediately before it and
+``example_test_exit`` called immediately after it.
 ``kunit_test_suite(example_test_suite)`` registers the test suite with the
 KUnit test framework.
 
 .. note::
-   A test case will only be run if it is associated with a test suite.
+   A test case will only run if it is associated with a test suite.
 
-``kunit_test_suite(...)`` is a macro which tells the linker to put the specified
-test suite in a special linker section so that it can be run by KUnit either
-after late_init, or when the test module is loaded (depending on whether the
-test was built in or not).
+``kunit_test_suite(...)`` is a macro which tells the linker to put the
+specified test suite in a special linker section so that it can be run by KUnit
+either after ``late_init``, or when the test module is loaded (if the test was
+built as a module).
 
-For more information on these types of things see the
-Documentation/dev-tools/kunit/api/test.rst.
+For more information, see Documentation/dev-tools/kunit/api/test.rst.
+
+Writing Tests For Other Architectures
+-------------------------------------
+
+Always prefer tests that run on UML to tests that only run under a particular
+architecture. In addition, prefer tests that run under QEMU or another
+easy-to-obtain (and monetarily free) software environment to tests that
+require a specific piece of hardware.
+
+Nevertheless, there are still valid reasons to write an architecture- or
+hardware-specific test. For example, we might want to test code that really
+belongs in ``arch/some-arch/*``. Even so, try to write the test so that it does
+not depend on physical hardware. Some of our test cases may not need hardware;
+only a few tests actually require it. When hardware is not
+available, instead of disabling tests, we can skip them.
+
+Now that we have narrowed down exactly what bits are hardware specific, the
+actual procedure for writing and running the tests is the same as for normal
+KUnit tests.
+
+.. important::
+   We may have to reset hardware state. If this is not possible, we may only
+   be able to run one test case per invocation.
+
+.. TODO(brendanhiggins@google.com): Add an actual example of an architecture-
+   dependent KUnit test.
 
 Common Patterns
 ===============
@@ -226,43 +194,39 @@ Common Patterns
 Isolating Behavior
 ------------------
 
-The most important aspect of unit testing that other forms of testing do not
-provide is the ability to limit the amount of code under test to a single unit.
-In practice, this is only possible by being able to control what code gets run
-when the unit under test calls a function and this is usually accomplished
-through some sort of indirection where a function is exposed as part of an API
-such that the definition of that function can be changed without affecting the
-rest of the code base. In the kernel this primarily comes from two constructs,
-classes, structs that contain function pointers that are provided by the
-implementer, and architecture-specific functions which have definitions selected
-at compile time.
+Unit testing limits the amount of code under test to a single unit. It controls
+what code gets run when the unit under test calls a function. This is usually
+done through some form of indirection, where a function is exposed as part of
+an API such that the definition of that function can be changed without
+affecting the rest of the code base. In the kernel, this comes from two
+constructs: classes, which are structs that contain function pointers provided
+by the implementer, and architecture-specific functions which have definitions
+selected at compile time.
 
 Classes
 ~~~~~~~
 
 Classes are not a construct that is built into the C programming language;
-however, it is an easily derived concept. Accordingly, pretty much every project
-that does not use a standardized object oriented library (like GNOME's GObject)
-has their own slightly different way of doing object oriented programming; the
-Linux kernel is no exception.
+however, it is an easily derived concept. Accordingly, in most cases, every
+project that does not use a standardized object oriented library (like GNOME's
+GObject) has its own slightly different way of doing object oriented
+programming; the Linux kernel is no exception.
 
 The central concept in kernel object oriented programming is the class. In the
 kernel, a *class* is a struct that contains function pointers. This creates a
 contract between *implementers* and *users* since it forces them to use the
-same function signature without having to call the function directly. In order
-for it to truly be a class, the function pointers must specify that a pointer
-to the class, known as a *class handle*, be one of the parameters; this makes
-it possible for the member functions (also known as *methods*) to have access
-to member variables (more commonly known as *fields*) allowing the same
-implementation to have multiple *instances*.
-
-Typically a class can be *overridden* by *child classes* by embedding the
-*parent class* in the child class. Then when a method provided by the child
-class is called, the child implementation knows that the pointer passed to it is
-of a parent contained within the child; because of this, the child can compute
-the pointer to itself because the pointer to the parent is always a fixed offset
-from the pointer to the child; this offset is the offset of the parent contained
-in the child struct. For example:
+same function signature without having to call the function directly. To be a
+class, the function pointers must specify that a pointer to the class, known as
+a *class handle*, be one of the parameters. Thus the member functions (also
+known as *methods*) have access to member variables (also known as *fields*)
+allowing the same implementation to have multiple *instances*.
+
+A class can be *overridden* by *child classes* by embedding the *parent class*
+in the child class. Then when the child class *method* is called, the child
+implementation knows that the pointer passed to it is of a parent contained
+within the child. Thus, the child can compute the pointer to itself because the
+pointer to the parent is always a fixed offset from the pointer to the child.
+This offset is the offset of the parent contained in the child struct. For
+example:
 
 .. code-block:: c
 
@@ -290,8 +254,8 @@ in the child struct. For example:
 		self->width = width;
 	}
 
-In this example (as in most kernel code) the operation of computing the pointer
-to the child from the pointer to the parent is done by ``container_of``.
+In this example, computing the pointer to the child from the pointer to the
+parent is done by ``container_of``.
 
 Faking Classes
 ~~~~~~~~~~~~~~
@@ -300,14 +264,11 @@ In order to unit test a piece of code that calls a method in a class, the
 behavior of the method must be controllable, otherwise the test ceases to be a
 unit test and becomes an integration test.
 
-A fake just provides an implementation of a piece of code that is different than
-what runs in a production instance, but behaves identically from the standpoint
-of the callers; this is usually done to replace a dependency that is hard to
-deal with, or is slow.
-
-A good example for this might be implementing a fake EEPROM that just stores the
-"contents" in an internal buffer. For example, let's assume we have a class that
-represents an EEPROM:
+A fake class implements a piece of code that is different than what runs in a
+production instance, but behaves identically from the standpoint of the
+callers. This is done to replace a dependency that is hard to deal with, or is
+slow. One example is a fake EEPROM that stores the "contents" in an internal
+buffer. Assume we have a class that represents an EEPROM:
 
 .. code-block:: c
 
@@ -316,7 +277,7 @@ represents an EEPROM:
 		ssize_t (*write)(struct eeprom *this, size_t offset, const char *buffer, size_t count);
 	};
 
-And we want to test some code that buffers writes to the EEPROM:
+We want to test code that buffers writes to the EEPROM:
 
 .. code-block:: c
 
@@ -329,7 +290,7 @@ And we want to test some code that buffers writes to the EEPROM:
 	struct eeprom_buffer *new_eeprom_buffer(struct eeprom *eeprom);
 	void destroy_eeprom_buffer(struct eeprom *eeprom);
 
-We can easily test this code by *faking out* the underlying EEPROM:
+We can test this code by *faking out* the underlying EEPROM:
 
 .. code-block:: c
 
@@ -456,14 +417,14 @@ We can now use it to test ``struct eeprom_buffer``:
 		destroy_eeprom_buffer(ctx->eeprom_buffer);
 	}
 
-Testing against multiple inputs
+Testing Against Multiple Inputs
 -------------------------------
 
-Testing just a few inputs might not be enough to have confidence that the code
-works correctly, e.g. for a hash function.
+Testing just a few inputs is not always enough to ensure that the code works
+correctly, for example, when testing a hash function.
 
-In such cases, it can be helpful to have a helper macro or function, e.g. this
-fictitious example for ``sha1sum(1)``
+We can write a helper macro or function that is called for each input. For
+example, to test ``sha1sum(1)``, we can write:
 
 .. code-block:: c
 
@@ -475,16 +436,15 @@ fictitious example for ``sha1sum(1)``
 	TEST_SHA1("hello world",  "2aae6c35c94fcfb415dbe95f408b9ce91ee846ed");
 	TEST_SHA1("hello world!", "430ce34d020724ed75a196dfc2ad67c77772d169");
 
+Note the use of the ``_MSG`` version of ``KUNIT_EXPECT_STREQ`` to print a more
+detailed error and make the assertions clearer within the helper macros.
 
-Note the use of ``KUNIT_EXPECT_STREQ_MSG`` to give more context when it fails
-and make it easier to track down. (Yes, in this example, ``want`` is likely
-going to be unique enough on its own).
+The ``_MSG`` variants are useful when the same expectation is called multiple
+times (in a loop or helper function) and thus the line number is not enough to
+identify what failed, as shown below.
 
-The ``_MSG`` variants are even more useful when the same expectation is called
-multiple times (in a loop or helper function) and thus the line number isn't
-enough to identify what failed, like below.
-
-In some cases, it can be helpful to write a *table-driven test* instead, e.g.
+In complicated cases, we recommend using a *table-driven test* instead of the
+helper macro variation. For example:
 
 .. code-block:: c
 
@@ -513,17 +473,18 @@ In some cases, it can be helpful to write a *table-driven test* instead, e.g.
 	}
 
 
-There's more boilerplate involved, but it can:
+There is more boilerplate code involved, but it can:
+
+* be more readable when there are multiple inputs/outputs (due to field names).
 
-* be more readable when there are multiple inputs/outputs thanks to field names,
+  * For example, see ``fs/ext4/inode-test.c``.
 
-  * E.g. see ``fs/ext4/inode-test.c`` for an example of both.
-* reduce duplication if test cases can be shared across multiple tests.
+* reduce duplication if test cases are shared across multiple tests.
 
-  * E.g. if we wanted to also test ``sha256sum``, we could add a ``sha256``
+  * For example, if we also want to test ``sha256sum``, we could add a ``sha256``
     field and reuse ``cases``.
 
-* be converted to a "parameterized test", see below.
+* be converted to a "parameterized test".
 
 Parameterized Testing
 ~~~~~~~~~~~~~~~~~~~~~
@@ -531,7 +492,7 @@ Parameterized Testing
 The table-driven testing pattern is common enough that KUnit has special
 support for it.
 
-Reusing the same ``cases`` array from above, we can write the test as a
+By reusing the same ``cases`` array from above, we can write the test as a
+"parameterized test" as follows:
 
 .. code-block:: c
@@ -582,193 +543,152 @@ Reusing the same ``cases`` array from above, we can write the test as a
 
 .. _kunit-on-non-uml:
 
-KUnit on non-UML architectures
-==============================
-
-By default KUnit uses UML as a way to provide dependencies for code under test.
-Under most circumstances KUnit's usage of UML should be treated as an
-implementation detail of how KUnit works under the hood. Nevertheless, there
-are instances where being able to run architecture-specific code or test
-against real hardware is desirable. For these reasons KUnit supports running on
-other architectures.
-
-Running existing KUnit tests on non-UML architectures
------------------------------------------------------
+Exiting Early on Failed Expectations
+------------------------------------
 
-There are some special considerations when running existing KUnit tests on
-non-UML architectures:
+We can use ``KUNIT_EXPECT_EQ`` to mark the test as failed and continue
+execution. In some cases, it is unsafe to continue, and we can use a
+``KUNIT_ASSERT`` variant to exit the test early on failure.
 
-*   Hardware may not be deterministic, so a test that always passes or fails
-    when run under UML may not always do so on real hardware.
-*   Hardware and VM environments may not be hermetic. KUnit tries its best to
-    provide a hermetic environment to run tests; however, it cannot manage state
-    that it doesn't know about outside of the kernel. Consequently, tests that
-    may be hermetic on UML may not be hermetic on other architectures.
-*   Some features and tooling may not be supported outside of UML.
-*   Hardware and VMs are slower than UML.
+.. code-block:: c
 
-None of these are reasons not to run your KUnit tests on real hardware; they are
-only things to be aware of when doing so.
+	void example_test_user_alloc_function(struct kunit *test)
+	{
+		void *object = alloc_some_object_for_me();
 
-Currently, the KUnit Wrapper (``tools/testing/kunit/kunit.py``) (aka
-kunit_tool) only fully supports running tests inside of UML and QEMU; however,
-this is only due to our own time limitations as humans working on KUnit. It is
-entirely possible to support other emulators and even actual hardware, but for
-now QEMU and UML is what is fully supported within the KUnit Wrapper. Again, to
-be clear, this is just the Wrapper. The actualy KUnit tests and the KUnit
-library they are written in is fully architecture agnostic and can be used in
-virtually any setup, you just won't have the benefit of typing a single command
-out of the box and having everything magically work perfectly.
+		/* Make sure we got a valid pointer back. */
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, object);
+		do_something_with_object(object);
+	}
 
-Again, all core KUnit framework features are fully supported on all
-architectures, and using them is straightforward: Most popular architectures
-are supported directly in the KUnit Wrapper via QEMU. Currently, supported
-architectures on QEMU include:
+Allocating Memory
+-----------------
 
-*   i386
-*   x86_64
-*   arm
-*   arm64
-*   alpha
-*   powerpc
-*   riscv
-*   s390
-*   sparc
+Where we might use ``kzalloc``, we should prefer ``kunit_kzalloc``, because
+KUnit will ensure that the memory is freed once the test completes.
 
-In order to run KUnit tests on one of these architectures via QEMU with the
-KUnit wrapper, all you need to do is specify the flags ``--arch`` and
-``--cross_compile`` when invoking the KUnit Wrapper. For example, we could run
-the default KUnit tests on ARM in the following manner (assuming we have an ARM
-toolchain installed):
+This is useful because it lets us use the ``KUNIT_ASSERT`` macros to exit
+early from a test without having to worry about remembering to call ``kfree``.
+For example:
 
-.. code-block:: bash
+.. code-block:: c
 
-	tools/testing/kunit/kunit.py run --timeout=60 --jobs=12 --arch=arm --cross_compile=arm-linux-gnueabihf-
+	void example_test_allocation(struct kunit *test)
+	{
+		char *buffer = kunit_kzalloc(test, 16, GFP_KERNEL);
+		/* Ensure allocation succeeded. */
+		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, buffer);
 
-Alternatively, if you want to run your tests on real hardware or in some other
-emulation environment, all you need to do is to take your kunitconfig, your
-Kconfig options for the tests you would like to run, and merge them into
-whatever config your are using for your platform. That's it!
+		KUNIT_ASSERT_STREQ(test, buffer, "");
+	}
 
-For example, let's say you have the following kunitconfig:
 
-.. code-block:: none
+Testing Static Functions
+------------------------
 
-	CONFIG_KUNIT=y
-	CONFIG_KUNIT_EXAMPLE_TEST=y
+If we do not want to expose functions or variables for testing, one option is
+to conditionally ``#include`` the test file at the end of the .c file. For
+example:
 
-If you wanted to run this test on an x86 VM, you might add the following config
-options to your ``.config``:
+.. code-block:: c
 
-.. code-block:: none
+	/* In my_file.c */
 
-	CONFIG_KUNIT=y
-	CONFIG_KUNIT_EXAMPLE_TEST=y
-	CONFIG_SERIAL_8250=y
-	CONFIG_SERIAL_8250_CONSOLE=y
+	static int do_interesting_thing(void);
 
-All these new options do is enable support for a common serial console needed
-for logging.
+	#ifdef CONFIG_MY_KUNIT_TEST
+	#include "my_kunit_test.c"
+	#endif
 
-Next, you could build a kernel with these tests as follows:
+Injecting Test-Only Code
+------------------------
 
+Similarly to the above, we can add test-specific logic. For example:
 
-.. code-block:: bash
+.. code-block:: c
 
-	make ARCH=x86 olddefconfig
-	make ARCH=x86
+	/* In my_file.h */
 
-Once you have built a kernel, you could run it on QEMU as follows:
+	#ifdef CONFIG_MY_KUNIT_TEST
+	/* Defined in my_kunit_test.c */
+	void test_only_hook(void);
+	#else
+	static inline void test_only_hook(void) { }
+	#endif
 
-.. code-block:: bash
+This test-only code can be made more useful by accessing the currently running
+KUnit test, as shown in the next section: *Accessing The Current Test*.
 
-	qemu-system-x86_64 -enable-kvm \
-			   -m 1024 \
-			   -kernel arch/x86_64/boot/bzImage \
-			   -append 'console=ttyS0' \
-			   --nographic
+Accessing The Current Test
+--------------------------
 
-Interspersed in the kernel logs you might see the following:
+In some cases, we need to call test-only code from outside the test file; for
+example, when providing a fake implementation of an ops struct, as in
+*Injecting Test-Only Code* above. We can access the currently running KUnit
+test through the ``kunit_test`` field in ``task_struct``, that is, via
+``current->kunit_test``.
 
-.. code-block:: none
+The example below shows how to implement basic "mocking":
 
-	TAP version 14
-		# Subtest: example
-		1..1
-		# example_simple_test: initializing
-		ok 1 - example_simple_test
-	ok 1 - example
+.. code-block:: c
 
-Congratulations, you just ran a KUnit test on the x86 architecture!
+	#include <linux/sched.h> /* for current */
 
-In a similar manner, kunit and kunit tests can also be built as modules,
-so if you wanted to run tests in this way you might add the following config
-options to your ``.config``:
+	struct test_data {
+		int foo_result;
+		int want_foo_called_with;
+	};
 
-.. code-block:: none
+	static int fake_foo(int arg)
+	{
+		struct kunit *test = current->kunit_test;
+		struct test_data *test_data = test->priv;
 
-	CONFIG_KUNIT=m
-	CONFIG_KUNIT_EXAMPLE_TEST=m
+		KUNIT_EXPECT_EQ(test, test_data->want_foo_called_with, arg);
+		return test_data->foo_result;
+	}
 
-Once the kernel is built and installed, a simple
+	static void example_simple_test(struct kunit *test)
+	{
+		/* Assume priv is allocated in the suite's .init */
+		struct test_data *test_data = test->priv;
 
-.. code-block:: bash
+		test_data->foo_result = 42;
+		test_data->want_foo_called_with = 1;
 
-	modprobe example-test
+		/* In a real test, we'd probably pass a pointer to fake_foo somewhere
+		 * like an ops struct, etc. instead of calling it directly. */
+		KUNIT_EXPECT_EQ(test, fake_foo(1), 42);
+	}
 
-...will run the tests.
 
-.. note::
-   Note that you should make sure your test depends on ``KUNIT=y`` in Kconfig
-   if the test does not support module build.  Otherwise, it will trigger
-   compile errors if ``CONFIG_KUNIT`` is ``m``.
+Note: here we are able to get away with using ``test->priv``, but if we want
+something more flexible, we could use a named ``kunit_resource``; see
+Documentation/dev-tools/kunit/api/test.rst.
 
-Writing new tests for other architectures
------------------------------------------
+Failing The Current Test
+------------------------
 
-The first thing you must do is ask yourself whether it is necessary to write a
-KUnit test for a specific architecture, and then whether it is necessary to
-write that test for a particular piece of hardware. In general, writing a test
-that depends on having access to a particular piece of hardware or software (not
-included in the Linux source repo) should be avoided at all costs.
+If we want to fail the current test, we can use
+``kunit_fail_current_test(fmt, args...)``, which is defined in
+``<kunit/test-bug.h>`` and does not require pulling in ``<kunit/test.h>``.
+For example, suppose we have an option to enable some extra debug checks on
+some data structures, as shown below:
 
-Even if you only ever plan on running your KUnit test on your hardware
-configuration, other people may want to run your tests and may not have access
-to your hardware. If you write your test to run on UML, then anyone can run your
-tests without knowing anything about your particular setup, and you can still
-run your tests on your hardware setup just by compiling for your architecture.
+.. code-block:: c
 
-.. important::
-   Always prefer tests that run on UML to tests that only run under a particular
-   architecture, and always prefer tests that run under QEMU or another easy
-   (and monetarily free) to obtain software environment to a specific piece of
-   hardware.
-
-Nevertheless, there are still valid reasons to write an architecture or hardware
-specific test: for example, you might want to test some code that really belongs
-in ``arch/some-arch/*``. Even so, try your best to write the test so that it
-does not depend on physical hardware: if some of your test cases don't need the
-hardware, only require the hardware for tests that actually need it.
-
-Now that you have narrowed down exactly what bits are hardware specific, the
-actual procedure for writing and running the tests is pretty much the same as
-writing normal KUnit tests. One special caveat is that you have to reset
-hardware state in between test cases; if this is not possible, you may only be
-able to run one test case per invocation.
+	#include <kunit/test-bug.h>
 
-.. TODO(brendanhiggins@google.com): Add an actual example of an architecture-
-   dependent KUnit test.
+	#ifdef CONFIG_EXTRA_DEBUG_CHECKS
+	static void validate_my_data(struct data *data)
+	{
+		if (is_valid(data))
+			return;
 
-KUnit debugfs representation
-============================
-When kunit test suites are initialized, they create an associated directory
-in ``/sys/kernel/debug/kunit/<test-suite>``.  The directory contains one file
+		kunit_fail_current_test("data %p is invalid", data);
 
-- results: "cat results" displays results of each test case and the results
-  of the entire suite for the last test run.
+		/* Normal, non-KUnit, error reporting code here. */
+	}
+	#else
+	static inline void validate_my_data(struct data *data) { }
+	#endif
 
-The debugfs representation is primarily of use when kunit test suites are
-run in a native environment, either as modules or builtin.  Having a way
-to display results like this is valuable as otherwise results can be
-intermixed with other events in dmesg output.  The maximum size of each
-results file is KUNIT_LOG_SIZE bytes (defined in ``include/kunit/test.h``).
-- 
2.34.1.400.ga245620fadb-goog


^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH v2 6/7] Documentation: KUnit: Restyle Test Style and Nomenclature page
  2021-12-07  5:40 [PATCH v2 0/7] Documentation: KUnit: Rework KUnit documentation Harinder Singh
                   ` (4 preceding siblings ...)
  2021-12-07  5:40 ` [PATCH v2 5/7] Documentation: KUnit: Rework writing page to focus on writing tests Harinder Singh
@ 2021-12-07  5:40 ` Harinder Singh
  2021-12-07 18:46   ` Tim.Bird
  2021-12-07  5:40 ` [PATCH v2 7/7] Documentation: KUnit: Restyled Frequently Asked Questions Harinder Singh
  6 siblings, 1 reply; 22+ messages in thread
From: Harinder Singh @ 2021-12-07  5:40 UTC (permalink / raw)
  To: davidgow, brendanhiggins, shuah, corbet
  Cc: linux-kselftest, kunit-dev, linux-doc, linux-kernel, tim.bird,
	Harinder Singh

Rewrite page to enhance content consistency.

Signed-off-by: Harinder Singh <sharinder@google.com>
---
 Documentation/dev-tools/kunit/style.rst | 101 ++++++++++++------------
 1 file changed, 49 insertions(+), 52 deletions(-)

diff --git a/Documentation/dev-tools/kunit/style.rst b/Documentation/dev-tools/kunit/style.rst
index 8dbcdc552606..8fae192cae28 100644
--- a/Documentation/dev-tools/kunit/style.rst
+++ b/Documentation/dev-tools/kunit/style.rst
@@ -4,37 +4,36 @@
 Test Style and Nomenclature
 ===========================
 
-To make finding, writing, and using KUnit tests as simple as possible, it's
+To make finding, writing, and using KUnit tests as simple as possible, it is
 strongly encouraged that they are named and written according to the guidelines
-below. While it's possible to write KUnit tests which do not follow these rules,
+below. While it is possible to write KUnit tests which do not follow these rules,
 they may break some tooling, may conflict with other tests, and may not be run
 automatically by testing systems.
 
-It's recommended that you only deviate from these guidelines when:
+It is recommended that you only deviate from these guidelines when:
 
-1. Porting tests to KUnit which are already known with an existing name, or
-2. Writing tests which would cause serious problems if automatically run (e.g.,
-   non-deterministically producing false positives or negatives, or taking an
-   extremely long time to run).
+1. Porting tests to KUnit which are already known under an existing name.
+2. Writing tests which would cause serious problems if automatically run. For
+   example, non-deterministically producing false positives or negatives, or
+   taking a long time to run.
 
 Subsystems, Suites, and Tests
 =============================
 
-In order to make tests as easy to find as possible, they're grouped into suites
-and subsystems. A test suite is a group of tests which test a related area of
-the kernel, and a subsystem is a set of test suites which test different parts
-of the same kernel subsystem or driver.
+To make tests easy to find, they are grouped into suites and subsystems. A test
+suite is a group of tests which test a related area of the kernel. A subsystem
+is a set of test suites which test different parts of a kernel subsystem
+or a driver.
 
 Subsystems
 ----------
 
 Every test suite must belong to a subsystem. A subsystem is a collection of one
 or more KUnit test suites which test the same driver or part of the kernel. A
-rule of thumb is that a test subsystem should match a single kernel module. If
-the code being tested can't be compiled as a module, in many cases the subsystem
-should correspond to a directory in the source tree or an entry in the
-MAINTAINERS file. If unsure, follow the conventions set by tests in similar
-areas.
+rule of thumb is that a test subsystem should match a single kernel module. If
+the code being tested cannot be compiled as a module, in many cases the
+subsystem should correspond to a directory in the source tree or an entry in
+the ``MAINTAINERS`` file. If unsure, follow the conventions set by tests in
+similar areas.
 
 Test subsystems should be named after the code being tested, either after the
 module (wherever possible), or after the directory or files being tested. Test
@@ -42,9 +41,8 @@ subsystems should be named to avoid ambiguity where necessary.
 
 If a test subsystem name has multiple components, they should be separated by
 underscores. *Do not* include "test" or "kunit" directly in the subsystem name
-unless you are actually testing other tests or the kunit framework itself.
-
-Example subsystems could be:
+unless we are actually testing other tests or the kunit framework itself. For
+example, subsystems could be called:
 
 ``ext4``
   Matches the module and filesystem name.
@@ -56,13 +54,13 @@ Example subsystems could be:
   Has several components (``snd``, ``hda``, ``codec``, ``hdmi``) separated by
   underscores. Matches the module name.
 
-Avoid names like these:
+Avoid names such as these:
 
 ``linear-ranges``
   Names should use underscores, not dashes, to separate words. Prefer
   ``linear_ranges``.
 ``qos-kunit-test``
-  As well as using underscores, this name should not have "kunit-test" as a
+  This name should use underscores, should not have "kunit-test" as a
   suffix, and ``qos`` is ambiguous as a subsystem name. ``power_qos`` would be a
   better name.
 ``pc_parallel_port``
@@ -70,34 +68,32 @@ Avoid names like these:
   be named ``parport_pc``.
 
 .. note::
-        The KUnit API and tools do not explicitly know about subsystems. They're
-        simply a way of categorising test suites and naming modules which
-        provides a simple, consistent way for humans to find and run tests. This
-        may change in the future, though.
+        The KUnit API and tools do not explicitly know about subsystems. They are
+        a way of categorising test suites and naming modules which provides a
+        simple, consistent way for humans to find and run tests. This may change
+        in the future.
 
 Suites
 ------
 
 KUnit tests are grouped into test suites, which cover a specific area of
 functionality being tested. Test suites can have shared initialisation and
-shutdown code which is run for all tests in the suite.
-Not all subsystems will need to be split into multiple test suites (e.g. simple drivers).
+shutdown code which is run for all tests in the suite. Not all subsystems need
+to be split into multiple test suites (for example, simple drivers).
 
 Test suites are named after the subsystem they are part of. If a subsystem
 contains several suites, the specific area under test should be appended to the
 subsystem name, separated by an underscore.
 
 In the event that there are multiple types of test using KUnit within a
-subsystem (e.g., both unit tests and integration tests), they should be put into
-separate suites, with the type of test as the last element in the suite name.
-Unless these tests are actually present, avoid using ``_test``, ``_unittest`` or
-similar in the suite name.
+subsystem (for example, both unit tests and integration tests), they should be
+put into separate suites, with the type of test as the last element in the suite
+name. Unless these tests are actually present, avoid using ``_test``, ``_unittest``
+or similar in the suite name.
 
 The full test suite name (including the subsystem name) should be specified as
 the ``.name`` member of the ``kunit_suite`` struct, and forms the base for the
-module name (see below).
-
-Example test suites could include:
+module name. For example, test suites could include:
 
 ``ext4_inode``
   Part of the ``ext4`` subsystem, testing the ``inode`` area.
@@ -109,26 +105,27 @@ Example test suites could include:
   The ``kasan`` subsystem has only one suite, so the suite name is the same as
   the subsystem name.
 
-Avoid names like:
+Avoid names such as:
 
 ``ext4_ext4_inode``
-  There's no reason to state the subsystem twice.
+  There is no reason to state the subsystem twice.
 ``property_entry``
   The suite name is ambiguous without the subsystem name.
 ``kasan_integration_test``
   Because there is only one suite in the ``kasan`` subsystem, the suite should
-  just be called ``kasan``. There's no need to redundantly add
-  ``integration_test``. Should a separate test suite with, for example, unit
-  tests be added, then that suite could be named ``kasan_unittest`` or similar.
+  just be called ``kasan``. Do not redundantly add ``integration_test``. If a
+  separate test suite with, for example, unit tests is added, that suite could
+  be named ``kasan_unittest`` or similar.
 
 Test Cases
 ----------
 
 Individual tests consist of a single function which tests a constrained
-codepath, property, or function. In the test output, individual tests' results
-will show up as subtests of the suite's results.
+codepath, property, or function. In the test output, an individual test's
+results will show up as subtests of the suite's results.
 
-Tests should be named after what they're testing. This is often the name of the
+Tests should be named after what they are testing. This is often the name of the
 function being tested, with a description of the input or codepath being tested.
 As tests are C functions, they should be named and written in accordance with
 the kernel coding style.
@@ -136,7 +133,7 @@ the kernel coding style.
 .. note::
         As tests are themselves functions, their names cannot conflict with
         other C identifiers in the kernel. This may require some creative
-        naming. It's a good idea to make your test functions `static` to avoid
+        naming. It is a good idea to make your test functions `static` to avoid
         polluting the global namespace.
 
 Example test names include:
@@ -150,7 +147,7 @@ Example test names include:
 
 Should it be necessary to refer to a test outside the context of its test suite,
 the *fully-qualified* name of a test should be the suite name followed by the
-test name, separated by a colon (i.e. ``suite:test``).
+test name, separated by a colon (``suite:test``).
 
 Test Kconfig Entries
 ====================
@@ -162,16 +159,16 @@ This Kconfig entry must:
 * be named ``CONFIG_<name>_KUNIT_TEST``: where <name> is the name of the test
   suite.
 * be listed either alongside the config entries for the driver/subsystem being
-  tested, or be under [Kernel Hacking]→[Kernel Testing and Coverage]
-* depend on ``CONFIG_KUNIT``
+  tested, or be under [Kernel Hacking]->[Kernel Testing and Coverage]
+* depend on ``CONFIG_KUNIT``.
 * be visible only if ``CONFIG_KUNIT_ALL_TESTS`` is not enabled.
 * have a default value of ``CONFIG_KUNIT_ALL_TESTS``.
-* have a brief description of KUnit in the help text
+* have a brief description of KUnit in the help text.
 
-Unless there's a specific reason not to (e.g. the test is unable to be built as
-a module), Kconfig entries for tests should be tristate.
+Unless there is a specific reason not to (for example, the test is unable to
+be built as a module), Kconfig entries for tests should be tristate.
 
-An example Kconfig entry:
+For example, a Kconfig entry might look like:
 
 .. code-block:: none
 
@@ -182,8 +179,8 @@ An example Kconfig entry:
 		help
 		  This builds unit tests for foo.
 
-		  For more information on KUnit and unit tests in general, please refer
-		  to the KUnit documentation in Documentation/dev-tools/kunit/.
+		  For more information on KUnit and unit tests in general,
+		  please refer to the KUnit documentation in Documentation/dev-tools/kunit/.
 
 		  If unsure, say N.
 
-- 
2.34.1.400.ga245620fadb-goog


^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH v2 7/7] Documentation: KUnit: Restyled Frequently Asked Questions
  2021-12-07  5:40 [PATCH v2 0/7] Documentation: KUnit: Rework KUnit documentation Harinder Singh
                   ` (5 preceding siblings ...)
  2021-12-07  5:40 ` [PATCH v2 6/7] Documentation: KUnit: Restyle Test Style and Nomenclature page Harinder Singh
@ 2021-12-07  5:40 ` Harinder Singh
  6 siblings, 0 replies; 22+ messages in thread
From: Harinder Singh @ 2021-12-07  5:40 UTC (permalink / raw)
  To: davidgow, brendanhiggins, shuah, corbet
  Cc: linux-kselftest, kunit-dev, linux-doc, linux-kernel, tim.bird,
	Harinder Singh

Reword to align with other chapters.

Signed-off-by: Harinder Singh <sharinder@google.com>
---
 Documentation/dev-tools/kunit/faq.rst | 73 +++++++++++++--------------
 1 file changed, 36 insertions(+), 37 deletions(-)

diff --git a/Documentation/dev-tools/kunit/faq.rst b/Documentation/dev-tools/kunit/faq.rst
index 5c6555d020f3..172e239791a8 100644
--- a/Documentation/dev-tools/kunit/faq.rst
+++ b/Documentation/dev-tools/kunit/faq.rst
@@ -4,56 +4,55 @@
 Frequently Asked Questions
 ==========================
 
-How is this different from Autotest, kselftest, etc?
-====================================================
+How is this different from Autotest, kselftest, and so on?
+==========================================================
 KUnit is a unit testing framework. Autotest, kselftest (and some others) are
 not.
 
 A `unit test <https://martinfowler.com/bliki/UnitTest.html>`_ is supposed to
-test a single unit of code in isolation, hence the name. A unit test should be
-the finest granularity of testing and as such should allow all possible code
-paths to be tested in the code under test; this is only possible if the code
-under test is very small and does not have any external dependencies outside of
+test a single unit of code in isolation, hence the name *unit test*. A unit
+test should be the finest granularity of testing and should allow all possible
+code paths to be tested in the code under test. This is only possible if the
+code under test is small and does not have any external dependencies outside of
 the test's control like hardware.
 
 There are no testing frameworks currently available for the kernel that do not
-require installing the kernel on a test machine or in a VM and all require
-tests to be written in userspace and run on the kernel under test; this is true
-for Autotest, kselftest, and some others, disqualifying any of them from being
-considered unit testing frameworks.
+require installing the kernel on a test machine or in a virtual machine. All
+testing frameworks require tests to be written in userspace and run on the
+kernel under test. This is true for Autotest, kselftest, and some others,
+disqualifying any of them from being considered unit testing frameworks.
 
 Does KUnit support running on architectures other than UML?
 ===========================================================
 
-Yes, well, mostly.
+Yes, mostly.
 
-For the most part, the KUnit core framework (what you use to write the tests)
-can compile to any architecture; it compiles like just another part of the
+For the most part, the KUnit core framework (what we use to write the tests)
+can compile to any architecture. It compiles like just another part of the
 kernel and runs when the kernel boots, or when built as a module, when the
-module is loaded.  However, there is some infrastructure,
-like the KUnit Wrapper (``tools/testing/kunit/kunit.py``) that does not support
-other architectures.
+module is loaded.  However, there is infrastructure, like the KUnit Wrapper
+(``tools/testing/kunit/kunit.py``), that does not support other architectures.
 
-In short, this means that, yes, you can run KUnit on other architectures, but
-it might require more work than using KUnit on UML.
+In short, yes, you can run KUnit on other architectures, but it might require
+more work than using KUnit on UML.
 
 For more information, see :ref:`kunit-on-non-uml`.
 
-What is the difference between a unit test and these other kinds of tests?
-==========================================================================
+What is the difference between a unit test and other kinds of tests?
+====================================================================
 Most existing tests for the Linux kernel would be categorized as an integration
 test, or an end-to-end test.
 
-- A unit test is supposed to test a single unit of code in isolation, hence the
-  name. A unit test should be the finest granularity of testing and as such
-  should allow all possible code paths to be tested in the code under test; this
-  is only possible if the code under test is very small and does not have any
-  external dependencies outside of the test's control like hardware.
+- A unit test is supposed to test a single unit of code in isolation. A unit
+  test should be the finest granularity of testing and, as such, allows all
+  possible code paths to be tested in the code under test. This is only possible
+  if the code under test is small and does not have any external dependencies
+  outside of the test's control like hardware.
 - An integration test tests the interaction between a minimal set of components,
   usually just two or three. For example, someone might write an integration
   test to test the interaction between a driver and a piece of hardware, or to
   test the interaction between the userspace libraries the kernel provides and
-  the kernel itself; however, one of these tests would probably not test the
+  the kernel itself. However, one of these tests would probably not test the
   entire kernel along with hardware interactions and interactions with the
   userspace.
 - An end-to-end test usually tests the entire system from the perspective of the
@@ -62,26 +61,26 @@ test, or an end-to-end test.
   hardware with a production userspace and then trying to exercise some behavior
   that depends on interactions between the hardware, the kernel, and userspace.
 
-KUnit isn't working, what should I do?
-======================================
+KUnit is not working, what should I do?
+=======================================
 
 Unfortunately, there are a number of things which can break, but here are some
 things to try.
 
-1. Try running ``./tools/testing/kunit/kunit.py run`` with the ``--raw_output``
+1. Run ``./tools/testing/kunit/kunit.py run`` with the ``--raw_output``
    parameter. This might show details or error messages hidden by the kunit_tool
    parser.
 2. Instead of running ``kunit.py run``, try running ``kunit.py config``,
    ``kunit.py build``, and ``kunit.py exec`` independently. This can help track
    down where an issue is occurring. (If you think the parser is at fault, you
-   can run it manually against stdin or a file with ``kunit.py parse``.)
-3. Running the UML kernel directly can often reveal issues or error messages
-   kunit_tool ignores. This should be as simple as running ``./vmlinux`` after
-   building the UML kernel (e.g., by using ``kunit.py build``). Note that UML
-   has some unusual requirements (such as the host having a tmpfs filesystem
-   mounted), and has had issues in the past when built statically and the host
-   has KASLR enabled. (On older host kernels, you may need to run ``setarch
-   `uname -m` -R ./vmlinux`` to disable KASLR.)
+   can run it manually against ``stdin`` or a file with ``kunit.py parse``.)
+3. Running the UML kernel directly can often reveal issues or error messages
+   that ``kunit_tool`` ignores. This should be as simple as running ``./vmlinux``
+   after building the UML kernel (for example, by using ``kunit.py build``).
+   Note that UML has some unusual requirements (such as the host having a tmpfs
+   filesystem mounted), and has had issues in the past when built statically and
+   the host has KASLR enabled. (On older host kernels, you may need to run
+   ``setarch `uname -m` -R ./vmlinux`` to disable KASLR.)
 4. Make sure the kernel .config has ``CONFIG_KUNIT=y`` and at least one test
    (e.g. ``CONFIG_KUNIT_EXAMPLE_TEST=y``). kunit_tool will keep its .config
    around, so you can see what config was used after running ``kunit.py run``.
-- 
2.34.1.400.ga245620fadb-goog


^ permalink raw reply	[flat|nested] 22+ messages in thread

* RE: [PATCH v2 1/7] Documentation: KUnit: Rewrite main page
  2021-12-07  5:40 ` [PATCH v2 1/7] Documentation: KUnit: Rewrite main page Harinder Singh
@ 2021-12-07 17:11   ` Tim.Bird
  2021-12-10  5:30     ` Harinder Singh
  0 siblings, 1 reply; 22+ messages in thread
From: Tim.Bird @ 2021-12-07 17:11 UTC (permalink / raw)
  To: sharinder, davidgow, brendanhiggins, shuah, corbet
  Cc: linux-kselftest, kunit-dev, linux-doc, linux-kernel

See one additional suggestion below.
 -- Tim


> -----Original Message-----
> From: Harinder Singh <sharinder@google.com>
> 
> Add a section on advantages of unit testing, how to write unit tests,
> KUnit features and Prerequisites.
> 
> Signed-off-by: Harinder Singh <sharinder@google.com>
> ---
>  Documentation/dev-tools/kunit/index.rst | 166 +++++++++++++-----------
>  1 file changed, 88 insertions(+), 78 deletions(-)
> 
> diff --git a/Documentation/dev-tools/kunit/index.rst b/Documentation/dev-tools/kunit/index.rst
> index cacb35ec658d..ebf4bffaa1ca 100644
> --- a/Documentation/dev-tools/kunit/index.rst
> +++ b/Documentation/dev-tools/kunit/index.rst
> @@ -1,11 +1,12 @@
>  .. SPDX-License-Identifier: GPL-2.0
> 
> -=========================================
> -KUnit - Unit Testing for the Linux Kernel
> -=========================================
> +=================================
> +KUnit - Linux Kernel Unit Testing
> +=================================
> 
>  .. toctree::
>  	:maxdepth: 2
> +	:caption: Contents:
> 
>  	start
>  	usage
> @@ -16,82 +17,91 @@ KUnit - Unit Testing for the Linux Kernel
>  	tips
>  	running_tips
> 
> -What is KUnit?
> -==============
> -
> -KUnit is a lightweight unit testing and mocking framework for the Linux kernel.
> -
> -KUnit is heavily inspired by JUnit, Python's unittest.mock, and
> -Googletest/Googlemock for C++. KUnit provides facilities for defining unit test
> -cases, grouping related test cases into test suites, providing common
> -infrastructure for running tests, and much more.
> -
> -KUnit consists of a kernel component, which provides a set of macros for easily
> -writing unit tests. Tests written against KUnit will run on kernel boot if
> -built-in, or when loaded if built as a module. These tests write out results to
> -the kernel log in `TAP <https://testanything.org/>`_ format.
> -
> -To make running these tests (and reading the results) easier, KUnit offers
> -:doc:`kunit_tool <kunit-tool>`, which builds a `User Mode Linux
> -<http://user-mode-linux.sourceforge.net>`_ kernel, runs it, and parses the test
> -results. This provides a quick way of running KUnit tests during development,
> -without requiring a virtual machine or separate hardware.
> -
> -Get started now: Documentation/dev-tools/kunit/start.rst
> -
> -Why KUnit?
> -==========
> -
> -A unit test is supposed to test a single unit of code in isolation, hence the
> -name. A unit test should be the finest granularity of testing and as such should
> -allow all possible code paths to be tested in the code under test; this is only
> -possible if the code under test is very small and does not have any external
> -dependencies outside of the test's control like hardware.
> -
> -KUnit provides a common framework for unit tests within the kernel.
> -
> -KUnit tests can be run on most architectures, and most tests are architecture
> -independent. All built-in KUnit tests run on kernel startup.  Alternatively,
> -KUnit and KUnit tests can be built as modules and tests will run when the test
> -module is loaded.
> -
> -.. note::
> -
> -        KUnit can also run tests without needing a virtual machine or actual
> -        hardware under User Mode Linux. User Mode Linux is a Linux architecture,
> -        like ARM or x86, which compiles the kernel as a Linux executable. KUnit
> -        can be used with UML either by building with ``ARCH=um`` (like any other
> -        architecture), or by using :doc:`kunit_tool <kunit-tool>`.
> -
> -KUnit is fast. Excluding build time, from invocation to completion KUnit can run
> -several dozen tests in only 10 to 20 seconds; this might not sound like a big
> -deal to some people, but having such fast and easy to run tests fundamentally
> -changes the way you go about testing and even writing code in the first place.
> -Linus himself said in his `git talk at Google
> -<https://gist.github.com/lorn/1272686/revisions#diff-53c65572127855f1b003db4064a94573R874>`_:
> -
> -	"... a lot of people seem to think that performance is about doing the
> -	same thing, just doing it faster, and that is not true. That is not what
> -	performance is all about. If you can do something really fast, really
> -	well, people will start using it differently."
> -
> -In this context Linus was talking about branching and merging,
> -but this point also applies to testing. If your tests are slow, unreliable, are
> -difficult to write, and require a special setup or special hardware to run,
> -then you wait a lot longer to write tests, and you wait a lot longer to run
> -tests; this means that tests are likely to break, unlikely to test a lot of
> -things, and are unlikely to be rerun once they pass. If your tests are really
> -fast, you run them all the time, every time you make a change, and every time
> -someone sends you some code. Why trust that someone ran all their tests
> -correctly on every change when you can just run them yourself in less time than
> -it takes to read their test log?
> +This section details the kernel unit testing framework.
> +
> +Introduction
> +============
> +
> +KUnit (Kernel unit testing framework) provides a common framework for
> +unit tests within the Linux kernel. Using KUnit, you can define groups
> +of test cases called test suites. The tests either run on kernel boot
> +if built-in, or load as a module. KUnit automatically flags and reports
> +failed test cases in the kernel log. The test results appear in `TAP
> +(Test Anything Protocol) format <https://testanything.org/>`_. It is inspired by
> +JUnit, Python’s unittest.mock, and GoogleTest/GoogleMock (C++ unit testing
> +framework).
> +
> +KUnit tests are part of the kernel, written in the C (programming)
> +language, and test parts of the Kernel implementation (example: a C
> +language function). Excluding build time, from invocation to
> +completion, KUnit can run around 100 tests in less than 10 seconds.
> +KUnit can test any kernel component, for example: file system, system
> +calls, memory management, device drivers and so on.
> +
> +KUnit follows the white-box testing approach. The test has access to
> +internal system functionality. KUnit runs in kernel space and is not
> +restricted to things exposed to user-space.
> +
> +In addition, KUnit has kunit_tool, a script (``tools/testing/kunit/kunit.py``)
> +that configures the Linux kernel, runs KUnit tests under QEMU or UML (`User Mode
> +Linux <http://user-mode-linux.sourceforge.net/>`_), parses the test results and
> +displays them in a user friendly manner.
> +
> +Features
> +--------
> +
> +- Provides a framework for writing unit tests.
> +- Runs tests on any kernel architecture.
> +- Runs a test in milliseconds.
> +
> +Prerequisites
> +-------------
> +
> +- Any Linux kernel compatible hardware.
> +- For Kernel under test, Linux kernel version 5.5 or greater.
> +
> +Unit Testing
> +============
> +
> +A unit test tests a single unit of code in isolation. A unit test is the finest
> +granularity of testing and allows all possible code paths to be tested in the
> +code under test. This is possible if the code under test is small and does not
> +have any external dependencies outside of the test's control like hardware.
> +
> +
> +Write Unit Tests
> +----------------
> +
> +To write good unit tests, there is a simple but powerful pattern:
> +Arrange-Act-Assert. This is a great way to structure test cases and
> +defines an order of operations.
> +
> +- Arrange inputs and targets: At the start of the test, arrange the data
> +  that allows a function to work. Example: initialize a statement or
> +  object.
> +- Act on the target behavior: Call your function/code under test.
> +- Assert expected outcome: Verify the result (or resulting state) as expected
> +  or not.

Verify the result (or resulting state) as expected or not ->
   Verify that the result (or resulting state) is as expected or not


> +
> +Unit Testing Advantages
> +-----------------------
> +
> +- Increases testing speed and development in the long run.
> +- Detects bugs at initial stage and therefore decreases bug fix cost
> +  compared to acceptance testing.
> +- Improves code quality.
> +- Encourages writing testable code.
> 
>  How do I use it?
>  ================
> 
> -*   Documentation/dev-tools/kunit/start.rst - for new users of KUnit
> -*   Documentation/dev-tools/kunit/tips.rst - for short examples of best practices
> -*   Documentation/dev-tools/kunit/usage.rst - for a more detailed explanation of KUnit features
> -*   Documentation/dev-tools/kunit/api/index.rst - for the list of KUnit APIs used for testing
> -*   Documentation/dev-tools/kunit/kunit-tool.rst - for more information on the kunit_tool helper script
> -*   Documentation/dev-tools/kunit/faq.rst - for answers to some common questions about KUnit
> +*   Documentation/dev-tools/kunit/start.rst - for KUnit new users.
> +*   Documentation/dev-tools/kunit/usage.rst - KUnit features.
> +*   Documentation/dev-tools/kunit/tips.rst - best practices with
> +    examples.
> +*   Documentation/dev-tools/kunit/api/index.rst - KUnit APIs
> +    used for testing.
> +*   Documentation/dev-tools/kunit/kunit-tool.rst - kunit_tool helper
> +    script.
> +*   Documentation/dev-tools/kunit/faq.rst - KUnit common questions and
> +    answers.
> --
> 2.34.1.400.ga245620fadb-goog


^ permalink raw reply	[flat|nested] 22+ messages in thread

* RE: [PATCH v2 3/7] Documentation: KUnit: Added KUnit Architecture
  2021-12-07  5:40 ` [PATCH v2 3/7] Documentation: KUnit: Added KUnit Architecture Harinder Singh
@ 2021-12-07 17:24   ` Tim.Bird
  2021-12-10  5:31     ` Harinder Singh
  2021-12-10 23:08   ` Marco Elver
  1 sibling, 1 reply; 22+ messages in thread
From: Tim.Bird @ 2021-12-07 17:24 UTC (permalink / raw)
  To: sharinder, davidgow, brendanhiggins, shuah, corbet
  Cc: linux-kselftest, kunit-dev, linux-doc, linux-kernel

> -----Original Message-----
> From: Harinder Singh <sharinder@google.com>
> 
> Describe the components of KUnit and how the kernel mode parts
> interact with kunit_tool.
> 
> Signed-off-by: Harinder Singh <sharinder@google.com>
> ---
>  .../dev-tools/kunit/architecture.rst          | 206 ++++++++++++++++++
>  Documentation/dev-tools/kunit/index.rst       |   2 +
>  .../kunit/kunit_suitememorydiagram.png        | Bin 0 -> 24174 bytes
>  Documentation/dev-tools/kunit/start.rst       |   1 +
>  4 files changed, 209 insertions(+)
>  create mode 100644 Documentation/dev-tools/kunit/architecture.rst
>  create mode 100644 Documentation/dev-tools/kunit/kunit_suitememorydiagram.png
> 
> diff --git a/Documentation/dev-tools/kunit/architecture.rst b/Documentation/dev-tools/kunit/architecture.rst
> new file mode 100644
> index 000000000000..bb0fb3e3ed01
> --- /dev/null
> +++ b/Documentation/dev-tools/kunit/architecture.rst
> @@ -0,0 +1,206 @@
> +.. SPDX-License-Identifier: GPL-2.0
> +
> +==================
> +KUnit Architecture
> +==================
> +
> +The KUnit architecture can be divided into two parts:
> +
> +- Kernel testing library
> +- kunit_tool (Command line test harness)
> +
> +In-Kernel Testing Framework
> +===========================
> +
> +The kernel testing library supports KUnit tests written in C using
> +KUnit. KUnit tests are kernel code. KUnit does several things:
> +
> +- Organizes tests
> +- Reports test results
> +- Provides test utilities
> +
> +Test Cases
> +----------
> +
> +The fundamental unit in KUnit is the test case. The KUnit test cases are
> +grouped into KUnit suites. A KUnit test case is a function with type
> +signature ``void (*)(struct kunit *test)``.
> +These test case functions are wrapped in a struct called
> +``struct kunit_case``. For code, see:
> +https://elixir.bootlin.com/linux/latest/source/include/kunit/test.h#L145
> +
> +It includes:
> +
> +- ``run_case``: the function implementing the actual test case.
> +- ``name``: the test case name.
> +- ``generate_params``: the parameterized tests generator function. This
> +  is optional for non-parameterized tests.
> +
> +Each KUnit test case gets a ``struct kunit`` context
> +object passed to it that tracks a running test. The KUnit assertion
> +macros and other KUnit utilities use the ``struct kunit`` context
> +object. As an exception, there are two fields:
> +
> +- ``->priv``: The setup functions can use it to store arbitrary test
> +  user data.
> +
> +- ``->param_value``: It contains the parameter value which can be
> +  retrieved in the parameterized tests.
> +
> +Test Suites
> +-----------
> +
> +A KUnit suite includes a collection of test cases. The KUnit suites
> +are represented by the ``struct kunit_suite``. For example:
> +
> +.. code-block:: c
> +
> +	static struct kunit_case example_test_cases[] = {
> +		KUNIT_CASE(example_test_foo),
> +		KUNIT_CASE(example_test_bar),
> +		KUNIT_CASE(example_test_baz),
> +		{}
> +	};
> +
> +	static struct kunit_suite example_test_suite = {
> +		.name = "example",
> +		.init = example_test_init,
> +		.exit = example_test_exit,
> +		.test_cases = example_test_cases,
> +	};
> +	kunit_test_suite(example_test_suite);
> +
> +In the above example, the test suite ``example_test_suite``, runs the
> +test cases ``example_test_foo``, ``example_test_bar``, and
> +``example_test_baz``. Before running the test, the ``example_test_init``
> +is called and after running the test, ``example_test_exit`` is called.
> +The ``kunit_test_suite(example_test_suite)`` registers the test suite
> +with the KUnit test framework.
> +
> +Executor
> +--------
> +
> +The KUnit executor can list and run built-in KUnit tests on boot.
> +The Test suites are stored in a linker section
> +called ``.kunit_test_suites``. For code, see:
> +https://elixir.bootlin.com/linux/v5.12/source/include/asm-generic/vmlinux.lds.h#L918.
> +The linker section consists of an array of pointers to
> +``struct kunit_suite``, and is populated by the ``kunit_test_suites()``
> +macro. To run all tests compiled into the kernel, the KUnit executor
> +iterates over the linker section array.
> +
> +.. kernel-figure:: kunit_suitememorydiagram.png
> +	:alt:	KUnit Suite Memory
> +
> +	KUnit Suite Memory Diagram
> +
> +On the kernel boot, the KUnit executor uses the start and end addresses
> +of this section to iterate over and run all tests. For code, see:
> +https://elixir.bootlin.com/linux/latest/source/lib/kunit/executor.c
> +
> +When built as a module, the ``kunit_test_suites()`` macro defines a
> +``module_init()`` function, which runs all the tests in the compilation
> +unit instead of utilizing the executor.
> +
> +So that some classes of errors in KUnit tests do not affect other tests
> +or parts of the kernel, each KUnit case executes in a separate thread
> +context. For code, see:
> +https://elixir.bootlin.com/linux/latest/source/lib/kunit/try-catch.c#L58
> +
> +Assertion Macros
> +----------------
> +
> +KUnit tests verify state using expectations/assertions.
> +All expectations/assertions are formatted as:
> +``KUNIT_{EXPECT|ASSERT}_<op>[_MSG](kunit, property[, message])``
> +
> +- ``{EXPECT|ASSERT}`` determines whether the check is an assertion or an
> +  expectation.
> +
> +	- For an expectation, if the check fails, marks the test as failed
> +	  and logs the failure.
> +
> +	- An assertion, on failure, causes the test case to terminate
> +	  immediately.
> +
> +		- Assertions call function:
> +		  ``void __noreturn kunit_abort(struct kunit *)``.
> +
> +		- ``kunit_abort`` calls function:
> +		  ``void __noreturn kunit_try_catch_throw(struct kunit_try_catch *try_catch)``.
> +
> +		- ``kunit_try_catch_throw`` calls function:
> +		  ``void complete_and_exit(struct completion *, long) __noreturn;``
> +		  and terminates the special thread context.
> +
> +- ``<op>`` denotes a check with options: ``TRUE`` (supplied property
> +  has the boolean value “true”), ``EQ`` (two supplied properties are
> +  equal), ``NOT_ERR_OR_NULL`` (supplied pointer is not null and does not
> +  contain an “err” value).
> +
> +- ``[_MSG]`` prints a custom message on failure.
> +
> +Test Result Reporting
> +---------------------
> +KUnit prints test results in KTAP format. KTAP is based on TAP14, see:
> +https://github.com/isaacs/testanything.github.io/blob/tap14/tap-version-14-specification.md.
> +KTAP (yet to be standardized format) works with KUnit and Kselftest.
> +The KUnit executor prints KTAP results to dmesg, and debugfs
> +(if configured).
> +
> +Parameterized Tests
> +-------------------
> +
> +Each KUnit parameterized test is associated with a collection of
> +parameters. The test is invoked multiple times, once for each parameter
> +value and the parameter is stored in the ``param_value`` field.
> +The test case includes a ``KUNIT_CASE_PARAM()`` macro that accepts a
> +generator function.
> +The generator function returns the next parameter given to the

given to the -> given the

> +previous parameter in parameterized tests. It also provides a macro to
> +generate common-case generators based on arrays.
> +
> +For code, see:
> +https://elixir.bootlin.com/linux/v5.12/source/include/kunit/test.h#L1783

The rest looks OK, as far as I can tell.
 -- Tim


^ permalink raw reply	[flat|nested] 22+ messages in thread

* RE: [PATCH v2 4/7] Documentation: kunit: Reorganize documentation related to running tests
  2021-12-07  5:40 ` [PATCH v2 4/7] Documentation: kunit: Reorganize documentation related to running tests Harinder Singh
@ 2021-12-07 17:33   ` Tim.Bird
  2021-12-10  5:31     ` Harinder Singh
  0 siblings, 1 reply; 22+ messages in thread
From: Tim.Bird @ 2021-12-07 17:33 UTC (permalink / raw)
  To: sharinder, davidgow, brendanhiggins, shuah, corbet
  Cc: linux-kselftest, kunit-dev, linux-doc, linux-kernel

> -----Original Message-----
> From: Harinder Singh <sharinder@google.com>
> 
> Consolidate documentation running tests into two pages: "run tests with
> kunit_tool" and "run tests without kunit_tool".
> 
> Signed-off-by: Harinder Singh <sharinder@google.com>
> ---
>  Documentation/dev-tools/kunit/index.rst       |   4 +
>  Documentation/dev-tools/kunit/run_manual.rst  |  57 ++++
>  Documentation/dev-tools/kunit/run_wrapper.rst | 247 ++++++++++++++++++
>  Documentation/dev-tools/kunit/start.rst       |   4 +-
>  4 files changed, 311 insertions(+), 1 deletion(-)
>  create mode 100644 Documentation/dev-tools/kunit/run_manual.rst
>  create mode 100644 Documentation/dev-tools/kunit/run_wrapper.rst
> 
> diff --git a/Documentation/dev-tools/kunit/index.rst b/Documentation/dev-tools/kunit/index.rst
> index 75e4ae85adbb..c0d1fd749cd2 100644
> --- a/Documentation/dev-tools/kunit/index.rst
> +++ b/Documentation/dev-tools/kunit/index.rst
> @@ -10,6 +10,8 @@ KUnit - Linux Kernel Unit Testing
> 
>  	start
>  	architecture
> +	run_wrapper
> +	run_manual
>  	usage
>  	kunit-tool
>  	api/index
> @@ -98,6 +100,8 @@ How do I use it?
> 
>  *   Documentation/dev-tools/kunit/start.rst - for KUnit new users.
>  *   Documentation/dev-tools/kunit/architecture.rst - KUnit architecture.
> +*   Documentation/dev-tools/kunit/run_wrapper.rst - run kunit_tool.
> +*   Documentation/dev-tools/kunit/run_manual.rst - run tests without kunit_tool.
>  *   Documentation/dev-tools/kunit/usage.rst - KUnit features.
>  *   Documentation/dev-tools/kunit/tips.rst - best practices with
>      examples.
> diff --git a/Documentation/dev-tools/kunit/run_manual.rst b/Documentation/dev-tools/kunit/run_manual.rst
> new file mode 100644
> index 000000000000..71e6d6623f88
> --- /dev/null
> +++ b/Documentation/dev-tools/kunit/run_manual.rst
> @@ -0,0 +1,57 @@
> +.. SPDX-License-Identifier: GPL-2.0
> +
> +============================
> +Run Tests without kunit_tool
> +============================
> +
> +If we do not want to use kunit_tool (For example: we want to integrate
> +with other systems, or run tests on real hardware), we can
> +include KUnit in any kernel, read out results, and parse manually.
> +
> +.. note:: KUnit is not designed for use in a production system. It is
> +          possible that tests may reduce the stability or security of
> +          the system.
> +
> +Configure the Kernel
> +====================
> +
> +KUnit tests can run without kunit_tool. This can be useful, if:
> +
> +- We have an existing kernel configuration to test.
> +- Need to run on real hardware (or using an emulator/VM kunit_tool
> +  does not support).
> +- Wish to integrate with some existing testing systems.
> +
> +KUnit is configured with the ``CONFIG_KUNIT`` option, and individual
> +tests can also be built by enabling their config options in our
> +``.config``. KUnit tests usually (but don't always) have config options
> +ending in ``_KUNIT_TEST``. Most tests can either be built as a module,
> +or be built into the kernel.
> +
> +.. note ::
> +
> +	We can enable the ``KUNIT_ALL_TESTS`` config option to
> +	automatically enable all tests with satisfied dependencies. This is
> +	a good way of quickly testing everything applicable to the current
> +	config.
> +
> +Once we have built our kernel (and/or modules), it is simple to run
> +the tests. If the tests are built-in, then will run automatically on the

then will run -> they will run
(or 'then they will run')

> +kernel boot. The results will be written to the kernel log (``dmesg``)
> +in TAP format.
> +

The rest looks OK to me.

You can add a 'Reviewed-by' for me if you want.
 -- Tim


^ permalink raw reply	[flat|nested] 22+ messages in thread
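The manual workflow in the run_manual.rst patch above ends with TAP-format results in the kernel log. One minimal way to pull pass/fail counts out of such output is a grep over the log text; the sample below uses a hard-coded snippet standing in for real ``dmesg`` output, so the test names are invented for illustration.

```shell
# Captured sample of TAP-style result lines, standing in for `dmesg` output.
log='ok 1 - example_test_foo
not ok 2 - example_test_bar
ok 3 - example_test_baz'

# Count passing and failing test lines by their TAP prefixes.
passed=$(printf '%s\n' "$log" | grep -c '^ok ')
failed=$(printf '%s\n' "$log" | grep -c '^not ok ')
echo "passed=$passed failed=$failed"
# prints: passed=2 failed=1
```

Against a live system the ``log`` variable would instead be filled from ``dmesg``; kunit_tool performs a more complete version of this parsing automatically.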

* RE: [PATCH v2 5/7] Documentation: KUnit: Rework writing page to focus on writing tests
  2021-12-07  5:40 ` [PATCH v2 5/7] Documentation: KUnit: Rework writing page to focus on writing tests Harinder Singh
@ 2021-12-07 18:28   ` Tim.Bird
  2021-12-10  5:31     ` Harinder Singh
  0 siblings, 1 reply; 22+ messages in thread
From: Tim.Bird @ 2021-12-07 18:28 UTC (permalink / raw)
  To: sharinder, davidgow, brendanhiggins, shuah, corbet
  Cc: linux-kselftest, kunit-dev, linux-doc, linux-kernel

> -----Original Message-----
> From: Harinder Singh <sharinder@google.com>
> 
> We now have dedicated pages on running tests. Therefore refocus the
> usage page on writing tests and add content from tips page and
> information on other architectures.
> 
> Signed-off-by: Harinder Singh <sharinder@google.com>
> ---
>  Documentation/dev-tools/kunit/index.rst |   2 +-
>  Documentation/dev-tools/kunit/start.rst |   2 +-
>  Documentation/dev-tools/kunit/usage.rst | 570 ++++++++++--------------
>  3 files changed, 247 insertions(+), 327 deletions(-)
> 
> diff --git a/Documentation/dev-tools/kunit/index.rst b/Documentation/dev-tools/kunit/index.rst
> index c0d1fd749cd2..76c9704d6a1a 100644
> --- a/Documentation/dev-tools/kunit/index.rst
> +++ b/Documentation/dev-tools/kunit/index.rst
> @@ -102,7 +102,7 @@ How do I use it?
>  *   Documentation/dev-tools/kunit/architecture.rst - KUnit architecture.
>  *   Documentation/dev-tools/kunit/run_wrapper.rst - run kunit_tool.
>  *   Documentation/dev-tools/kunit/run_manual.rst - run tests without kunit_tool.
> -*   Documentation/dev-tools/kunit/usage.rst - KUnit features.
> +*   Documentation/dev-tools/kunit/usage.rst - write tests.
>  *   Documentation/dev-tools/kunit/tips.rst - best practices with
>      examples.
>  *   Documentation/dev-tools/kunit/api/index.rst - KUnit APIs
> diff --git a/Documentation/dev-tools/kunit/start.rst b/Documentation/dev-tools/kunit/start.rst
> index af13f443c976..a858ab009944 100644
> --- a/Documentation/dev-tools/kunit/start.rst
> +++ b/Documentation/dev-tools/kunit/start.rst
> @@ -243,7 +243,7 @@ Next Steps
>  *   Documentation/dev-tools/kunit/architecture.rst - KUnit architecture.
>  *   Documentation/dev-tools/kunit/run_wrapper.rst - run kunit_tool.
>  *   Documentation/dev-tools/kunit/run_manual.rst - run tests without kunit_tool.
> -*   Documentation/dev-tools/kunit/usage.rst - KUnit features.
> +*   Documentation/dev-tools/kunit/usage.rst - write tests.
>  *   Documentation/dev-tools/kunit/tips.rst - best practices with
>      examples.
>  *   Documentation/dev-tools/kunit/api/index.rst - KUnit APIs
> diff --git a/Documentation/dev-tools/kunit/usage.rst b/Documentation/dev-tools/kunit/usage.rst
> index 63f1bb89ebf5..b321877797f0 100644
> --- a/Documentation/dev-tools/kunit/usage.rst
> +++ b/Documentation/dev-tools/kunit/usage.rst
> @@ -1,57 +1,13 @@
>  .. SPDX-License-Identifier: GPL-2.0
> 
> -===========
> -Using KUnit
> -===========
> -
> -The purpose of this document is to describe what KUnit is, how it works, how it
> -is intended to be used, and all the concepts and terminology that are needed to
> -understand it. This guide assumes a working knowledge of the Linux kernel and
> -some basic knowledge of testing.
> -
> -For a high level introduction to KUnit, including setting up KUnit for your
> -project, see Documentation/dev-tools/kunit/start.rst.
> -
> -Organization of this document
> -=============================
> -
> -This document is organized into two main sections: Testing and Common Patterns.
> -The first covers what unit tests are and how to use KUnit to write them. The
> -second covers common testing patterns, e.g. how to isolate code and make it
> -possible to unit test code that was otherwise un-unit-testable.
> -
> -Testing
> -=======
> -
> -What is KUnit?
> ---------------
> -
> -"K" is short for "kernel" so "KUnit" is the "(Linux) Kernel Unit Testing
> -Framework." KUnit is intended first and foremost for writing unit tests; it is
> -general enough that it can be used to write integration tests; however, this is
> -a secondary goal. KUnit has no ambition of being the only testing framework for
> -the kernel; for example, it does not intend to be an end-to-end testing
> -framework.
> -
> -What is Unit Testing?
> ----------------------
> -
> -A `unit test <https://martinfowler.com/bliki/UnitTest.html>`_ is a test that
> -tests code at the smallest possible scope, a *unit* of code. In the C
> -programming language that's a function.
> -
> -Unit tests should be written for all the publicly exposed functions in a
> -compilation unit; so that is all the functions that are exported in either a
> -*class* (defined below) or all functions which are **not** static.
> -
>  Writing Tests
> --------------
> +=============
> 
>  Test Cases
> -~~~~~~~~~~
> +----------
> 
>  The fundamental unit in KUnit is the test case. A test case is a function with
> -the signature ``void (*)(struct kunit *test)``. It calls a function to be tested
> +the signature ``void (*)(struct kunit *test)``. It calls the function under test
>  and then sets *expectations* for what should happen. For example:
> 
>  .. code-block:: c
> @@ -65,18 +21,19 @@ and then sets *expectations* for what should happen. For example:
>  		KUNIT_FAIL(test, "This test never passes.");
>  	}
> 
> -In the above example ``example_test_success`` always passes because it does
> -nothing; no expectations are set, so all expectations pass. On the other hand
> -``example_test_failure`` always fails because it calls ``KUNIT_FAIL``, which is
> -a special expectation that logs a message and causes the test case to fail.
> +In the above example, ``example_test_success`` always passes because it does
> +nothing; no expectations are set, and therefore all expectations pass. On the
> +other hand ``example_test_failure`` always fails because it calls ``KUNIT_FAIL``,
> +which is a special expectation that logs a message and causes the test case to
> +fail.
> 
>  Expectations
>  ~~~~~~~~~~~~
> -An *expectation* is a way to specify that you expect a piece of code to do
> -something in a test. An expectation is called like a function. A test is made
> -by setting expectations about the behavior of a piece of code under test; when
> -one or more of the expectations fail, the test case fails and information about
> -the failure is logged. For example:
> +An *expectation* specifies that we expect a piece of code to do something in a
> +test. An expectation is called like a function. A test is made by setting
> +expectations about the behavior of a piece of code under test. When one or more
> +expectations fail, the test case fails and information about the failure is
> +logged. For example:
> 
>  .. code-block:: c
> 
> @@ -86,29 +43,28 @@ the failure is logged. For example:
>  		KUNIT_EXPECT_EQ(test, 2, add(1, 1));
>  	}
> 
> -In the above example ``add_test_basic`` makes a number of assertions about the
> -behavior of a function called ``add``; the first parameter is always of type
> -``struct kunit *``, which contains information about the current test context;
> -the second parameter, in this case, is what the value is expected to be; the
> +In the above example, ``add_test_basic`` makes a number of assertions about the
> +behavior of a function called ``add``. The first parameter is always of type
> +``struct kunit *``, which contains information about the current test context.
> +The second parameter, in this case, is what the value is expected to be. The
>  last value is what the value actually is. If ``add`` passes all of these
>  expectations, the test case, ``add_test_basic`` will pass; if any one of these
>  expectations fails, the test case will fail.
> 
> -It is important to understand that a test case *fails* when any expectation is
> -violated; however, the test will continue running, potentially trying other
> -expectations until the test case ends or is otherwise terminated. This is as
> -opposed to *assertions* which are discussed later.
> +A test case *fails* when any expectation is violated; however, the test will
> +continue to run, and try other expectations until the test case ends or is
> +otherwise terminated. This is as opposed to *assertions* which are discussed
> +later.
> 
> -To learn about more expectations supported by KUnit, see
> -Documentation/dev-tools/kunit/api/test.rst.
> +To learn about more KUnit expectations, see Documentation/dev-tools/kunit/api/test.rst.
> 
>  .. note::
> -   A single test case should be pretty short, pretty easy to understand,
> -   focused on a single behavior.
> +   A single test case should be short, easy to understand, and focused on a
> +   single behavior.
> 
> -For example, if we wanted to properly test the add function above, we would
> -create additional tests cases which would each test a different property that an
> -add function should have like this:
> +For example, if we want to rigorously test the ``add`` function above, create
> +additional tests cases which would test each property that an ``add`` function
> +should have as shown below:
> 
>  .. code-block:: c
> 
> @@ -134,56 +90,43 @@ add function should have like this:
>  		KUNIT_EXPECT_EQ(test, INT_MIN, add(INT_MAX, 1));
>  	}
> 
> -Notice how it is immediately obvious what all the properties that we are testing
> -for are.
> -
>  Assertions
>  ~~~~~~~~~~
> 
> -KUnit also has the concept of an *assertion*. An assertion is just like an
> -expectation except the assertion immediately terminates the test case if it is
> -not satisfied.
> -
> -For example:
> +An assertion is like an expectation, except that the assertion immediately
> +terminates the test case if the condition is not satisfied. For example:
> 
>  .. code-block:: c
> 
> -	static void mock_test_do_expect_default_return(struct kunit *test)
> +	static void test_sort(struct kunit *test)
>  	{
> -		struct mock_test_context *ctx = test->priv;
> -		struct mock *mock = ctx->mock;
> -		int param0 = 5, param1 = -5;
> -		const char *two_param_types[] = {"int", "int"};
> -		const void *two_params[] = {&param0, &param1};
> -		const void *ret;
> -
> -		ret = mock->do_expect(mock,
> -				      "test_printk", test_printk,
> -				      two_param_types, two_params,
> -				      ARRAY_SIZE(two_params));
> -		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ret);
> -		KUNIT_EXPECT_EQ(test, -4, *((int *) ret));
> +		int *a, i, r = 1;
> +		a = kunit_kmalloc_array(test, TEST_LEN, sizeof(*a), GFP_KERNEL);
> +		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, a);
> +		for (i = 0; i < TEST_LEN; i++) {
> +			r = (r * 725861) % 6599;
> +			a[i] = r;
> +		}
> +		sort(a, TEST_LEN, sizeof(*a), cmpint, NULL);
> +		for (i = 0; i < TEST_LEN-1; i++)
> +			KUNIT_EXPECT_LE(test, a[i], a[i + 1]);
>  	}
> 
> -In this example, the method under test should return a pointer to a value, so
> -if the pointer returned by the method is null or an errno, we don't want to
> -bother continuing the test since the following expectation could crash the test
> -case. `ASSERT_NOT_ERR_OR_NULL(...)` allows us to bail out of the test case if
> -the appropriate conditions have not been satisfied to complete the test.
> +In this example, the method under test should return pointer to a value. If the
> +pointer returns null or an errno, we want to stop the test since the following
> +expectation could crash the test case. `ASSERT_NOT_ERR_OR_NULL(...)` allows us
> +to bail out of the test case if the appropriate conditions are not satisfied to
> +complete the test.
> 
>  Test Suites
>  ~~~~~~~~~~~
> 
> -Now obviously one unit test isn't very helpful; the power comes from having
> -many test cases covering all of a unit's behaviors. Consequently it is common
> -to have many *similar* tests; in order to reduce duplication in these closely
> -related tests most unit testing frameworks - including KUnit - provide the
> -concept of a *test suite*. A *test suite* is just a collection of test cases
> -for a unit of code with a set up function that gets invoked before every test
> -case and then a tear down function that gets invoked after every test case
> -completes.
> -
> -Example:
> +We need many test cases covering all the unit's behaviors. It is common to have
> +many similar tests. In order to reduce duplication in these closely related
> +tests, most unit testing frameworks (including KUnit) provide the concept of a
> +*test suite*. A test suite is a collection of test cases for a unit of code
> +with a setup function that gets invoked before every test case and then a tear
> +down function that gets invoked after every test case completes. For example:
> 
>  .. code-block:: c
> 
> @@ -202,23 +145,48 @@ Example:
>  	};
>  	kunit_test_suite(example_test_suite);
> 
> -In the above example the test suite, ``example_test_suite``, would run the test
> -cases ``example_test_foo``, ``example_test_bar``, and ``example_test_baz``;
> -each would have ``example_test_init`` called immediately before it and would
> -have ``example_test_exit`` called immediately after it.
> +In the above example, the test suite ``example_test_suite`` would run the test
> +cases ``example_test_foo``, ``example_test_bar``, and ``example_test_baz``. Each
> +would have ``example_test_init`` called immediately before it and
> +``example_test_exit`` called immediately after it.
>  ``kunit_test_suite(example_test_suite)`` registers the test suite with the
>  KUnit test framework.
> 
>  .. note::
> -   A test case will only be run if it is associated with a test suite.
> +   A test case will only run if it is associated with a test suite.
> 
> -``kunit_test_suite(...)`` is a macro which tells the linker to put the specified
> -test suite in a special linker section so that it can be run by KUnit either
> -after late_init, or when the test module is loaded (depending on whether the
> -test was built in or not).
> +``kunit_test_suite(...)`` is a macro which tells the linker to put the
> +specified test suite in a special linker section so that it can be run by KUnit
> +either after ``late_init``, or when the test module is loaded (if the test was
> +built as a module).
> 
> -For more information on these types of things see the
> -Documentation/dev-tools/kunit/api/test.rst.
> +For more information, see Documentation/dev-tools/kunit/api/test.rst.
> +
> +Writing Tests For Other Architectures
> +-------------------------------------
> +
> +Always prefer tests that run on UML to tests that only run under a particular
Always prefer tests -> It is better to write tests

> +architecture. In addition, prefer tests that run under QEMU or another easy
prefer tests -> it is better to write tests

> +(and monetarily free) to obtain software environment to a specific piece of
easy (and monetarily free) to obtain software environment ->
  easy to obtain (and monetarily free) software environment

(i.e. you shouldn't split up 'easy to obtain')

environment to a specific -> rather than tests that require a specific

> +hardware.
> +
> +Nevertheless, there are still valid reasons to write an architecture or

an architecture or hardware specific test ->
  a test that is architecture or hardware specific

> +hardware specific test. For example, we might want to test code that really
> +belongs in ``arch/some-arch/*``. Even so, try to write the test so that it does
> +not depend on physical hardware. Some of our test cases may not need hardware,
> +only few tests actually require the hardware to test it. When hardware is not
> +available, instead of disabling tests, we can skip them.
> +
> +Now that we have narrowed down exactly what bits are hardware specific, the
> +actual procedure for writing and running the tests is same as writing normal
> +KUnit tests.
> +
> +.. important::
> +   We may have to reset hardware state. If this is not possible, we may only
> +   be able to run one test case per invocation.
> +
> +.. TODO(brendanhiggins@google.com): Add an actual example of an architecture-
> +   dependent KUnit test.
> 
>  Common Patterns
>  ===============
> @@ -226,43 +194,39 @@ Common Patterns
>  Isolating Behavior
>  ------------------
> 
> -The most important aspect of unit testing that other forms of testing do not
> -provide is the ability to limit the amount of code under test to a single unit.
> -In practice, this is only possible by being able to control what code gets run
> -when the unit under test calls a function and this is usually accomplished
> -through some sort of indirection where a function is exposed as part of an API
> -such that the definition of that function can be changed without affecting the
> -rest of the code base. In the kernel this primarily comes from two constructs,
> -classes, structs that contain function pointers that are provided by the
> -implementer, and architecture-specific functions which have definitions selected
> -at compile time.
> +Unit testing limits the amount of code under test to a single unit. It controls
> +what code gets run when the unit under test calls a function. Where a function
> +is exposed as part of an API such that the definition of that function can be
> +changed without affecting the rest of the code base. In the kernel, this comes
> +from two constructs: classes, structs. that contain function pointers provided

??? I couldn't parse this.

classes, structs. that contain ->
   classes, which are structs that contain

> +by the implementer and architecture specific functions which have definitions

by the implementer and architecture specific functions which have ->
  by the implementer, and architecture-specific functions, which have

> +selected at compile time.

I'm not sure if the second comma is needed.  It depends on whether the clause
'which have definitions selected at compile time' is meant to describe the
architecture-specific functions (non-restrictive), or to constrain them (restrictive).
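For reference, my understanding of the construct being described -- a "class"
here is a struct whose members are function pointers, filled in by the
implementer.  A tiny userspace sketch (names are mine, not from the patch):

```c
#include <assert.h>

/* A "class" in the kernel sense: a struct of function pointers
 * supplied by the implementer. */
struct shape_ops {
	int (*area)(int w, int h);
};

/* An implementer provides the functions... */
static int rect_area(int w, int h)
{
	return w * h;
}

static const struct shape_ops rect_ops = { .area = rect_area };

/* ...and users call through the struct, never the function directly. */
static int compute_area(const struct shape_ops *ops, int w, int h)
{
	return ops->area(w, h);
}
```

If the rewritten paragraph said something this concrete, I think it would
parse much more easily.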

> 
>  Classes
>  ~~~~~~~
> 
>  Classes are not a construct that is built into the C programming language;
> -however, it is an easily derived concept. Accordingly, pretty much every project
> -that does not use a standardized object oriented library (like GNOME's GObject)
> -has their own slightly different way of doing object oriented programming; the
> -Linux kernel is no exception.
> +however, it is an easily derived concept. Accordingly, in most cases, every
> +project that does not use a standardized object oriented library (like GNOME's
> +GObject) has their own slightly different way of doing object oriented
> +programming; the Linux kernel is no exception.
> 
>  The central concept in kernel object oriented programming is the class. In the
>  kernel, a *class* is a struct that contains function pointers. This creates a
>  contract between *implementers* and *users* since it forces them to use the
> -same function signature without having to call the function directly. In order
> -for it to truly be a class, the function pointers must specify that a pointer
> -to the class, known as a *class handle*, be one of the parameters; this makes
> -it possible for the member functions (also known as *methods*) to have access
> -to member variables (more commonly known as *fields*) allowing the same
> -implementation to have multiple *instances*.
> -
> -Typically a class can be *overridden* by *child classes* by embedding the
> -*parent class* in the child class. Then when a method provided by the child
> -class is called, the child implementation knows that the pointer passed to it is
> -of a parent contained within the child; because of this, the child can compute
> -the pointer to itself because the pointer to the parent is always a fixed offset
> -from the pointer to the child; this offset is the offset of the parent contained
> -in the child struct. For example:
> +same function signature without having to call the function directly. To be a
> +class, the function pointers must specify that a pointer to the class, known as
> +a *class handle*, be one of the parameters. Thus the member functions (also
> +known as *methods*) have access to member variables (also known as *fields*)
> +allowing the same implementation to have multiple *instances*.
> +
> +A class can be *overridden* by *child classes* by embedding the *parent class*
> +in the child class. Then when the child class *method* is called, the child
> +implementation knows that the pointer passed to it is of a parent contained
> +within the child. Thus, the child can compute the pointer to itself because the
> +pointer to the parent is always a fixed offset from the pointer to the child.
> +This offset is the offset of the parent contained in the child struct. For
> +example:
> 
>  .. code-block:: c
> 
> @@ -290,8 +254,8 @@ in the child struct. For example:
>  		self->width = width;
>  	}
> 
> -In this example (as in most kernel code) the operation of computing the pointer
> -to the child from the pointer to the parent is done by ``container_of``.
> +In this example, computing the pointer to the child from the pointer to the
> +parent is done by ``container_of``.
> 
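The fixed-offset computation described above is exactly what ``container_of``
does; a userspace sketch with a hand-rolled ``container_of``, since the kernel
header isn't available here (struct names are mine):

```c
#include <assert.h>
#include <stddef.h>

/* Hand-rolled equivalent of the kernel's container_of() macro. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct shape {			/* "parent" class */
	int sides;
};

struct rectangle {		/* "child" embeds the parent */
	struct shape parent;
	int width;
	int height;
};

/* Given only a pointer to the embedded parent, recover the child. */
static struct rectangle *to_rectangle(struct shape *s)
{
	return container_of(s, struct rectangle, parent);
}
```

The offset subtracted is the offset of the ``parent`` member within
``struct rectangle``, which is what the prose is trying to say.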
>  Faking Classes
>  ~~~~~~~~~~~~~~
> @@ -300,14 +264,11 @@ In order to unit test a piece of code that calls a method in a class, the
>  behavior of the method must be controllable, otherwise the test ceases to be a
>  unit test and becomes an integration test.
> 
> -A fake just provides an implementation of a piece of code that is different than
> -what runs in a production instance, but behaves identically from the standpoint
> -of the callers; this is usually done to replace a dependency that is hard to
> -deal with, or is slow.
> -
> -A good example for this might be implementing a fake EEPROM that just stores the
> -"contents" in an internal buffer. For example, let's assume we have a class that
> -represents an EEPROM:
> +A fake class implements a piece of code that is different than what runs in a
> +production instance, but behaves identical from the standpoint of the callers.
> +This is done to replace a dependency that is hard to deal with, or is slow. For
> +example, implementing a fake EEPROM that stores the "contents" in an
> +internal buffer. Assume we have a class that represents an EEPROM:
> 
>  .. code-block:: c
> 
> @@ -316,7 +277,7 @@ represents an EEPROM:
>  		ssize_t (*write)(struct eeprom *this, size_t offset, const char *buffer, size_t count);
>  	};
> 
> -And we want to test some code that buffers writes to the EEPROM:
> +We want to test code that buffers writes to the EEPROM:

We -> And we

(Please leave the 'and')

> 
>  .. code-block:: c
> 
> @@ -329,7 +290,7 @@ And we want to test some code that buffers writes to the EEPROM:
>  	struct eeprom_buffer *new_eeprom_buffer(struct eeprom *eeprom);
>  	void destroy_eeprom_buffer(struct eeprom *eeprom);
> 
> -We can easily test this code by *faking out* the underlying EEPROM:
> +We can test this code by *faking out* the underlying EEPROM:
> 
>  .. code-block:: c
> 
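The fake itself is elided by the diff context here; for anyone reading the
review, it presumably looks something like the following (my own userspace
sketch against the ``struct eeprom`` quoted above, not code from the patch):

```c
#include <assert.h>
#include <string.h>
#include <sys/types.h>

#define FAKE_EEPROM_SIZE 64

struct eeprom {
	ssize_t (*read)(struct eeprom *this, size_t offset, char *buffer, size_t count);
	ssize_t (*write)(struct eeprom *this, size_t offset, const char *buffer, size_t count);
};

/* The fake embeds the class and keeps its "contents" in a plain buffer. */
struct fake_eeprom {
	struct eeprom parent;
	char contents[FAKE_EEPROM_SIZE];
};

static ssize_t fake_eeprom_read(struct eeprom *this, size_t offset, char *buffer, size_t count)
{
	struct fake_eeprom *fake = (struct fake_eeprom *)this;

	if (offset + count > FAKE_EEPROM_SIZE)
		return -1;
	memcpy(buffer, fake->contents + offset, count);
	return (ssize_t)count;
}

static ssize_t fake_eeprom_write(struct eeprom *this, size_t offset, const char *buffer, size_t count)
{
	struct fake_eeprom *fake = (struct fake_eeprom *)this;

	if (offset + count > FAKE_EEPROM_SIZE)
		return -1;
	memcpy(fake->contents + offset, buffer, count);
	return (ssize_t)count;
}

static void fake_eeprom_init(struct fake_eeprom *fake)
{
	fake->parent.read = fake_eeprom_read;
	fake->parent.write = fake_eeprom_write;
	memset(fake->contents, 0, FAKE_EEPROM_SIZE);
}
```

(The cast works because ``parent`` is the first member; the real code would
use ``container_of``.)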
> @@ -456,14 +417,14 @@ We can now use it to test ``struct eeprom_buffer``:
>  		destroy_eeprom_buffer(ctx->eeprom_buffer);
>  	}
> 
> -Testing against multiple inputs
> +Testing Against Multiple Inputs
>  -------------------------------
> 
> -Testing just a few inputs might not be enough to have confidence that the code
> -works correctly, e.g. for a hash function.
> +Testing just a few inputs is not enough to ensure that the code works correctly,
> +for example: testing a hash function.
> 
> -In such cases, it can be helpful to have a helper macro or function, e.g. this
> -fictitious example for ``sha1sum(1)``
> +We can write a helper macro or function. The function is called for each input.
> +For example, to test ``sha1sum(1)``, we can write:
> 
>  .. code-block:: c
> 
> @@ -475,16 +436,15 @@ fictitious example for ``sha1sum(1)``
>  	TEST_SHA1("hello world",  "2aae6c35c94fcfb415dbe95f408b9ce91ee846ed");
>  	TEST_SHA1("hello world!", "430ce34d020724ed75a196dfc2ad67c77772d169");
> 
> +Note the use of the ``_MSG`` version of ``KUNIT_EXPECT_STREQ`` to print a more
> +detailed error and make the assertions clearer within the helper macros.
> 
> -Note the use of ``KUNIT_EXPECT_STREQ_MSG`` to give more context when it fails
> -and make it easier to track down. (Yes, in this example, ``want`` is likely
> -going to be unique enough on its own).
> +The ``_MSG`` variants are useful when the same expectation is called multiple
> +times (in a loop or helper function) and thus the line number is not enough to
> +identify what failed, as shown below.
> 
> -The ``_MSG`` variants are even more useful when the same expectation is called
> -multiple times (in a loop or helper function) and thus the line number isn't
> -enough to identify what failed, like below.
> -
> -In some cases, it can be helpful to write a *table-driven test* instead, e.g.
> +In complicated cases, we recommend using a *table-driven test* compared to the
> +helper macro variation, for example:
> 
>  .. code-block:: c
> 
> @@ -513,17 +473,18 @@ In some cases, it can be helpful to write a *table-driven test* instead, e.g.
>  	}
> 
> 
> -There's more boilerplate involved, but it can:
> +There is more boilerplate code involved, but it can:
> +
> +* be more readable when there are multiple inputs/outputs (due to field names).
> 
> -* be more readable when there are multiple inputs/outputs thanks to field names,
> +  * For example, see ``fs/ext4/inode-test.c``.
> 
> -  * E.g. see ``fs/ext4/inode-test.c`` for an example of both.
> -* reduce duplication if test cases can be shared across multiple tests.
> +* reduce duplication if test cases are shared across multiple tests.
> 
> -  * E.g. if we wanted to also test ``sha256sum``, we could add a ``sha256``
> +  * For example: if we want to test ``sha256sum``, we could add a ``sha256``
>      field and reuse ``cases``.
> 
> -* be converted to a "parameterized test", see below.
> +* be converted to a "parameterized test".
> 
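As a concrete illustration of the table-driven pattern being described -- a
plain userspace sketch, not actual KUnit code (the ``add`` function and case
names are mine):

```c
#include <assert.h>
#include <stddef.h>

static int add(int a, int b)
{
	return a + b;
}

/* Each row names one case; field names keep inputs/outputs readable. */
struct add_case {
	const char *name;
	int a, b, want;
};

static const struct add_case cases[] = {
	{ "basic",     1,  1,  2 },
	{ "negative", -1, -1, -2 },
	{ "identity",  7,  0,  7 },
};

/* Walk the table; return the number of cases that failed. */
static int run_add_cases(void)
{
	int failed = 0;
	size_t i;

	for (i = 0; i < sizeof(cases) / sizeof(cases[0]); i++) {
		if (add(cases[i].a, cases[i].b) != cases[i].want)
			failed++;
	}
	return failed;
}
```

The same ``cases`` array is what the parameterized-test support below would
iterate over, with KUnit doing the looping for you.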
>  Parameterized Testing
>  ~~~~~~~~~~~~~~~~~~~~~
> @@ -531,7 +492,7 @@ Parameterized Testing
>  The table-driven testing pattern is common enough that KUnit has special
>  support for it.
> 
> -Reusing the same ``cases`` array from above, we can write the test as a
> +By reusing the same ``cases`` array from above, we can write the test as a
>  "parameterized test" with the following.
> 
>  .. code-block:: c
> @@ -582,193 +543,152 @@ Reusing the same ``cases`` array from above, we can write the test as a
> 
>  .. _kunit-on-non-uml:
> 
> -KUnit on non-UML architectures
> -==============================
> -
> -By default KUnit uses UML as a way to provide dependencies for code under test.
> -Under most circumstances KUnit's usage of UML should be treated as an
> -implementation detail of how KUnit works under the hood. Nevertheless, there
> -are instances where being able to run architecture-specific code or test
> -against real hardware is desirable. For these reasons KUnit supports running on
> -other architectures.
> -
> -Running existing KUnit tests on non-UML architectures
> ------------------------------------------------------
> +Exiting Early on Failed Expectations
> +------------------------------------
> 
> -There are some special considerations when running existing KUnit tests on
> -non-UML architectures:
> +We can use ``KUNIT_EXPECT_EQ`` to mark the test as failed and continue
> +execution.  In some cases, it is unsafe to continue. We can use the
> +``KUNIT_ASSERT`` variant to exit on failure.
> 
> -*   Hardware may not be deterministic, so a test that always passes or fails
> -    when run under UML may not always do so on real hardware.
> -*   Hardware and VM environments may not be hermetic. KUnit tries its best to
> -    provide a hermetic environment to run tests; however, it cannot manage state
> -    that it doesn't know about outside of the kernel. Consequently, tests that
> -    may be hermetic on UML may not be hermetic on other architectures.
> -*   Some features and tooling may not be supported outside of UML.
> -*   Hardware and VMs are slower than UML.
> +.. code-block:: c
> 
> -None of these are reasons not to run your KUnit tests on real hardware; they are
> -only things to be aware of when doing so.
> +	void example_test_user_alloc_function(struct kunit *test)
> +	{
> +		void *object = alloc_some_object_for_me();
> 
> -Currently, the KUnit Wrapper (``tools/testing/kunit/kunit.py``) (aka
> -kunit_tool) only fully supports running tests inside of UML and QEMU; however,
> -this is only due to our own time limitations as humans working on KUnit. It is
> -entirely possible to support other emulators and even actual hardware, but for
> -now QEMU and UML is what is fully supported within the KUnit Wrapper. Again, to
> -be clear, this is just the Wrapper. The actualy KUnit tests and the KUnit
> -library they are written in is fully architecture agnostic and can be used in
> -virtually any setup, you just won't have the benefit of typing a single command
> -out of the box and having everything magically work perfectly.
> +		/* Make sure we got a valid pointer back. */
> +		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, object);
> +		do_something_with_object(object);
> +	}
> 
> -Again, all core KUnit framework features are fully supported on all
> -architectures, and using them is straightforward: Most popular architectures
> -are supported directly in the KUnit Wrapper via QEMU. Currently, supported
> -architectures on QEMU include:
> +Allocating Memory
> +-----------------
> 
> -*   i386
> -*   x86_64
> -*   arm
> -*   arm64
> -*   alpha
> -*   powerpc
> -*   riscv
> -*   s390
> -*   sparc
> +We can use ``kzalloc``, you should prefer ``kunit_kzalloc`` and KUnit will

???

We can use ``kzalloc``, you should prefer ``kunit_kzalloc`` and KUnit will ->
  Where you might use ``kzalloc``, you can instead use ``kunit_kzalloc`` and KUnit will

> +ensure that the memory is freed once the test completes.
> 
> -In order to run KUnit tests on one of these architectures via QEMU with the
> -KUnit wrapper, all you need to do is specify the flags ``--arch`` and
> -``--cross_compile`` when invoking the KUnit Wrapper. For example, we could run
> -the default KUnit tests on ARM in the following manner (assuming we have an ARM
> -toolchain installed):
> +This is useful because it lets us use the ``KUNIT_ASSERT_EQ`` macros to exit
> +early from a test without having to worry about remembering to call ``kfree``.
> +For example:
> 
> -.. code-block:: bash
> +.. code-block:: c
> 
> -	tools/testing/kunit/kunit.py run --timeout=60 --jobs=12 --arch=arm --cross_compile=arm-linux-gnueabihf-
> +	void example_test_allocation(struct kunit *test)
> +	{
> +		char *buffer = kunit_kzalloc(test, 16, GFP_KERNEL);
> +		/* Ensure allocation succeeded. */
> +		KUNIT_ASSERT_NOT_ERR_OR_NULL(test, buffer);
> 
> -Alternatively, if you want to run your tests on real hardware or in some other
> -emulation environment, all you need to do is to take your kunitconfig, your
> -Kconfig options for the tests you would like to run, and merge them into
> -whatever config your are using for your platform. That's it!
> +		KUNIT_ASSERT_STREQ(test, buffer, "");
> +	}
> 
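The idea of test-managed allocation might be worth a sentence more of
explanation.  A toy userspace model of what (I believe) is meant -- the test
context records every allocation so cleanup can free them all when the test
ends; this is only an illustration of the concept, not KUnit's implementation:

```c
#include <assert.h>
#include <stdlib.h>

#define MAX_ALLOCS 16

/* Stand-in for the test context that owns the allocations. */
struct test_ctx {
	void *allocs[MAX_ALLOCS];
	int n;
};

/* Zeroed allocation tracked by the context (cf. kunit_kzalloc). */
static void *ctx_zalloc(struct test_ctx *t, size_t size)
{
	void *p;

	if (t->n >= MAX_ALLOCS)
		return NULL;
	p = calloc(1, size);
	if (p)
		t->allocs[t->n++] = p;
	return p;
}

/* Runs when the test ends; frees everything, so an early exit
 * from the test cannot leak memory. */
static void ctx_cleanup(struct test_ctx *t)
{
	while (t->n > 0)
		free(t->allocs[--t->n]);
}
```

That is why it is safe to ``KUNIT_ASSERT`` (and bail out) between the
allocation and any explicit free.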
> -For example, let's say you have the following kunitconfig:
> 
> -.. code-block:: none
> +Testing Static Functions
> +------------------------
> 
> -	CONFIG_KUNIT=y
> -	CONFIG_KUNIT_EXAMPLE_TEST=y
> +If we do not want to expose functions or variables for testing, one option is to
> +conditionally ``#include`` the test file at the end of your .c file. For
> +example:
> 
> -If you wanted to run this test on an x86 VM, you might add the following config
> -options to your ``.config``:
> +.. code-block:: c
> 
> -.. code-block:: none
> +	/* In my_file.c */
> 
> -	CONFIG_KUNIT=y
> -	CONFIG_KUNIT_EXAMPLE_TEST=y
> -	CONFIG_SERIAL_8250=y
> -	CONFIG_SERIAL_8250_CONSOLE=y
> +	static int do_interesting_thing();
> 
> -All these new options do is enable support for a common serial console needed
> -for logging.
> +	#ifdef CONFIG_MY_KUNIT_TEST
> +	#include "my_kunit_test.c"
> +	#endif
> 
> -Next, you could build a kernel with these tests as follows:
> +Injecting Test-Only Code
> +------------------------
> 
> +Similar to as shown above, we can add test-specific logic. For example:
> 
> -.. code-block:: bash
> +.. code-block:: c
> 
> -	make ARCH=x86 olddefconfig
> -	make ARCH=x86
> +	/* In my_file.h */
> 
> -Once you have built a kernel, you could run it on QEMU as follows:
> +	#ifdef CONFIG_MY_KUNIT_TEST
> +	/* Defined in my_kunit_test.c */
> +	void test_only_hook(void);
> +	#else
> +	void test_only_hook(void) { }
> +	#endif
> 
> -.. code-block:: bash
> +This test-only code can be made more useful by accessing the current ``kunit_test``
> +as shown in next section: *Accessing The Current Test*.
> 
> -	qemu-system-x86_64 -enable-kvm \
> -			   -m 1024 \
> -			   -kernel arch/x86_64/boot/bzImage \
> -			   -append 'console=ttyS0' \
> -			   --nographic
> +Accessing The Current Test
> +--------------------------
> 
> -Interspersed in the kernel logs you might see the following:
> +In some cases, we need to call test-only code from outside the test file.
> +For example, see example in section *Injecting Test-Only Code* or if
> +we are providing a fake implementation of an ops struct. Using
> +``kunit_test`` field in ``task_struct``, we can access it via
> +``current->kunit_test``.
> 
> -.. code-block:: none
> +Below example includes how to implement "mocking":

Below example -> The example below

> 
> -	TAP version 14
> -		# Subtest: example
> -		1..1
> -		# example_simple_test: initializing
> -		ok 1 - example_simple_test
> -	ok 1 - example
> +.. code-block:: c
> 
> -Congratulations, you just ran a KUnit test on the x86 architecture!
> +	#include <linux/sched.h> /* for current */
> 
> -In a similar manner, kunit and kunit tests can also be built as modules,
> -so if you wanted to run tests in this way you might add the following config
> -options to your ``.config``:
> +	struct test_data {
> +		int foo_result;
> +		int want_foo_called_with;
> +	};
> 
> -.. code-block:: none
> +	static int fake_foo(int arg)
> +	{
> +		struct kunit *test = current->kunit_test;
> +		struct test_data *test_data = test->priv;
> 
> -	CONFIG_KUNIT=m
> -	CONFIG_KUNIT_EXAMPLE_TEST=m
> +		KUNIT_EXPECT_EQ(test, test_data->want_foo_called_with, arg);
> +		return test_data->foo_result;
> +	}
> 
> -Once the kernel is built and installed, a simple
> +	static void example_simple_test(struct kunit *test)
> +	{
> +		/* Assume priv is allocated in the suite's .init */
> +		struct test_data *test_data = test->priv;

I found this description and example hard to follow.  This is possibly due
to the patch being intermingled with the deletion of completely unrelated
lines.

Does 'priv' stand for privilege, or private?  I assume the latter, but it would be
worth spelling this out.  Is 'priv' a field reserved in the kunit_test structure for
passing arbitrary data to the test function?

The lifecycle of the data in test->priv is unclear to me.  Here, the data appears
to be static, but it's unclear why you would need to pass a structure containing static data
to the test function.  Would the data for these fields (want_foo_called_with and foo_result)
be filled in at test invocation time from a list (like from parameterized tests)?

> 
> -.. code-block:: bash
> +		test_data->foo_result = 42;
> +		test_data->want_foo_called_with = 1;
> 
> -	modprobe example-test
> +		/* In a real test, we'd probably pass a pointer to fake_foo somewhere
> +		 * like an ops struct, etc. instead of calling it directly. */
> +		KUNIT_EXPECT_EQ(test, fake_foo(1), 42);
> +	}

OK - I'm totally lost at this point.

> 
> -...will run the tests.
> 
> -.. note::
> -   Note that you should make sure your test depends on ``KUNIT=y`` in Kconfig
> -   if the test does not support module build.  Otherwise, it will trigger
> -   compile errors if ``CONFIG_KUNIT`` is ``m``.
> +Note: here we are able to get away with using ``test->priv``, but if we want
> +something more flexible we could use a named ``kunit_resource``, see
> +Documentation/dev-tools/kunit/api/test.rst.
> 
> -Writing new tests for other architectures
> ------------------------------------------
> +Failing The Current Test
> +------------------------
> 
> -The first thing you must do is ask yourself whether it is necessary to write a
> -KUnit test for a specific architecture, and then whether it is necessary to
> -write that test for a particular piece of hardware. In general, writing a test
> -that depends on having access to a particular piece of hardware or software (not
> -included in the Linux source repo) should be avoided at all costs.
> +If we want to fail the current test, we can use ``kunit_fail_current_test(fmt, args...)``
> +which is defined in ``<kunit/test-bug.h>`` and does not require pulling in ``<kunit/test.h>``.
> +For example, we have an option to enable some extra debug checks on some data
> +structures as shown below:
> 
> -Even if you only ever plan on running your KUnit test on your hardware
> -configuration, other people may want to run your tests and may not have access
> -to your hardware. If you write your test to run on UML, then anyone can run your
> -tests without knowing anything about your particular setup, and you can still
> -run your tests on your hardware setup just by compiling for your architecture.
> +.. code-block:: c
> 
> -.. important::
> -   Always prefer tests that run on UML to tests that only run under a particular
> -   architecture, and always prefer tests that run under QEMU or another easy
> -   (and monetarily free) to obtain software environment to a specific piece of
> -   hardware.
> -
> -Nevertheless, there are still valid reasons to write an architecture or hardware
> -specific test: for example, you might want to test some code that really belongs
> -in ``arch/some-arch/*``. Even so, try your best to write the test so that it
> -does not depend on physical hardware: if some of your test cases don't need the
> -hardware, only require the hardware for tests that actually need it.
> -
> -Now that you have narrowed down exactly what bits are hardware specific, the
> -actual procedure for writing and running the tests is pretty much the same as
> -writing normal KUnit tests. One special caveat is that you have to reset
> -hardware state in between test cases; if this is not possible, you may only be
> -able to run one test case per invocation.
> +	#include <kunit/test-bug.h>
> 
> -.. TODO(brendanhiggins@google.com): Add an actual example of an architecture-
> -   dependent KUnit test.
> +	#ifdef CONFIG_EXTRA_DEBUG_CHECKS
> +	static void validate_my_data(struct data *data)
> +	{
> +		if (is_valid(data))
> +			return;
> 
> -KUnit debugfs representation
> -============================
> -When kunit test suites are initialized, they create an associated directory
> -in ``/sys/kernel/debug/kunit/<test-suite>``.  The directory contains one file
> +		kunit_fail_current_test("data %p is invalid", data);
> 
> -- results: "cat results" displays results of each test case and the results
> -  of the entire suite for the last test run.
> +		/* Normal, non-KUnit, error reporting code here. */
> +	}
> +	#else
> +	static void validate_my_data(struct data *data) { }
> +	#endif
> 
> -The debugfs representation is primarily of use when kunit test suites are
> -run in a native environment, either as modules or builtin.  Having a way
> -to display results like this is valuable as otherwise results can be
> -intermixed with other events in dmesg output.  The maximum size of each
> -results file is KUNIT_LOG_SIZE bytes (defined in ``include/kunit/test.h``).
> --
> 2.34.1.400.ga245620fadb-goog


Please provide some more explanation about 
accessing the KUnit test at runtime.  I couldn't
follow what was going on in that section.
 -- Tim


^ permalink raw reply	[flat|nested] 22+ messages in thread

* RE: [PATCH v2 6/7] Documentation: KUnit: Restyle Test Style and Nomenclature page
  2021-12-07  5:40 ` [PATCH v2 6/7] Documentation: KUnit: Restyle Test Style and Nomenclature page Harinder Singh
@ 2021-12-07 18:46   ` Tim.Bird
  2021-12-10  5:30     ` Harinder Singh
  0 siblings, 1 reply; 22+ messages in thread
From: Tim.Bird @ 2021-12-07 18:46 UTC (permalink / raw)
  To: sharinder, davidgow, brendanhiggins, shuah, corbet
  Cc: linux-kselftest, kunit-dev, linux-doc, linux-kernel



> -----Original Message-----
> From: Harinder Singh <sharinder@google.com>
> 
> Rewrite page to enhance content consistency.
> 
> Signed-off-by: Harinder Singh <sharinder@google.com>
> ---
>  Documentation/dev-tools/kunit/style.rst | 101 ++++++++++++------------
>  1 file changed, 49 insertions(+), 52 deletions(-)
> 
> diff --git a/Documentation/dev-tools/kunit/style.rst b/Documentation/dev-tools/kunit/style.rst
> index 8dbcdc552606..8fae192cae28 100644
> --- a/Documentation/dev-tools/kunit/style.rst
> +++ b/Documentation/dev-tools/kunit/style.rst
> @@ -4,37 +4,36 @@
>  Test Style and Nomenclature
>  ===========================
> 
> -To make finding, writing, and using KUnit tests as simple as possible, it's
> +To make finding, writing, and using KUnit tests as simple as possible, it is
>  strongly encouraged that they are named and written according to the guidelines
> -below. While it's possible to write KUnit tests which do not follow these rules,
> +below. While it is possible to write KUnit tests which do not follow these rules,
>  they may break some tooling, may conflict with other tests, and may not be run
>  automatically by testing systems.
> 
> -It's recommended that you only deviate from these guidelines when:
> +It is recommended that you only deviate from these guidelines when:
> 
> -1. Porting tests to KUnit which are already known with an existing name, or
> -2. Writing tests which would cause serious problems if automatically run (e.g.,
> -   non-deterministically producing false positives or negatives, or taking an
> -   extremely long time to run).
> +1. Porting tests to KUnit which are already known with an existing name.
> +2. Writing tests which would cause serious problems if automatically run. For
> +   example, non-deterministically producing false positives or negatives, or
> +   taking a long time to run.
> 
>  Subsystems, Suites, and Tests
>  =============================
> 
> -In order to make tests as easy to find as possible, they're grouped into suites
> -and subsystems. A test suite is a group of tests which test a related area of
> -the kernel, and a subsystem is a set of test suites which test different parts
> -of the same kernel subsystem or driver.
> +To make tests easy to find, they are grouped into suites and subsystems. A test
> +suite is a group of tests which test a related area of the kernel. A subsystem
> +is a set of test suites which test different parts of a kernel subsystem
> +or a driver.
> 
>  Subsystems
>  ----------
> 
>  Every test suite must belong to a subsystem. A subsystem is a collection of one
>  or more KUnit test suites which test the same driver or part of the kernel. A
> -rule of thumb is that a test subsystem should match a single kernel module. If
> -the code being tested can't be compiled as a module, in many cases the subsystem
> -should correspond to a directory in the source tree or an entry in the
> -MAINTAINERS file. If unsure, follow the conventions set by tests in similar
> -areas.
> +test subsystem should match a single kernel module. If the code being tested
> +cannot be compiled as a module, in many cases the subsystem should correspond to
> +a directory in the source tree or an entry in the ``MAINTAINERS`` file. If
> +unsure, follow the conventions set by tests in similar areas.
> 
>  Test subsystems should be named after the code being tested, either after the
>  module (wherever possible), or after the directory or files being tested. Test
> @@ -42,9 +41,8 @@ subsystems should be named to avoid ambiguity where necessary.
> 
>  If a test subsystem name has multiple components, they should be separated by
>  underscores. *Do not* include "test" or "kunit" directly in the subsystem name
> -unless you are actually testing other tests or the kunit framework itself.
> -
> -Example subsystems could be:
> +unless we are actually testing other tests or the kunit framework itself. For
> +example, subsystems could be called:
> 
>  ``ext4``
>    Matches the module and filesystem name.
> @@ -56,13 +54,13 @@ Example subsystems could be:
>    Has several components (``snd``, ``hda``, ``codec``, ``hdmi``) separated by
>    underscores. Matches the module name.
> 
> -Avoid names like these:
> +Avoid names as shown in examples below:
> 
>  ``linear-ranges``
>    Names should use underscores, not dashes, to separate words. Prefer
>    ``linear_ranges``.
>  ``qos-kunit-test``
> -  As well as using underscores, this name should not have "kunit-test" as a
> +  This name should not use underscores, not have "kunit-test" as a

This contradicts the preceding sentence.  I believe you have changed the sense
of the recommendation.

This name should not use underscores, not have ->
   This name should use underscores, and not have

>    suffix, and ``qos`` is ambiguous as a subsystem name. ``power_qos`` would be a

suffix, and ``qos`` -> suffix.  Also ``qos``

(The way this sentence was originally structured was quite awkward.  I think it's
better to split into two sentences)

>    better name.
>  ``pc_parallel_port``
> @@ -70,34 +68,32 @@ Avoid names like these:
>    be named ``parport_pc``.
> 
>  .. note::
> -        The KUnit API and tools do not explicitly know about subsystems. They're
> -        simply a way of categorising test suites and naming modules which
> -        provides a simple, consistent way for humans to find and run tests. This
> -        may change in the future, though.
> +        The KUnit API and tools do not explicitly know about subsystems. They are
> +        a way of categorising test suites and naming modules which provides a
> +        simple, consistent way for humans to find and run tests. This may change
> +        in the future.
> 
>  Suites
>  ------
> 
>  KUnit tests are grouped into test suites, which cover a specific area of
>  functionality being tested. Test suites can have shared initialisation and

'initialization' seems to be preferred to 'initialisation' in most other
kernel documentation.  (557 instances of 'initialization' to 58 of 'initialisation')

(I know this isn't part of your patch, but since this is a cleanup and consistency
patch, maybe change this as well?)

> -shutdown code which is run for all tests in the suite.
> -Not all subsystems will need to be split into multiple test suites (e.g. simple drivers).
> +shutdown code which is run for all tests in the suite. Not all subsystems need
> +to be split into multiple test suites (for example, simple drivers).
> 
>  Test suites are named after the subsystem they are part of. If a subsystem
>  contains several suites, the specific area under test should be appended to the
>  subsystem name, separated by an underscore.
> 
>  In the event that there are multiple types of test using KUnit within a
> -subsystem (e.g., both unit tests and integration tests), they should be put into
> -separate suites, with the type of test as the last element in the suite name.
> -Unless these tests are actually present, avoid using ``_test``, ``_unittest`` or
> -similar in the suite name.
> +subsystem (for example, both unit tests and integration tests), they should be
> +put into separate suites, with the type of test as the last element in the suite
> +name. Unless these tests are actually present, avoid using ``_test``, ``_unittest``
> +or similar in the suite name.
> 
>  The full test suite name (including the subsystem name) should be specified as
>  the ``.name`` member of the ``kunit_suite`` struct, and forms the base for the
> -module name (see below).
> -
> -Example test suites could include:
> +module name. For example, test suites could include:
> 
>  ``ext4_inode``
>    Part of the ``ext4`` subsystem, testing the ``inode`` area.
> @@ -109,26 +105,27 @@ Example test suites could include:
>    The ``kasan`` subsystem has only one suite, so the suite name is the same as
>    the subsystem name.
> 
> -Avoid names like:
> +Avoid names, for example:
> 
>  ``ext4_ext4_inode``
> -  There's no reason to state the subsystem twice.
> +  There is no reason to state the subsystem twice.
>  ``property_entry``
>    The suite name is ambiguous without the subsystem name.
>  ``kasan_integration_test``
>    Because there is only one suite in the ``kasan`` subsystem, the suite should
> -  just be called ``kasan``. There's no need to redundantly add
> -  ``integration_test``. Should a separate test suite with, for example, unit
> -  tests be added, then that suite could be named ``kasan_unittest`` or similar.
> +  just be called as ``kasan``. Do not redundantly add
> +  ``integration_test``. It should be a separate test suite. For example, if the
> +  unit tests are added, then that suite could be named as ``kasan_unittest`` or
> +  similar.
> 
>  Test Cases
>  ----------
> 
>  Individual tests consist of a single function which tests a constrained
> -codepath, property, or function. In the test output, individual tests' results
> -will show up as subtests of the suite's results.
> +codepath, property, or function. In the test output, an individual test's
> +results will show up as subtests of the suite's results.
> 
> -Tests should be named after what they're testing. This is often the name of the
> +Tests should be named after what they are testing. This is often the name of the
>  function being tested, with a description of the input or codepath being tested.
>  As tests are C functions, they should be named and written in accordance with
>  the kernel coding style.
> @@ -136,7 +133,7 @@ the kernel coding style.
>  .. note::
>          As tests are themselves functions, their names cannot conflict with
>          other C identifiers in the kernel. This may require some creative
> -        naming. It's a good idea to make your test functions `static` to avoid
> +        naming. It is a good idea to make your test functions `static` to avoid
>          polluting the global namespace.
> 
>  Example test names include:
> @@ -150,7 +147,7 @@ Example test names include:
> 
>  Should it be necessary to refer to a test outside the context of its test suite,
>  the *fully-qualified* name of a test should be the suite name followed by the
> -test name, separated by a colon (i.e. ``suite:test``).
> +test name, separated by a colon (``suite:test``).

Please leave the 'i.e.'

> 
>  Test Kconfig Entries
>  ====================
> @@ -162,16 +159,16 @@ This Kconfig entry must:
>  * be named ``CONFIG_<name>_KUNIT_TEST``: where <name> is the name of the test
>    suite.
>  * be listed either alongside the config entries for the driver/subsystem being
> -  tested, or be under [Kernel Hacking]→[Kernel Testing and Coverage]
> -* depend on ``CONFIG_KUNIT``
> +  tested, or be under [Kernel Hacking]->[Kernel Testing and Coverage]
> +* depend on ``CONFIG_KUNIT``.
>  * be visible only if ``CONFIG_KUNIT_ALL_TESTS`` is not enabled.
>  * have a default value of ``CONFIG_KUNIT_ALL_TESTS``.
> -* have a brief description of KUnit in the help text
> +* have a brief description of KUnit in the help text.
> 
> -Unless there's a specific reason not to (e.g. the test is unable to be built as
> -a module), Kconfig entries for tests should be tristate.
> +If we are not able to meet above conditions (for example, the test is unable to
> +be built as a module), Kconfig entries for tests should be tristate.
> 
> -An example Kconfig entry:
> +For example, a Kconfig entry might look like:
> 
>  .. code-block:: none
> 
> @@ -182,8 +179,8 @@ An example Kconfig entry:
>  		help
>  		  This builds unit tests for foo.
> 
> -		  For more information on KUnit and unit tests in general, please refer
> -		  to the KUnit documentation in Documentation/dev-tools/kunit/.
> +		  For more information on KUnit and unit tests in general,
> +		  please refer to the KUnit documentation in Documentation/dev-tools/kunit/.
> 
>  		  If unsure, say N.
> 
> --
> 2.34.1.400.ga245620fadb-goog

Thanks for the cleanups.
 -- Tim



* Re: [PATCH v2 1/7] Documentation: KUnit: Rewrite main page
  2021-12-07 17:11   ` Tim.Bird
@ 2021-12-10  5:30     ` Harinder Singh
  0 siblings, 0 replies; 22+ messages in thread
From: Harinder Singh @ 2021-12-10  5:30 UTC (permalink / raw)
  To: tim.bird
  Cc: davidgow, brendanhiggins, shuah, corbet, linux-kselftest,
	kunit-dev, linux-doc, linux-kernel

Hello Tim,

Thanks for your comments.

See my comments below.

On Tue, Dec 7, 2021 at 10:41 PM <Tim.Bird@sony.com> wrote:
>
> See one additional suggestion below.
>  -- Tim
>
>
> > -----Original Message-----
> > From: Harinder Singh <sharinder@google.com>
> >
> > Add a section on advantages of unit testing, how to write unit tests,
> > KUnit features and Prerequisites.
> >
> > Signed-off-by: Harinder Singh <sharinder@google.com>
> > ---
> >  Documentation/dev-tools/kunit/index.rst | 166 +++++++++++++-----------
> >  1 file changed, 88 insertions(+), 78 deletions(-)
> >
> > diff --git a/Documentation/dev-tools/kunit/index.rst b/Documentation/dev-tools/kunit/index.rst
> > index cacb35ec658d..ebf4bffaa1ca 100644
> > --- a/Documentation/dev-tools/kunit/index.rst
> > +++ b/Documentation/dev-tools/kunit/index.rst
> > @@ -1,11 +1,12 @@
> >  .. SPDX-License-Identifier: GPL-2.0
> >
> > -=========================================
> > -KUnit - Unit Testing for the Linux Kernel
> > -=========================================
> > +=================================
> > +KUnit - Linux Kernel Unit Testing
> > +=================================
> >
> >  .. toctree::
> >       :maxdepth: 2
> > +     :caption: Contents:
> >
> >       start
> >       usage
> > @@ -16,82 +17,91 @@ KUnit - Unit Testing for the Linux Kernel
> >       tips
> >       running_tips
> >
> > -What is KUnit?
> > -==============
> > -
> > -KUnit is a lightweight unit testing and mocking framework for the Linux kernel.
> > -
> > -KUnit is heavily inspired by JUnit, Python's unittest.mock, and
> > -Googletest/Googlemock for C++. KUnit provides facilities for defining unit test
> > -cases, grouping related test cases into test suites, providing common
> > -infrastructure for running tests, and much more.
> > -
> > -KUnit consists of a kernel component, which provides a set of macros for easily
> > -writing unit tests. Tests written against KUnit will run on kernel boot if
> > -built-in, or when loaded if built as a module. These tests write out results to
> > -the kernel log in `TAP <https://testanything.org/>`_ format.
> > -
> > -To make running these tests (and reading the results) easier, KUnit offers
> > -:doc:`kunit_tool <kunit-tool>`, which builds a `User Mode Linux
> > -<http://user-mode-linux.sourceforge.net>`_ kernel, runs it, and parses the test
> > -results. This provides a quick way of running KUnit tests during development,
> > -without requiring a virtual machine or separate hardware.
> > -
> > -Get started now: Documentation/dev-tools/kunit/start.rst
> > -
> > -Why KUnit?
> > -==========
> > -
> > -A unit test is supposed to test a single unit of code in isolation, hence the
> > -name. A unit test should be the finest granularity of testing and as such should
> > -allow all possible code paths to be tested in the code under test; this is only
> > -possible if the code under test is very small and does not have any external
> > -dependencies outside of the test's control like hardware.
> > -
> > -KUnit provides a common framework for unit tests within the kernel.
> > -
> > -KUnit tests can be run on most architectures, and most tests are architecture
> > -independent. All built-in KUnit tests run on kernel startup.  Alternatively,
> > -KUnit and KUnit tests can be built as modules and tests will run when the test
> > -module is loaded.
> > -
> > -.. note::
> > -
> > -        KUnit can also run tests without needing a virtual machine or actual
> > -        hardware under User Mode Linux. User Mode Linux is a Linux architecture,
> > -        like ARM or x86, which compiles the kernel as a Linux executable. KUnit
> > -        can be used with UML either by building with ``ARCH=um`` (like any other
> > -        architecture), or by using :doc:`kunit_tool <kunit-tool>`.
> > -
> > -KUnit is fast. Excluding build time, from invocation to completion KUnit can run
> > -several dozen tests in only 10 to 20 seconds; this might not sound like a big
> > -deal to some people, but having such fast and easy to run tests fundamentally
> > -changes the way you go about testing and even writing code in the first place.
> > -Linus himself said in his `git talk at Google
> > -<https://gist.github.com/lorn/1272686/revisions#diff-53c65572127855f1b003db4064a94573R874>`_:
> > -
> > -     "... a lot of people seem to think that performance is about doing the
> > -     same thing, just doing it faster, and that is not true. That is not what
> > -     performance is all about. If you can do something really fast, really
> > -     well, people will start using it differently."
> > -
> > -In this context Linus was talking about branching and merging,
> > -but this point also applies to testing. If your tests are slow, unreliable, are
> > -difficult to write, and require a special setup or special hardware to run,
> > -then you wait a lot longer to write tests, and you wait a lot longer to run
> > -tests; this means that tests are likely to break, unlikely to test a lot of
> > -things, and are unlikely to be rerun once they pass. If your tests are really
> > -fast, you run them all the time, every time you make a change, and every time
> > -someone sends you some code. Why trust that someone ran all their tests
> > -correctly on every change when you can just run them yourself in less time than
> > -it takes to read their test log?
> > +This section details the kernel unit testing framework.
> > +
> > +Introduction
> > +============
> > +
> > +KUnit (Kernel unit testing framework) provides a common framework for
> > +unit tests within the Linux kernel. Using KUnit, you can define groups
> > +of test cases called test suites. The tests either run on kernel boot
> > +if built-in, or load as a module. KUnit automatically flags and reports
> > +failed test cases in the kernel log. The test results appear in `TAP
> > +(Test Anything Protocol) format <https://testanything.org/>`_. It is inspired by
> > +JUnit, Python’s unittest.mock, and GoogleTest/GoogleMock (C++ unit testing
> > +framework).
> > +
> > +KUnit tests are part of the kernel, written in the C (programming)
> > +language, and test parts of the Kernel implementation (example: a C
> > +language function). Excluding build time, from invocation to
> > +completion, KUnit can run around 100 tests in less than 10 seconds.
> > +KUnit can test any kernel component, for example: file system, system
> > +calls, memory management, device drivers and so on.
> > +
> > +KUnit follows the white-box testing approach. The test has access to
> > +internal system functionality. KUnit runs in kernel space and is not
> > +restricted to things exposed to user-space.
> > +
> > +In addition, KUnit has kunit_tool, a script (``tools/testing/kunit/kunit.py``)
> > +that configures the Linux kernel, runs KUnit tests under QEMU or UML (`User Mode
> > +Linux <http://user-mode-linux.sourceforge.net/>`_), parses the test results and
> > +displays them in a user friendly manner.
> > +
> > +Features
> > +--------
> > +
> > +- Provides a framework for writing unit tests.
> > +- Runs tests on any kernel architecture.
> > +- Runs a test in milliseconds.
> > +
> > +Prerequisites
> > +-------------
> > +
> > +- Any Linux kernel compatible hardware.
> > +- For Kernel under test, Linux kernel version 5.5 or greater.
> > +
> > +Unit Testing
> > +============
> > +
> > +A unit test tests a single unit of code in isolation. A unit test is the finest
> > +granularity of testing and allows all possible code paths to be tested in the
> > +code under test. This is possible if the code under test is small and does not
> > +have any external dependencies outside of the test's control like hardware.
> > +
> > +
> > +Write Unit Tests
> > +----------------
> > +
> > +To write good unit tests, there is a simple but powerful pattern:
> > +Arrange-Act-Assert. This is a great way to structure test cases and
> > +defines an order of operations.
> > +
> > +- Arrange inputs and targets: At the start of the test, arrange the data
> > +  that allows a function to work. Example: initialize a statement or
> > +  object.
> > +- Act on the target behavior: Call your function/code under test.
> > +- Assert expected outcome: Verify the result (or resulting state) as expected
> > +  or not.
>
> Verify the result (or resulting state) as expected or not ->
>    Verify that the result (or resulting state) is as expected or not
>

Done

>
> > +
> > +Unit Testing Advantages
> > +-----------------------
> > +
> > +- Increases testing speed and development in the long run.
> > +- Detects bugs at initial stage and therefore decreases bug fix cost
> > +  compared to acceptance testing.
> > +- Improves code quality.
> > +- Encourages writing testable code.
> >
> >  How do I use it?
> >  ================
> >
> > -*   Documentation/dev-tools/kunit/start.rst - for new users of KUnit
> > -*   Documentation/dev-tools/kunit/tips.rst - for short examples of best practices
> > -*   Documentation/dev-tools/kunit/usage.rst - for a more detailed explanation of KUnit features
> > -*   Documentation/dev-tools/kunit/api/index.rst - for the list of KUnit APIs used for testing
> > -*   Documentation/dev-tools/kunit/kunit-tool.rst - for more information on the kunit_tool helper script
> > -*   Documentation/dev-tools/kunit/faq.rst - for answers to some common questions about KUnit
> > +*   Documentation/dev-tools/kunit/start.rst - for KUnit new users.
> > +*   Documentation/dev-tools/kunit/usage.rst - KUnit features.
> > +*   Documentation/dev-tools/kunit/tips.rst - best practices with
> > +    examples.
> > +*   Documentation/dev-tools/kunit/api/index.rst - KUnit APIs
> > +    used for testing.
> > +*   Documentation/dev-tools/kunit/kunit-tool.rst - kunit_tool helper
> > +    script.
> > +*   Documentation/dev-tools/kunit/faq.rst - KUnit common questions and
> > +    answers.
> > --
> > 2.34.1.400.ga245620fadb-goog
>

Regards,
Harinder Singh


* Re: [PATCH v2 6/7] Documentation: KUnit: Restyle Test Style and Nomenclature page
  2021-12-07 18:46   ` Tim.Bird
@ 2021-12-10  5:30     ` Harinder Singh
  0 siblings, 0 replies; 22+ messages in thread
From: Harinder Singh @ 2021-12-10  5:30 UTC (permalink / raw)
  To: tim.bird
  Cc: davidgow, brendanhiggins, shuah, corbet, linux-kselftest,
	kunit-dev, linux-doc, linux-kernel

Hello Tim,

Thanks for the review comments.

Please see my comments below.

On Wed, Dec 8, 2021 at 12:16 AM <Tim.Bird@sony.com> wrote:
>
>
>
> > -----Original Message-----
> > From: Harinder Singh <sharinder@google.com>
> >
> > Rewrite page to enhance content consistency.
> >
> > Signed-off-by: Harinder Singh <sharinder@google.com>
> > ---
> >  Documentation/dev-tools/kunit/style.rst | 101 ++++++++++++------------
> >  1 file changed, 49 insertions(+), 52 deletions(-)
> >
> > diff --git a/Documentation/dev-tools/kunit/style.rst b/Documentation/dev-tools/kunit/style.rst
> > index 8dbcdc552606..8fae192cae28 100644
> > --- a/Documentation/dev-tools/kunit/style.rst
> > +++ b/Documentation/dev-tools/kunit/style.rst
> > @@ -4,37 +4,36 @@
> >  Test Style and Nomenclature
> >  ===========================
> >
> > -To make finding, writing, and using KUnit tests as simple as possible, it's
> > +To make finding, writing, and using KUnit tests as simple as possible, it is
> >  strongly encouraged that they are named and written according to the guidelines
> > -below. While it's possible to write KUnit tests which do not follow these rules,
> > +below. While it is possible to write KUnit tests which do not follow these rules,
> >  they may break some tooling, may conflict with other tests, and may not be run
> >  automatically by testing systems.
> >
> > -It's recommended that you only deviate from these guidelines when:
> > +It is recommended that you only deviate from these guidelines when:
> >
> > -1. Porting tests to KUnit which are already known with an existing name, or
> > -2. Writing tests which would cause serious problems if automatically run (e.g.,
> > -   non-deterministically producing false positives or negatives, or taking an
> > -   extremely long time to run).
> > +1. Porting tests to KUnit which are already known with an existing name.
> > +2. Writing tests which would cause serious problems if automatically run. For
> > +   example, non-deterministically producing false positives or negatives, or
> > +   taking a long time to run.
> >
> >  Subsystems, Suites, and Tests
> >  =============================
> >
> > -In order to make tests as easy to find as possible, they're grouped into suites
> > -and subsystems. A test suite is a group of tests which test a related area of
> > -the kernel, and a subsystem is a set of test suites which test different parts
> > -of the same kernel subsystem or driver.
> > +To make tests easy to find, they are grouped into suites and subsystems. A test
> > +suite is a group of tests which test a related area of the kernel. A subsystem
> > +is a set of test suites which test different parts of a kernel subsystem
> > +or a driver.
> >
> >  Subsystems
> >  ----------
> >
> >  Every test suite must belong to a subsystem. A subsystem is a collection of one
> >  or more KUnit test suites which test the same driver or part of the kernel. A
> > -rule of thumb is that a test subsystem should match a single kernel module. If
> > -the code being tested can't be compiled as a module, in many cases the subsystem
> > -should correspond to a directory in the source tree or an entry in the
> > -MAINTAINERS file. If unsure, follow the conventions set by tests in similar
> > -areas.
> > +test subsystem should match a single kernel module. If the code being tested
> > +cannot be compiled as a module, in many cases the subsystem should correspond to
> > +a directory in the source tree or an entry in the ``MAINTAINERS`` file. If
> > +unsure, follow the conventions set by tests in similar areas.
> >
> >  Test subsystems should be named after the code being tested, either after the
> >  module (wherever possible), or after the directory or files being tested. Test
> > @@ -42,9 +41,8 @@ subsystems should be named to avoid ambiguity where necessary.
> >
> >  If a test subsystem name has multiple components, they should be separated by
> >  underscores. *Do not* include "test" or "kunit" directly in the subsystem name
> > -unless you are actually testing other tests or the kunit framework itself.
> > -
> > -Example subsystems could be:
> > +unless we are actually testing other tests or the kunit framework itself. For
> > +example, subsystems could be called:
> >
> >  ``ext4``
> >    Matches the module and filesystem name.
> > @@ -56,13 +54,13 @@ Example subsystems could be:
> >    Has several components (``snd``, ``hda``, ``codec``, ``hdmi``) separated by
> >    underscores. Matches the module name.
> >
> > -Avoid names like these:
> > +Avoid names such as these:
> >
> >  ``linear-ranges``
> >    Names should use underscores, not dashes, to separate words. Prefer
> >    ``linear_ranges``.
> >  ``qos-kunit-test``
> > -  As well as using underscores, this name should not have "kunit-test" as a
> > +  This name should not use underscores, not have "kunit-test" as a
>
> This contradicts the preceding sentence.  I believe you have changed the sense
> of the recommendation.
>
> This name should not use underscores, not have ->
>    This name should use underscores, and not have
>

Done

> >    suffix, and ``qos`` is ambiguous as a subsystem name. ``power_qos`` would be a
>
> suffix, and ``qos`` -> suffix.  Also ``qos``
>
> (The way this sentence was originally structured was quite awkward.  I think it's
> better to split into two sentences)
>

Done

> >    better name.
> >  ``pc_parallel_port``
> > @@ -70,34 +68,32 @@ Avoid names like these:
> >    be named ``parport_pc``.
> >
> >  .. note::
> > -        The KUnit API and tools do not explicitly know about subsystems. They're
> > -        simply a way of categorising test suites and naming modules which
> > -        provides a simple, consistent way for humans to find and run tests. This
> > -        may change in the future, though.
> > +        The KUnit API and tools do not explicitly know about subsystems. They are
> > +        a way of categorising test suites and naming modules which provides a
> > +        simple, consistent way for humans to find and run tests. This may change
> > +        in the future.
> >
> >  Suites
> >  ------
> >
> >  KUnit tests are grouped into test suites, which cover a specific area of
> >  functionality being tested. Test suites can have shared initialisation and
>
> 'initialization' seems to be preferred to 'initialisation' in most other
> kernel documentation.  (557 instances of 'initialization' to 58 of 'initialisation')
>
> (I know this isn't part of your patch, but since this is a cleanup and consistency
> patch, maybe change this as well?)
>

Done

> > -shutdown code which is run for all tests in the suite.
> > -Not all subsystems will need to be split into multiple test suites (e.g. simple drivers).
> > +shutdown code which is run for all tests in the suite. Not all subsystems need
> > +to be split into multiple test suites (for example, simple drivers).
> >
> >  Test suites are named after the subsystem they are part of. If a subsystem
> >  contains several suites, the specific area under test should be appended to the
> >  subsystem name, separated by an underscore.
> >
> >  In the event that there are multiple types of test using KUnit within a
> > -subsystem (e.g., both unit tests and integration tests), they should be put into
> > -separate suites, with the type of test as the last element in the suite name.
> > -Unless these tests are actually present, avoid using ``_test``, ``_unittest`` or
> > -similar in the suite name.
> > +subsystem (for example, both unit tests and integration tests), they should be
> > +put into separate suites, with the type of test as the last element in the suite
> > +name. Unless these tests are actually present, avoid using ``_test``, ``_unittest``
> > +or similar in the suite name.
> >
> >  The full test suite name (including the subsystem name) should be specified as
> >  the ``.name`` member of the ``kunit_suite`` struct, and forms the base for the
> > -module name (see below).
> > -
> > -Example test suites could include:
> > +module name. For example, test suites could include:
> >
> >  ``ext4_inode``
> >    Part of the ``ext4`` subsystem, testing the ``inode`` area.
> > @@ -109,26 +105,27 @@ Example test suites could include:
> >    The ``kasan`` subsystem has only one suite, so the suite name is the same as
> >    the subsystem name.
> >
> > -Avoid names like:
> > +Avoid names such as:
> >
> >  ``ext4_ext4_inode``
> > -  There's no reason to state the subsystem twice.
> > +  There is no reason to state the subsystem twice.
> >  ``property_entry``
> >    The suite name is ambiguous without the subsystem name.
> >  ``kasan_integration_test``
> >    Because there is only one suite in the ``kasan`` subsystem, the suite should
> > -  just be called ``kasan``. There's no need to redundantly add
> > -  ``integration_test``. Should a separate test suite with, for example, unit
> > -  tests be added, then that suite could be named ``kasan_unittest`` or similar.
> > +  just be called ``kasan``. Do not redundantly add
> > +  ``integration_test``. Should a separate test suite be added later, for
> > +  example one with unit tests, that suite could be named ``kasan_unittest``
> > +  or similar.
> >
> >  Test Cases
> >  ----------
> >
> >  Individual tests consist of a single function which tests a constrained
> > -codepath, property, or function. In the test output, individual tests' results
> > -will show up as subtests of the suite's results.
> > +codepath, property, or function. In the test output, an individual test's
> > +results will show up as subtests of the suite's results.
> >
> > -Tests should be named after what they're testing. This is often the name of the
> > +Tests should be named after what they are testing. This is often the name of the
> >  function being tested, with a description of the input or codepath being tested.
> >  As tests are C functions, they should be named and written in accordance with
> >  the kernel coding style.
> > @@ -136,7 +133,7 @@ the kernel coding style.
> >  .. note::
> >          As tests are themselves functions, their names cannot conflict with
> >          other C identifiers in the kernel. This may require some creative
> > -        naming. It's a good idea to make your test functions `static` to avoid
> > +        naming. It is a good idea to make your test functions `static` to avoid
> >          polluting the global namespace.
> >
> >  Example test names include:
> > @@ -150,7 +147,7 @@ Example test names include:
> >
> >  Should it be necessary to refer to a test outside the context of its test suite,
> >  the *fully-qualified* name of a test should be the suite name followed by the
> > -test name, separated by a colon (i.e. ``suite:test``).
> > +test name, separated by a colon (``suite:test``).
>
> Please leave the 'i.e.'
>

Done

> >
> >  Test Kconfig Entries
> >  ====================
> > @@ -162,16 +159,16 @@ This Kconfig entry must:
> >  * be named ``CONFIG_<name>_KUNIT_TEST``: where <name> is the name of the test
> >    suite.
> >  * be listed either alongside the config entries for the driver/subsystem being
> > -  tested, or be under [Kernel Hacking]→[Kernel Testing and Coverage]
> > -* depend on ``CONFIG_KUNIT``
> > +  tested, or be under [Kernel Hacking]->[Kernel Testing and Coverage]
> > +* depend on ``CONFIG_KUNIT``.
> >  * be visible only if ``CONFIG_KUNIT_ALL_TESTS`` is not enabled.
> >  * have a default value of ``CONFIG_KUNIT_ALL_TESTS``.
> > -* have a brief description of KUnit in the help text
> > +* have a brief description of KUnit in the help text.
> >
> > -Unless there's a specific reason not to (e.g. the test is unable to be built as
> > -a module), Kconfig entries for tests should be tristate.
> > +Unless we have a specific reason not to (for example, the test is unable to
> > +be built as a module), Kconfig entries for tests should be tristate.
> >
> > -An example Kconfig entry:
> > +For example, a Kconfig entry might look like:
> >
> >  .. code-block:: none
> >
> > @@ -182,8 +179,8 @@ An example Kconfig entry:
> >               help
> >                 This builds unit tests for foo.
> >
> > -               For more information on KUnit and unit tests in general, please refer
> > -               to the KUnit documentation in Documentation/dev-tools/kunit/.
> > +               For more information on KUnit and unit tests in general,
> > +               please refer to the KUnit documentation in Documentation/dev-tools/kunit/.
> >
> >                 If unsure, say N.
> >
> > --
> > 2.34.1.400.ga245620fadb-goog
>
> Thanks for the cleanups.
>  -- Tim
>

Regards,
Harinder Singh

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v2 3/7] Documentation: KUnit: Added KUnit Architecture
  2021-12-07 17:24   ` Tim.Bird
@ 2021-12-10  5:31     ` Harinder Singh
  0 siblings, 0 replies; 22+ messages in thread
From: Harinder Singh @ 2021-12-10  5:31 UTC (permalink / raw)
  To: tim.bird
  Cc: davidgow, brendanhiggins, shuah, corbet, linux-kselftest,
	kunit-dev, linux-doc, linux-kernel

Hello Tim,

Thanks for your review.

See my comments below.

On Tue, Dec 7, 2021 at 10:54 PM <Tim.Bird@sony.com> wrote:
>
> > -----Original Message-----
> > From: Harinder Singh <sharinder@google.com>
> >
> > Describe the components of KUnit and how the kernel mode parts
> > interact with kunit_tool.
> >
> > Signed-off-by: Harinder Singh <sharinder@google.com>
> > ---
> >  .../dev-tools/kunit/architecture.rst          | 206 ++++++++++++++++++
> >  Documentation/dev-tools/kunit/index.rst       |   2 +
> >  .../kunit/kunit_suitememorydiagram.png        | Bin 0 -> 24174 bytes
> >  Documentation/dev-tools/kunit/start.rst       |   1 +
> >  4 files changed, 209 insertions(+)
> >  create mode 100644 Documentation/dev-tools/kunit/architecture.rst
> >  create mode 100644 Documentation/dev-tools/kunit/kunit_suitememorydiagram.png
> >
> > diff --git a/Documentation/dev-tools/kunit/architecture.rst b/Documentation/dev-tools/kunit/architecture.rst
> > new file mode 100644
> > index 000000000000..bb0fb3e3ed01
> > --- /dev/null
> > +++ b/Documentation/dev-tools/kunit/architecture.rst
> > @@ -0,0 +1,206 @@
> > +.. SPDX-License-Identifier: GPL-2.0
> > +
> > +==================
> > +KUnit Architecture
> > +==================
> > +
> > +The KUnit architecture can be divided into two parts:
> > +
> > +- Kernel testing library
> > +- kunit_tool (Command line test harness)
> > +
> > +In-Kernel Testing Framework
> > +===========================
> > +
> > +The kernel testing library supports KUnit tests written in C using
> > +KUnit. KUnit tests are kernel code. KUnit does several things:
> > +
> > +- Organizes tests
> > +- Reports test results
> > +- Provides test utilities
> > +
> > +Test Cases
> > +----------
> > +
> > +The fundamental unit in KUnit is the test case. The KUnit test cases are
> > +grouped into KUnit suites. A KUnit test case is a function with type
> > +signature ``void (*)(struct kunit *test)``.
> > +These test case functions are wrapped in a struct called
> > +``struct kunit_case``. For code, see:
> > +https://elixir.bootlin.com/linux/latest/source/include/kunit/test.h#L145
> > +
> > +It includes:
> > +
> > +- ``run_case``: the function implementing the actual test case.
> > +- ``name``: the test case name.
> > +- ``generate_params``: the parameterized tests generator function. This
> > +  is optional for non-parameterized tests.
> > +
> > +Each KUnit test case gets a ``struct kunit`` context
> > +object passed to it that tracks a running test. The KUnit assertion
> > +macros and other KUnit utilities use the ``struct kunit`` context
> > +object. As an exception, there are two fields:
> > +
> > +- ``->priv``: The setup functions can use it to store arbitrary test
> > +  user data.
> > +
> > +- ``->param_value``: It contains the parameter value which can be
> > +  retrieved in the parameterized tests.
> > +
> > +Test Suites
> > +-----------
> > +
> > +A KUnit suite includes a collection of test cases. The KUnit suites
> > +are represented by the ``struct kunit_suite``. For example:
> > +
> > +.. code-block:: c
> > +
> > +     static struct kunit_case example_test_cases[] = {
> > +             KUNIT_CASE(example_test_foo),
> > +             KUNIT_CASE(example_test_bar),
> > +             KUNIT_CASE(example_test_baz),
> > +             {}
> > +     };
> > +
> > +     static struct kunit_suite example_test_suite = {
> > +             .name = "example",
> > +             .init = example_test_init,
> > +             .exit = example_test_exit,
> > +             .test_cases = example_test_cases,
> > +     };
> > +     kunit_test_suite(example_test_suite);
> > +
> > +In the above example, the test suite ``example_test_suite``, runs the
> > +test cases ``example_test_foo``, ``example_test_bar``, and
> > +``example_test_baz``. Before running the test, the ``example_test_init``
> > +is called and after running the test, ``example_test_exit`` is called.
> > +The ``kunit_test_suite(example_test_suite)`` registers the test suite
> > +with the KUnit test framework.
> > +
> > +Executor
> > +--------
> > +
> > +The KUnit executor can list and run built-in KUnit tests on boot.
> > +Test suites are stored in a linker section
> > +called ``.kunit_test_suites``. For code, see:
> > +https://elixir.bootlin.com/linux/v5.12/source/include/asm-generic/vmlinux.lds.h#L918.
> > +The linker section consists of an array of pointers to
> > +``struct kunit_suite``, and is populated by the ``kunit_test_suites()``
> > +macro. To run all tests compiled into the kernel, the KUnit executor
> > +iterates over the linker section array.
> > +
> > +.. kernel-figure:: kunit_suitememorydiagram.png
> > +     :alt:   KUnit Suite Memory
> > +
> > +     KUnit Suite Memory Diagram
> > +
> > +On the kernel boot, the KUnit executor uses the start and end addresses
> > +of this section to iterate over and run all tests. For code, see:
> > +https://elixir.bootlin.com/linux/latest/source/lib/kunit/executor.c
> > +
> > +When built as a module, the ``kunit_test_suites()`` macro defines a
> > +``module_init()`` function, which runs all the tests in the compilation
> > +unit instead of utilizing the executor.
> > +
> > +To ensure that certain error classes in KUnit tests do not affect other
> > +tests or other parts of the kernel, each KUnit test case executes in a
> > +separate thread context. For code, see:
> > +https://elixir.bootlin.com/linux/latest/source/lib/kunit/try-catch.c#L58
> > +
> > +Assertion Macros
> > +----------------
> > +
> > +KUnit tests verify state using expectations/assertions.
> > +All expectations/assertions are formatted as:
> > +``KUNIT_{EXPECT|ASSERT}_<op>[_MSG](kunit, property[, message])``
> > +
> > +- ``{EXPECT|ASSERT}`` determines whether the check is an assertion or an
> > +  expectation.
> > +
> > +     - For an expectation, if the check fails, the test case is marked
> > +       as failed and the failure is logged.
> > +
> > +     - An assertion, on failure, causes the test case to terminate
> > +       immediately.
> > +
> > +             - Assertions call function:
> > +               ``void __noreturn kunit_abort(struct kunit *)``.
> > +
> > +             - ``kunit_abort`` calls function:
> > +               ``void __noreturn kunit_try_catch_throw(struct kunit_try_catch *try_catch)``.
> > +
> > +             - ``kunit_try_catch_throw`` calls function:
> > +               ``void complete_and_exit(struct completion *, long) __noreturn;``
> > +               and terminates the special thread context.
> > +
> > +- ``<op>`` denotes a check with options: ``TRUE`` (supplied property
> > +  has the boolean value “true”), ``EQ`` (two supplied properties are
> > +  equal), ``NOT_ERR_OR_NULL`` (supplied pointer is not null and does not
> > +  contain an “err” value).
> > +
> > +- ``[_MSG]`` prints a custom message on failure.
> > +
> > +Test Result Reporting
> > +---------------------
> > +KUnit prints test results in KTAP format. KTAP is based on TAP14, see:
> > +https://github.com/isaacs/testanything.github.io/blob/tap14/tap-version-14-specification.md.
> > +KTAP (a format that is yet to be standardized) works with KUnit and Kselftest.
> > +The KUnit executor prints KTAP results to dmesg, and debugfs
> > +(if configured).
> > +
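For illustration only (the exact output varies between KUnit versions), a
report for a suite with two test cases might look roughly like:

```text
TAP version 14
1..1
    # Subtest: example
    1..2
    ok 1 - example_test_foo
    not ok 2 - example_test_bar
not ok 1 - example
```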
> > +Parameterized Tests
> > +-------------------
> > +
> > +Each KUnit parameterized test is associated with a collection of
> > +parameters. The test is invoked multiple times, once for each parameter
> > +value and the parameter is stored in the ``param_value`` field.
> > +The test case includes a ``KUNIT_CASE_PARAM()`` macro that accepts a
> > +generator function.
> > +The generator function returns the next parameter given to the
>
> given to the -> given the
>

Reworded the sentence as "The generator function is passed the previous
parameter and returns the next parameter".

> > +previous parameter in parameterized tests. It also provides a macro to
> > +generate common-case generators based on arrays.
> > +
> > +For code, see:
> > +https://elixir.bootlin.com/linux/v5.12/source/include/kunit/test.h#L1783
>
> The rest looks OK, as far as I can tell.
>  -- Tim
>

Regards,
Harinder Singh


* Re: [PATCH v2 4/7] Documentation: kunit: Reorganize documentation related to running tests
  2021-12-07 17:33   ` Tim.Bird
@ 2021-12-10  5:31     ` Harinder Singh
  0 siblings, 0 replies; 22+ messages in thread
From: Harinder Singh @ 2021-12-10  5:31 UTC (permalink / raw)
  To: tim.bird
  Cc: davidgow, brendanhiggins, shuah, corbet, linux-kselftest,
	kunit-dev, linux-doc, linux-kernel

Hello Tim,

Thanks for your comments.

See my comments below.

On Tue, Dec 7, 2021 at 11:03 PM <Tim.Bird@sony.com> wrote:
>
> > -----Original Message-----
> > From: Harinder Singh <sharinder@google.com>
> >
> > Consolidate documentation running tests into two pages: "run tests with
> > kunit_tool" and "run tests without kunit_tool".
> >
> > Signed-off-by: Harinder Singh <sharinder@google.com>
> > ---
> >  Documentation/dev-tools/kunit/index.rst       |   4 +
> >  Documentation/dev-tools/kunit/run_manual.rst  |  57 ++++
> >  Documentation/dev-tools/kunit/run_wrapper.rst | 247 ++++++++++++++++++
> >  Documentation/dev-tools/kunit/start.rst       |   4 +-
> >  4 files changed, 311 insertions(+), 1 deletion(-)
> >  create mode 100644 Documentation/dev-tools/kunit/run_manual.rst
> >  create mode 100644 Documentation/dev-tools/kunit/run_wrapper.rst
> >
> > diff --git a/Documentation/dev-tools/kunit/index.rst b/Documentation/dev-tools/kunit/index.rst
> > index 75e4ae85adbb..c0d1fd749cd2 100644
> > --- a/Documentation/dev-tools/kunit/index.rst
> > +++ b/Documentation/dev-tools/kunit/index.rst
> > @@ -10,6 +10,8 @@ KUnit - Linux Kernel Unit Testing
> >
> >       start
> >       architecture
> > +     run_wrapper
> > +     run_manual
> >       usage
> >       kunit-tool
> >       api/index
> > @@ -98,6 +100,8 @@ How do I use it?
> >
> >  *   Documentation/dev-tools/kunit/start.rst - for KUnit new users.
> >  *   Documentation/dev-tools/kunit/architecture.rst - KUnit architecture.
> > +*   Documentation/dev-tools/kunit/run_wrapper.rst - run kunit_tool.
> > +*   Documentation/dev-tools/kunit/run_manual.rst - run tests without kunit_tool.
> >  *   Documentation/dev-tools/kunit/usage.rst - KUnit features.
> >  *   Documentation/dev-tools/kunit/tips.rst - best practices with
> >      examples.
> > diff --git a/Documentation/dev-tools/kunit/run_manual.rst b/Documentation/dev-tools/kunit/run_manual.rst
> > new file mode 100644
> > index 000000000000..71e6d6623f88
> > --- /dev/null
> > +++ b/Documentation/dev-tools/kunit/run_manual.rst
> > @@ -0,0 +1,57 @@
> > +.. SPDX-License-Identifier: GPL-2.0
> > +
> > +============================
> > +Run Tests without kunit_tool
> > +============================
> > +
> > +If we do not want to use kunit_tool (for example, to integrate with other
> > +systems, or to run tests on real hardware), we can include KUnit in any
> > +kernel, read out the results, and parse them manually.
> > +
> > +.. note:: KUnit is not designed for use in a production system. It is
> > +          possible that tests may reduce the stability or security of
> > +          the system.
> > +
> > +Configure the Kernel
> > +====================
> > +
> > +KUnit tests can run without kunit_tool. This can be useful if:
> > +
> > +- We have an existing kernel configuration to test.
> > +- We need to run on real hardware (or using an emulator/VM that
> > +  kunit_tool does not support).
> > +- We wish to integrate with some existing testing systems.
> > +
> > +KUnit is configured with the ``CONFIG_KUNIT`` option, and individual
> > +tests can also be built by enabling their config options in our
> > +``.config``. KUnit tests usually (but don't always) have config options
> > +ending in ``_KUNIT_TEST``. Most tests can either be built as a module,
> > +or be built into the kernel.
> > +
> > +.. note ::
> > +
> > +     We can enable the ``KUNIT_ALL_TESTS`` config option to
> > +     automatically enable all tests with satisfied dependencies. This is
> > +     a good way of quickly testing everything applicable to the current
> > +     config.
> > +
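For instance, a minimal ``.config`` fragment for this approach might contain
(illustrative; the exact test options depend on the subsystems enabled):

```text
CONFIG_KUNIT=y
CONFIG_KUNIT_ALL_TESTS=y
```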
> > +Once we have built our kernel (and/or modules), it is simple to run
> > +the tests. If the tests are built-in, then will run automatically on the
>
> then will run -> they will run
> (or 'then they will run')
>

Done

> > +kernel boot. The results will be written to the kernel log (``dmesg``)
> > +in TAP format.
> > +
>
> The rest looks OK to me.
>
> You can add a 'Reviewed-by' for me if you want.
>  -- Tim
>

Regards,
Harinder Singh


* Re: [PATCH v2 5/7] Documentation: KUnit: Rework writing page to focus on writing tests
  2021-12-07 18:28   ` Tim.Bird
@ 2021-12-10  5:31     ` Harinder Singh
  2021-12-10 17:16       ` Tim.Bird
  0 siblings, 1 reply; 22+ messages in thread
From: Harinder Singh @ 2021-12-10  5:31 UTC (permalink / raw)
  To: tim.bird
  Cc: davidgow, brendanhiggins, shuah, corbet, linux-kselftest,
	kunit-dev, linux-doc, linux-kernel

Hello Tim,

Thanks for providing review comments.

Please see my comments below.

On Tue, Dec 7, 2021 at 11:58 PM <Tim.Bird@sony.com> wrote:
>
> > -----Original Message-----
> > From: Harinder Singh <sharinder@google.com>
> >
> > We now have dedicated pages on running tests. Therefore refocus the
> > usage page on writing tests and add content from tips page and
> > information on other architectures.
> >
> > Signed-off-by: Harinder Singh <sharinder@google.com>
> > ---
> >  Documentation/dev-tools/kunit/index.rst |   2 +-
> >  Documentation/dev-tools/kunit/start.rst |   2 +-
> >  Documentation/dev-tools/kunit/usage.rst | 570 ++++++++++--------------
> >  3 files changed, 247 insertions(+), 327 deletions(-)
> >
> > diff --git a/Documentation/dev-tools/kunit/index.rst b/Documentation/dev-tools/kunit/index.rst
> > index c0d1fd749cd2..76c9704d6a1a 100644
> > --- a/Documentation/dev-tools/kunit/index.rst
> > +++ b/Documentation/dev-tools/kunit/index.rst
> > @@ -102,7 +102,7 @@ How do I use it?
> >  *   Documentation/dev-tools/kunit/architecture.rst - KUnit architecture.
> >  *   Documentation/dev-tools/kunit/run_wrapper.rst - run kunit_tool.
> >  *   Documentation/dev-tools/kunit/run_manual.rst - run tests without kunit_tool.
> > -*   Documentation/dev-tools/kunit/usage.rst - KUnit features.
> > +*   Documentation/dev-tools/kunit/usage.rst - write tests.
> >  *   Documentation/dev-tools/kunit/tips.rst - best practices with
> >      examples.
> >  *   Documentation/dev-tools/kunit/api/index.rst - KUnit APIs
> > diff --git a/Documentation/dev-tools/kunit/start.rst b/Documentation/dev-tools/kunit/start.rst
> > index af13f443c976..a858ab009944 100644
> > --- a/Documentation/dev-tools/kunit/start.rst
> > +++ b/Documentation/dev-tools/kunit/start.rst
> > @@ -243,7 +243,7 @@ Next Steps
> >  *   Documentation/dev-tools/kunit/architecture.rst - KUnit architecture.
> >  *   Documentation/dev-tools/kunit/run_wrapper.rst - run kunit_tool.
> >  *   Documentation/dev-tools/kunit/run_manual.rst - run tests without kunit_tool.
> > -*   Documentation/dev-tools/kunit/usage.rst - KUnit features.
> > +*   Documentation/dev-tools/kunit/usage.rst - write tests.
> >  *   Documentation/dev-tools/kunit/tips.rst - best practices with
> >      examples.
> >  *   Documentation/dev-tools/kunit/api/index.rst - KUnit APIs
> > diff --git a/Documentation/dev-tools/kunit/usage.rst b/Documentation/dev-tools/kunit/usage.rst
> > index 63f1bb89ebf5..b321877797f0 100644
> > --- a/Documentation/dev-tools/kunit/usage.rst
> > +++ b/Documentation/dev-tools/kunit/usage.rst
> > @@ -1,57 +1,13 @@
> >  .. SPDX-License-Identifier: GPL-2.0
> >
> > -===========
> > -Using KUnit
> > -===========
> > -
> > -The purpose of this document is to describe what KUnit is, how it works, how it
> > -is intended to be used, and all the concepts and terminology that are needed to
> > -understand it. This guide assumes a working knowledge of the Linux kernel and
> > -some basic knowledge of testing.
> > -
> > -For a high level introduction to KUnit, including setting up KUnit for your
> > -project, see Documentation/dev-tools/kunit/start.rst.
> > -
> > -Organization of this document
> > -=============================
> > -
> > -This document is organized into two main sections: Testing and Common Patterns.
> > -The first covers what unit tests are and how to use KUnit to write them. The
> > -second covers common testing patterns, e.g. how to isolate code and make it
> > -possible to unit test code that was otherwise un-unit-testable.
> > -
> > -Testing
> > -=======
> > -
> > -What is KUnit?
> > ---------------
> > -
> > -"K" is short for "kernel" so "KUnit" is the "(Linux) Kernel Unit Testing
> > -Framework." KUnit is intended first and foremost for writing unit tests; it is
> > -general enough that it can be used to write integration tests; however, this is
> > -a secondary goal. KUnit has no ambition of being the only testing framework for
> > -the kernel; for example, it does not intend to be an end-to-end testing
> > -framework.
> > -
> > -What is Unit Testing?
> > ----------------------
> > -
> > -A `unit test <https://martinfowler.com/bliki/UnitTest.html>`_ is a test that
> > -tests code at the smallest possible scope, a *unit* of code. In the C
> > -programming language that's a function.
> > -
> > -Unit tests should be written for all the publicly exposed functions in a
> > -compilation unit; so that is all the functions that are exported in either a
> > -*class* (defined below) or all functions which are **not** static.
> > -
> >  Writing Tests
> > --------------
> > +=============
> >
> >  Test Cases
> > -~~~~~~~~~~
> > +----------
> >
> >  The fundamental unit in KUnit is the test case. A test case is a function with
> > -the signature ``void (*)(struct kunit *test)``. It calls a function to be tested
> > +the signature ``void (*)(struct kunit *test)``. It calls the function under test
> >  and then sets *expectations* for what should happen. For example:
> >
> >  .. code-block:: c
> > @@ -65,18 +21,19 @@ and then sets *expectations* for what should happen. For example:
> >               KUNIT_FAIL(test, "This test never passes.");
> >       }
> >
> > -In the above example ``example_test_success`` always passes because it does
> > -nothing; no expectations are set, so all expectations pass. On the other hand
> > -``example_test_failure`` always fails because it calls ``KUNIT_FAIL``, which is
> > -a special expectation that logs a message and causes the test case to fail.
> > +In the above example, ``example_test_success`` always passes because it does
> > +nothing; no expectations are set, and therefore all expectations pass. On the
> > +other hand ``example_test_failure`` always fails because it calls ``KUNIT_FAIL``,
> > +which is a special expectation that logs a message and causes the test case to
> > +fail.
> >
> >  Expectations
> >  ~~~~~~~~~~~~
> > -An *expectation* is a way to specify that you expect a piece of code to do
> > -something in a test. An expectation is called like a function. A test is made
> > -by setting expectations about the behavior of a piece of code under test; when
> > -one or more of the expectations fail, the test case fails and information about
> > -the failure is logged. For example:
> > +An *expectation* specifies that we expect a piece of code to do something in a
> > +test. An expectation is called like a function. A test is made by setting
> > +expectations about the behavior of a piece of code under test. When one or more
> > +expectations fail, the test case fails and information about the failure is
> > +logged. For example:
> >
> >  .. code-block:: c
> >
> > @@ -86,29 +43,28 @@ the failure is logged. For example:
> >               KUNIT_EXPECT_EQ(test, 2, add(1, 1));
> >       }
> >
> > -In the above example ``add_test_basic`` makes a number of assertions about the
> > -behavior of a function called ``add``; the first parameter is always of type
> > -``struct kunit *``, which contains information about the current test context;
> > -the second parameter, in this case, is what the value is expected to be; the
> > +In the above example, ``add_test_basic`` makes a number of assertions about the
> > +behavior of a function called ``add``. The first parameter is always of type
> > +``struct kunit *``, which contains information about the current test context.
> > +The second parameter, in this case, is what the value is expected to be. The
> >  last value is what the value actually is. If ``add`` passes all of these
> >  expectations, the test case, ``add_test_basic`` will pass; if any one of these
> >  expectations fails, the test case will fail.
> >
> > -It is important to understand that a test case *fails* when any expectation is
> > -violated; however, the test will continue running, potentially trying other
> > -expectations until the test case ends or is otherwise terminated. This is as
> > -opposed to *assertions* which are discussed later.
> > +A test case *fails* when any expectation is violated; however, the test will
> > +continue to run, and try other expectations until the test case ends or is
> > +otherwise terminated. This is as opposed to *assertions* which are discussed
> > +later.
> >
> > -To learn about more expectations supported by KUnit, see
> > -Documentation/dev-tools/kunit/api/test.rst.
> > +To learn about more KUnit expectations, see Documentation/dev-tools/kunit/api/test.rst.
> >
> >  .. note::
> > -   A single test case should be pretty short, pretty easy to understand,
> > -   focused on a single behavior.
> > +   A single test case should be short, easy to understand, and focused on a
> > +   single behavior.
> >
> > -For example, if we wanted to properly test the add function above, we would
> > -create additional tests cases which would each test a different property that an
> > -add function should have like this:
> > +For example, if we want to rigorously test the ``add`` function above, create
> > +additional tests cases which would test each property that an ``add`` function
> > +should have as shown below:
> >
> >  .. code-block:: c
> >
> > @@ -134,56 +90,43 @@ add function should have like this:
> >               KUNIT_EXPECT_EQ(test, INT_MIN, add(INT_MAX, 1));
> >       }
> >
> > -Notice how it is immediately obvious what all the properties that we are testing
> > -for are.
> > -
> >  Assertions
> >  ~~~~~~~~~~
> >
> > -KUnit also has the concept of an *assertion*. An assertion is just like an
> > -expectation except the assertion immediately terminates the test case if it is
> > -not satisfied.
> > -
> > -For example:
> > +An assertion is like an expectation, except that the assertion immediately
> > +terminates the test case if the condition is not satisfied. For example:
> >
> >  .. code-block:: c
> >
> > -     static void mock_test_do_expect_default_return(struct kunit *test)
> > +     static void test_sort(struct kunit *test)
> >       {
> > -             struct mock_test_context *ctx = test->priv;
> > -             struct mock *mock = ctx->mock;
> > -             int param0 = 5, param1 = -5;
> > -             const char *two_param_types[] = {"int", "int"};
> > -             const void *two_params[] = {&param0, &param1};
> > -             const void *ret;
> > -
> > -             ret = mock->do_expect(mock,
> > -                                   "test_printk", test_printk,
> > -                                   two_param_types, two_params,
> > -                                   ARRAY_SIZE(two_params));
> > -             KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ret);
> > -             KUNIT_EXPECT_EQ(test, -4, *((int *) ret));
> > +             int *a, i, r = 1;
> > +             a = kunit_kmalloc_array(test, TEST_LEN, sizeof(*a), GFP_KERNEL);
> > +             KUNIT_ASSERT_NOT_ERR_OR_NULL(test, a);
> > +             for (i = 0; i < TEST_LEN; i++) {
> > +                     r = (r * 725861) % 6599;
> > +                     a[i] = r;
> > +             }
> > +             sort(a, TEST_LEN, sizeof(*a), cmpint, NULL);
> > +             for (i = 0; i < TEST_LEN-1; i++)
> > +                     KUNIT_EXPECT_LE(test, a[i], a[i + 1]);
> >       }
> >
> > -In this example, the method under test should return a pointer to a value, so
> > -if the pointer returned by the method is null or an errno, we don't want to
> > -bother continuing the test since the following expectation could crash the test
> > -case. `ASSERT_NOT_ERR_OR_NULL(...)` allows us to bail out of the test case if
> > -the appropriate conditions have not been satisfied to complete the test.
> > +In this example, the method under test should return a pointer to a value. If
> > +the pointer is null or an errno, we want to stop the test since the following
> > +expectation could crash the test case. `ASSERT_NOT_ERR_OR_NULL(...)` allows us
> > +to bail out of the test case if the appropriate conditions are not satisfied to
> > +complete the test.
> >
> >  Test Suites
> >  ~~~~~~~~~~~
> >
> > -Now obviously one unit test isn't very helpful; the power comes from having
> > -many test cases covering all of a unit's behaviors. Consequently it is common
> > -to have many *similar* tests; in order to reduce duplication in these closely
> > -related tests most unit testing frameworks - including KUnit - provide the
> > -concept of a *test suite*. A *test suite* is just a collection of test cases
> > -for a unit of code with a set up function that gets invoked before every test
> > -case and then a tear down function that gets invoked after every test case
> > -completes.
> > -
> > -Example:
> > +We need many test cases covering all the unit's behaviors. It is common to have
> > +many similar tests. In order to reduce duplication in these closely related
> > +tests, most unit testing frameworks (including KUnit) provide the concept of a
> > +*test suite*. A test suite is a collection of test cases for a unit of code
> > +with a setup function that gets invoked before every test case and then a tear
> > +down function that gets invoked after every test case completes. For example:
> >
> >  .. code-block:: c
> >
> > @@ -202,23 +145,48 @@ Example:
> >       };
> >       kunit_test_suite(example_test_suite);
> >
> > -In the above example the test suite, ``example_test_suite``, would run the test
> > -cases ``example_test_foo``, ``example_test_bar``, and ``example_test_baz``;
> > -each would have ``example_test_init`` called immediately before it and would
> > -have ``example_test_exit`` called immediately after it.
> > +In the above example, the test suite ``example_test_suite`` would run the test
> > +cases ``example_test_foo``, ``example_test_bar``, and ``example_test_baz``. Each
> > +would have ``example_test_init`` called immediately before it and
> > +``example_test_exit`` called immediately after it.
> >  ``kunit_test_suite(example_test_suite)`` registers the test suite with the
> >  KUnit test framework.
> >
> >  .. note::
> > -   A test case will only be run if it is associated with a test suite.
> > +   A test case will only run if it is associated with a test suite.
> >
> > -``kunit_test_suite(...)`` is a macro which tells the linker to put the specified
> > -test suite in a special linker section so that it can be run by KUnit either
> > -after late_init, or when the test module is loaded (depending on whether the
> > -test was built in or not).
> > +``kunit_test_suite(...)`` is a macro which tells the linker to put the
> > +specified test suite in a special linker section so that it can be run by KUnit
> > +either after ``late_init``, or when the test module is loaded (if the test was
> > +built as a module).
> >
> > -For more information on these types of things see the
> > -Documentation/dev-tools/kunit/api/test.rst.
> > +For more information, see Documentation/dev-tools/kunit/api/test.rst.
> > +
> > +Writing Tests For Other Architectures
> > +-------------------------------------
> > +
> > +Always prefer tests that run on UML to tests that only run under a particular
> Always prefer tests -> It is better to write tests
>

Done

> > +architecture. In addition, prefer tests that run under QEMU or another easy
> prefer tests -> it is better to write tests
>

Done

> > +(and monetarily free) to obtain software environment to a specific piece of
> easy (and monetarily free) to obtain software environment ->
>   easy to obtain (and monetarily free) software environment
>
> (ie - you shouldn't split up 'easy to obtain')
>
> environment to a specific -> rather than tests that require a specific
>

Done

> > +hardware.
> > +
> > +Nevertheless, there are still valid reasons to write an architecture or
>
> an architecture or hardware specific test ->
>   a test that is architecture or hardware specific
>

Done

> > +hardware specific test. For example, we might want to test code that really
> > +belongs in ``arch/some-arch/*``. Even so, try to write the test so that it does
> > +not depend on physical hardware. Some of our test cases may not need hardware;
> > +only a few tests actually require it. When hardware is not
> > +available, instead of disabling tests, we can skip them.
> > +
> > +Now that we have narrowed down exactly what bits are hardware specific, the
> > +actual procedure for writing and running the tests is the same as writing normal
> > +KUnit tests.
> > +
> > +.. important::
> > +   We may have to reset hardware state. If this is not possible, we may only
> > +   be able to run one test case per invocation.
> > +
> > +.. TODO(brendanhiggins@google.com): Add an actual example of an architecture-
> > +   dependent KUnit test.
> >
> >  Common Patterns
> >  ===============
> > @@ -226,43 +194,39 @@ Common Patterns
> >  Isolating Behavior
> >  ------------------
> >
> > -The most important aspect of unit testing that other forms of testing do not
> > -provide is the ability to limit the amount of code under test to a single unit.
> > -In practice, this is only possible by being able to control what code gets run
> > -when the unit under test calls a function and this is usually accomplished
> > -through some sort of indirection where a function is exposed as part of an API
> > -such that the definition of that function can be changed without affecting the
> > -rest of the code base. In the kernel this primarily comes from two constructs,
> > -classes, structs that contain function pointers that are provided by the
> > -implementer, and architecture-specific functions which have definitions selected
> > -at compile time.
> > +Unit testing limits the amount of code under test to a single unit. It controls
> > +what code gets run when the unit under test calls a function. Where a function
> > +is exposed as part of an API such that the definition of that function can be
> > +changed without affecting the rest of the code base. In the kernel, this comes
> > +from two constructs: classes, structs. that contain function pointers provided
>
> ??? I couldn't parse this.
>
> classes, structs. that contain ->
>    classes, which are structs that contain
>

Done

> > +by the implementer and architecture specific functions which have definitions
>
> by the implementer and architecture specific functions which have ->
>   by the implementer, and architecture-specific functions, which have
>

Done

> > +selected at compile time.
>
> I'm not sure if the second comma is needed.  It depends on whether the clause
> 'which have definitions selected at compile time' is intended to describe the
> architecture-specific functions, or constrain them.
>

Done

> >
> >  Classes
> >  ~~~~~~~
> >
> >  Classes are not a construct that is built into the C programming language;
> > -however, it is an easily derived concept. Accordingly, pretty much every project
> > -that does not use a standardized object oriented library (like GNOME's GObject)
> > -has their own slightly different way of doing object oriented programming; the
> > -Linux kernel is no exception.
> > +however, it is an easily derived concept. Accordingly, in most cases, every
> > +project that does not use a standardized object oriented library (like GNOME's
> > +GObject) has their own slightly different way of doing object oriented
> > +programming; the Linux kernel is no exception.
> >
> >  The central concept in kernel object oriented programming is the class. In the
> >  kernel, a *class* is a struct that contains function pointers. This creates a
> >  contract between *implementers* and *users* since it forces them to use the
> > -same function signature without having to call the function directly. In order
> > -for it to truly be a class, the function pointers must specify that a pointer
> > -to the class, known as a *class handle*, be one of the parameters; this makes
> > -it possible for the member functions (also known as *methods*) to have access
> > -to member variables (more commonly known as *fields*) allowing the same
> > -implementation to have multiple *instances*.
> > -
> > -Typically a class can be *overridden* by *child classes* by embedding the
> > -*parent class* in the child class. Then when a method provided by the child
> > -class is called, the child implementation knows that the pointer passed to it is
> > -of a parent contained within the child; because of this, the child can compute
> > -the pointer to itself because the pointer to the parent is always a fixed offset
> > -from the pointer to the child; this offset is the offset of the parent contained
> > -in the child struct. For example:
> > +same function signature without having to call the function directly. To be a
> > +class, the function pointers must specify that a pointer to the class, known as
> > +a *class handle*, be one of the parameters. Thus the member functions (also
> > +known as *methods*) have access to member variables (also known as *fields*)
> > +allowing the same implementation to have multiple *instances*.
> > +
> > +A class can be *overridden* by *child classes* by embedding the *parent class*
> > +in the child class. Then when the child class *method* is called, the child
> > +implementation knows that the pointer passed to it is of a parent contained
> > +within the child. Thus, the child can compute the pointer to itself because the
> > +pointer to the parent is always a fixed offset from the pointer to the child.
> > +This offset is the offset of the parent contained in the child struct. For
> > +example:
> >
> >  .. code-block:: c
> >
> > @@ -290,8 +254,8 @@ in the child struct. For example:
> >               self->width = width;
> >       }
> >
> > -In this example (as in most kernel code) the operation of computing the pointer
> > -to the child from the pointer to the parent is done by ``container_of``.
> > +In this example, computing the pointer to the child from the pointer to the
> > +parent is done by ``container_of``.
> >
> >  Faking Classes
> >  ~~~~~~~~~~~~~~
> > @@ -300,14 +264,11 @@ In order to unit test a piece of code that calls a method in a class, the
> >  behavior of the method must be controllable, otherwise the test ceases to be a
> >  unit test and becomes an integration test.
> >
> > -A fake just provides an implementation of a piece of code that is different than
> > -what runs in a production instance, but behaves identically from the standpoint
> > -of the callers; this is usually done to replace a dependency that is hard to
> > -deal with, or is slow.
> > -
> > -A good example for this might be implementing a fake EEPROM that just stores the
> > -"contents" in an internal buffer. For example, let's assume we have a class that
> > -represents an EEPROM:
> > +A fake class implements a piece of code that is different than what runs in a
> > +production instance, but behaves identically from the standpoint of the callers.
> > +This is done to replace a dependency that is hard to deal with, or is slow. For
> > +example, implementing a fake EEPROM that stores the "contents" in an
> > +internal buffer. Assume we have a class that represents an EEPROM:
> >
> >  .. code-block:: c
> >
> > @@ -316,7 +277,7 @@ represents an EEPROM:
> >               ssize_t (*write)(struct eeprom *this, size_t offset, const char *buffer, size_t count);
> >       };
> >
> > -And we want to test some code that buffers writes to the EEPROM:
> > +We want to test code that buffers writes to the EEPROM:
>
> We -> And we
>
> (Please leave the 'and')
>

Done

> >
> >  .. code-block:: c
> >
> > @@ -329,7 +290,7 @@ And we want to test some code that buffers writes to the EEPROM:
> >       struct eeprom_buffer *new_eeprom_buffer(struct eeprom *eeprom);
> >       void destroy_eeprom_buffer(struct eeprom *eeprom);
> >
> > -We can easily test this code by *faking out* the underlying EEPROM:
> > +We can test this code by *faking out* the underlying EEPROM:
> >
> >  .. code-block:: c
> >
> > @@ -456,14 +417,14 @@ We can now use it to test ``struct eeprom_buffer``:
> >               destroy_eeprom_buffer(ctx->eeprom_buffer);
> >       }
> >
> > -Testing against multiple inputs
> > +Testing Against Multiple Inputs
> >  -------------------------------
> >
> > -Testing just a few inputs might not be enough to have confidence that the code
> > -works correctly, e.g. for a hash function.
> > +Testing just a few inputs is not enough to ensure that the code works correctly,
> > +for example, when testing a hash function.
> >
> > -In such cases, it can be helpful to have a helper macro or function, e.g. this
> > -fictitious example for ``sha1sum(1)``
> > +We can write a helper macro or function. The function is called for each input.
> > +For example, to test ``sha1sum(1)``, we can write:
> >
> >  .. code-block:: c
> >
> > @@ -475,16 +436,15 @@ fictitious example for ``sha1sum(1)``
> >       TEST_SHA1("hello world",  "2aae6c35c94fcfb415dbe95f408b9ce91ee846ed");
> >       TEST_SHA1("hello world!", "430ce34d020724ed75a196dfc2ad67c77772d169");
> >
> > +Note the use of the ``_MSG`` version of ``KUNIT_EXPECT_STREQ`` to print a more
> > +detailed error and make the assertions clearer within the helper macros.
> >
> > -Note the use of ``KUNIT_EXPECT_STREQ_MSG`` to give more context when it fails
> > -and make it easier to track down. (Yes, in this example, ``want`` is likely
> > -going to be unique enough on its own).
> > +The ``_MSG`` variants are useful when the same expectation is called multiple
> > +times (in a loop or helper function) and thus the line number is not enough to
> > +identify what failed, as shown below.
> >
> > -The ``_MSG`` variants are even more useful when the same expectation is called
> > -multiple times (in a loop or helper function) and thus the line number isn't
> > -enough to identify what failed, like below.
> > -
> > -In some cases, it can be helpful to write a *table-driven test* instead, e.g.
> > +In complicated cases, we recommend using a *table-driven test* instead of the
> > +helper macro variation, for example:
> >
> >  .. code-block:: c
> >
> > @@ -513,17 +473,18 @@ In some cases, it can be helpful to write a *table-driven test* instead, e.g.
> >       }
> >
> >
> > -There's more boilerplate involved, but it can:
> > +There is more boilerplate code involved, but it can:
> > +
> > +* be more readable when there are multiple inputs/outputs (due to field names).
> >
> > -* be more readable when there are multiple inputs/outputs thanks to field names,
> > +  * For example, see ``fs/ext4/inode-test.c``.
> >
> > -  * E.g. see ``fs/ext4/inode-test.c`` for an example of both.
> > -* reduce duplication if test cases can be shared across multiple tests.
> > +* reduce duplication if test cases are shared across multiple tests.
> >
> > -  * E.g. if we wanted to also test ``sha256sum``, we could add a ``sha256``
> > +  * For example: if we want to test ``sha256sum``, we could add a ``sha256``
> >      field and reuse ``cases``.
> >
> > -* be converted to a "parameterized test", see below.
> > +* be converted to a "parameterized test".
> >
> >  Parameterized Testing
> >  ~~~~~~~~~~~~~~~~~~~~~
> > @@ -531,7 +492,7 @@ Parameterized Testing
> >  The table-driven testing pattern is common enough that KUnit has special
> >  support for it.
> >
> > -Reusing the same ``cases`` array from above, we can write the test as a
> > +By reusing the same ``cases`` array from above, we can write the test as a
> >  "parameterized test" with the following.
> >
> >  .. code-block:: c
> > @@ -582,193 +543,152 @@ Reusing the same ``cases`` array from above, we can write the test as a
> >
> >  .. _kunit-on-non-uml:
> >
> > -KUnit on non-UML architectures
> > -==============================
> > -
> > -By default KUnit uses UML as a way to provide dependencies for code under test.
> > -Under most circumstances KUnit's usage of UML should be treated as an
> > -implementation detail of how KUnit works under the hood. Nevertheless, there
> > -are instances where being able to run architecture-specific code or test
> > -against real hardware is desirable. For these reasons KUnit supports running on
> > -other architectures.
> > -
> > -Running existing KUnit tests on non-UML architectures
> > ------------------------------------------------------
> > +Exiting Early on Failed Expectations
> > +------------------------------------
> >
> > -There are some special considerations when running existing KUnit tests on
> > -non-UML architectures:
> > +We can use ``KUNIT_EXPECT_EQ`` to mark the test as failed and continue
> > +execution.  In some cases, it is unsafe to continue. We can use the
> > +``KUNIT_ASSERT`` variant to exit on failure.
> >
> > -*   Hardware may not be deterministic, so a test that always passes or fails
> > -    when run under UML may not always do so on real hardware.
> > -*   Hardware and VM environments may not be hermetic. KUnit tries its best to
> > -    provide a hermetic environment to run tests; however, it cannot manage state
> > -    that it doesn't know about outside of the kernel. Consequently, tests that
> > -    may be hermetic on UML may not be hermetic on other architectures.
> > -*   Some features and tooling may not be supported outside of UML.
> > -*   Hardware and VMs are slower than UML.
> > +.. code-block:: c
> >
> > -None of these are reasons not to run your KUnit tests on real hardware; they are
> > -only things to be aware of when doing so.
> > +     void example_test_user_alloc_function(struct kunit *test)
> > +     {
> > +             void *object = alloc_some_object_for_me();
> >
> > -Currently, the KUnit Wrapper (``tools/testing/kunit/kunit.py``) (aka
> > -kunit_tool) only fully supports running tests inside of UML and QEMU; however,
> > -this is only due to our own time limitations as humans working on KUnit. It is
> > -entirely possible to support other emulators and even actual hardware, but for
> > -now QEMU and UML is what is fully supported within the KUnit Wrapper. Again, to
> > -be clear, this is just the Wrapper. The actualy KUnit tests and the KUnit
> > -library they are written in is fully architecture agnostic and can be used in
> > -virtually any setup, you just won't have the benefit of typing a single command
> > -out of the box and having everything magically work perfectly.
> > +             /* Make sure we got a valid pointer back. */
> > +             KUNIT_ASSERT_NOT_ERR_OR_NULL(test, object);
> > +             do_something_with_object(object);
> > +     }
> >
> > -Again, all core KUnit framework features are fully supported on all
> > -architectures, and using them is straightforward: Most popular architectures
> > -are supported directly in the KUnit Wrapper via QEMU. Currently, supported
> > -architectures on QEMU include:
> > +Allocating Memory
> > +-----------------
> >
> > -*   i386
> > -*   x86_64
> > -*   arm
> > -*   arm64
> > -*   alpha
> > -*   powerpc
> > -*   riscv
> > -*   s390
> > -*   sparc
> > +We can use ``kzalloc``, you should prefer ``kunit_kzalloc`` and KUnit will
>
> ???
>
> We can use ``kzalloc``, you should prefer ``kunit_kzalloc`` and KUnit will ->
>   Where you might use ``kzalloc``, you can instead use ``kunit_kzalloc`` and KUnit will
>
> > +ensure that the memory is freed once the test completes.
> >
> > -In order to run KUnit tests on one of these architectures via QEMU with the
> > -KUnit wrapper, all you need to do is specify the flags ``--arch`` and
> > -``--cross_compile`` when invoking the KUnit Wrapper. For example, we could run
> > -the default KUnit tests on ARM in the following manner (assuming we have an ARM
> > -toolchain installed):
> > +This is useful because it lets us use the ``KUNIT_ASSERT_EQ`` macros to exit
> > +early from a test without having to worry about remembering to call ``kfree``.
> > +For example:
> >
> > -.. code-block:: bash
> > +.. code-block:: c
> >
> > -     tools/testing/kunit/kunit.py run --timeout=60 --jobs=12 --arch=arm --cross_compile=arm-linux-gnueabihf-
> > +     void example_test_allocation(struct kunit *test)
> > +     {
> > +             char *buffer = kunit_kzalloc(test, 16, GFP_KERNEL);
> > +             /* Ensure allocation succeeded. */
> > +             KUNIT_ASSERT_NOT_ERR_OR_NULL(test, buffer);
> >
> > -Alternatively, if you want to run your tests on real hardware or in some other
> > -emulation environment, all you need to do is to take your kunitconfig, your
> > -Kconfig options for the tests you would like to run, and merge them into
> > -whatever config your are using for your platform. That's it!
> > +             KUNIT_ASSERT_STREQ(test, buffer, "");
> > +     }
> >
> > -For example, let's say you have the following kunitconfig:
> >
> > -.. code-block:: none
> > +Testing Static Functions
> > +------------------------
> >
> > -     CONFIG_KUNIT=y
> > -     CONFIG_KUNIT_EXAMPLE_TEST=y
> > +If we do not want to expose functions or variables for testing, one option is to
> > +conditionally ``#include`` the test file at the end of your .c file. For
> > +example:
> >
> > -If you wanted to run this test on an x86 VM, you might add the following config
> > -options to your ``.config``:
> > +.. code-block:: c
> >
> > -.. code-block:: none
> > +     /* In my_file.c */
> >
> > -     CONFIG_KUNIT=y
> > -     CONFIG_KUNIT_EXAMPLE_TEST=y
> > -     CONFIG_SERIAL_8250=y
> > -     CONFIG_SERIAL_8250_CONSOLE=y
> > +     static int do_interesting_thing();
> >
> > -All these new options do is enable support for a common serial console needed
> > -for logging.
> > +     #ifdef CONFIG_MY_KUNIT_TEST
> > +     #include "my_kunit_test.c"
> > +     #endif
> >
> > -Next, you could build a kernel with these tests as follows:
> > +Injecting Test-Only Code
> > +------------------------
> >
> > +Similar to the above, we can add test-specific logic. For example:
> >
> > -.. code-block:: bash
> > +.. code-block:: c
> >
> > -     make ARCH=x86 olddefconfig
> > -     make ARCH=x86
> > +     /* In my_file.h */
> >
> > -Once you have built a kernel, you could run it on QEMU as follows:
> > +     #ifdef CONFIG_MY_KUNIT_TEST
> > +     /* Defined in my_kunit_test.c */
> > +     void test_only_hook(void);
> > +     #else
> > +     void test_only_hook(void) { }
> > +     #endif
> >
> > -.. code-block:: bash
> > +This test-only code can be made more useful by accessing the current ``kunit_test``
> > +as shown in the next section: *Accessing The Current Test*.
> >
> > -     qemu-system-x86_64 -enable-kvm \
> > -                        -m 1024 \
> > -                        -kernel arch/x86_64/boot/bzImage \
> > -                        -append 'console=ttyS0' \
> > -                        --nographic
> > +Accessing The Current Test
> > +--------------------------
> >
> > -Interspersed in the kernel logs you might see the following:
> > +In some cases, we need to call test-only code from outside the test file.
> > +For example, see the example in section *Injecting Test-Only Code*, or when
> > +we are providing a fake implementation of an ops struct. Since ``task_struct``
> > +has a ``kunit_test`` field, we can access the current test via
> > +``current->kunit_test``.
> >
> > -.. code-block:: none
> > +Below example includes how to implement "mocking":
>
> Below example -> The example below
>
> >
> > -     TAP version 14
> > -             # Subtest: example
> > -             1..1
> > -             # example_simple_test: initializing
> > -             ok 1 - example_simple_test
> > -     ok 1 - example
> > +.. code-block:: c
> >
> > -Congratulations, you just ran a KUnit test on the x86 architecture!
> > +     #include <linux/sched.h> /* for current */
> >
> > -In a similar manner, kunit and kunit tests can also be built as modules,
> > -so if you wanted to run tests in this way you might add the following config
> > -options to your ``.config``:
> > +     struct test_data {
> > +             int foo_result;
> > +             int want_foo_called_with;
> > +     };
> >
> > -.. code-block:: none
> > +     static int fake_foo(int arg)
> > +     {
> > +             struct kunit *test = current->kunit_test;
> > +             struct test_data *test_data = test->priv;
> >
> > -     CONFIG_KUNIT=m
> > -     CONFIG_KUNIT_EXAMPLE_TEST=m
> > +             KUNIT_EXPECT_EQ(test, test_data->want_foo_called_with, arg);
> > +             return test_data->foo_result;
> > +     }
> >
> > -Once the kernel is built and installed, a simple
> > +     static void example_simple_test(struct kunit *test)
> > +     {
> > +             /* Assume priv is allocated in the suite's .init */
> > +             struct test_data *test_data = test->priv;
>
> I found this description and example hard to follow.  This is possibly due
> to the patch being intermingled with the deletion of completely unrelated
> lines.
>

Reworked the description. Hope the new explanation is easier to follow.

> Does 'priv' stand for privilege, or private?  I assume the latter, but maybe mention
> the meaning of this?  Is 'priv' a field reserved in the kunit_test structure for passing
> arbitrary data to the test function?
>

Yes, priv stands for private and is indeed a field reserved for
passing arbitrary user data. Updated the comments to explain better.

> The lifecycle of the data in test->priv is unclear to me.  Here, the data appears
> to be static, but it's unclear why you would need to pass a structure containing static data
> to the test function.  Would the data for these fields (want_foo_called_with and foo_result)
> be filled in at test invocation time from a list (like from parameterized tests)?
>

In general, priv contains user data and so its lifecycle is up to the
user. KUnit itself has no specific requirements for it. We tried to
explain this in our new updated version of this patch.
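To make the ``priv`` lifecycle concrete, here is a minimal sketch in plain
userspace C (using ``assert`` rather than the real KUnit macros;
``fake_kunit`` and the hook names are hypothetical stand-ins, not KUnit API):
a suite's ``.init`` hook allocates the private data, the test body reads it
through ``priv``, and the ``.exit`` hook frees it.

```c
#include <assert.h>
#include <stdlib.h>

/* Userspace stand-in for struct kunit: only the priv field matters here. */
struct fake_kunit {
	void *priv; /* reserved for arbitrary per-test user data */
};

struct test_data {
	int foo_result;
	int want_foo_called_with;
};

/* Models a suite's .init hook: allocate and attach the private data. */
int example_init(struct fake_kunit *test)
{
	struct test_data *d = calloc(1, sizeof(*d));

	if (!d)
		return -1;
	test->priv = d;
	return 0;
}

/* Models a test case body: read and mutate state through test->priv. */
void example_case(struct fake_kunit *test)
{
	struct test_data *d = test->priv;

	d->foo_result = 42;
	d->want_foo_called_with = 1;
	assert(d->foo_result == 42);
}

/* Models a suite's .exit hook: release what .init allocated. */
void example_exit(struct fake_kunit *test)
{
	free(test->priv);
	test->priv = NULL;
}
```

KUnit itself does not manage ``priv``; allocating in ``.init`` and freeing in
``.exit`` is just one common convention (or use ``kunit_kzalloc`` so cleanup
happens automatically when the test completes).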

> >
> > -.. code-block:: bash
> > +             test_data->foo_result = 42;
> > +             test_data->want_foo_called_with = 1;
> >
> > -     modprobe example-test
> > +             /* In a real test, we'd probably pass a pointer to fake_foo somewhere
> > +              * like an ops struct, etc. instead of calling it directly. */
> > +             KUNIT_EXPECT_EQ(test, fake_foo(1), 42);
> > +     }
>
> OK - I'm totally lost at this point.
>

This example and surrounding text were moved verbatim from the old
tips page. At this stage we are focusing on reorganizing documentation
and will take care of this in upcoming patches.

> >
> > -...will run the tests.
> >
> > -.. note::
> > -   Note that you should make sure your test depends on ``KUNIT=y`` in Kconfig
> > -   if the test does not support module build.  Otherwise, it will trigger
> > -   compile errors if ``CONFIG_KUNIT`` is ``m``.
> > +Note: here we are able to get away with using ``test->priv``, but if we want
> > +something more flexible, we could use a named ``kunit_resource``; see
> > +Documentation/dev-tools/kunit/api/test.rst.
> >
> > -Writing new tests for other architectures
> > ------------------------------------------
> > +Failing The Current Test
> > +------------------------
> >
> > -The first thing you must do is ask yourself whether it is necessary to write a
> > -KUnit test for a specific architecture, and then whether it is necessary to
> > -write that test for a particular piece of hardware. In general, writing a test
> > -that depends on having access to a particular piece of hardware or software (not
> > -included in the Linux source repo) should be avoided at all costs.
> > +If we want to fail the current test, we can use ``kunit_fail_current_test(fmt, args...)``
> > +which is defined in ``<kunit/test-bug.h>`` and does not require pulling in ``<kunit/test.h>``.
> > +For example, we might have an option that enables some extra debug checks on
> > +some data structures, as shown below:
> >
> > -Even if you only ever plan on running your KUnit test on your hardware
> > -configuration, other people may want to run your tests and may not have access
> > -to your hardware. If you write your test to run on UML, then anyone can run your
> > -tests without knowing anything about your particular setup, and you can still
> > -run your tests on your hardware setup just by compiling for your architecture.
> > +.. code-block:: c
> >
> > -.. important::
> > -   Always prefer tests that run on UML to tests that only run under a particular
> > -   architecture, and always prefer tests that run under QEMU or another easy
> > -   (and monetarily free) to obtain software environment to a specific piece of
> > -   hardware.
> > -
> > -Nevertheless, there are still valid reasons to write an architecture or hardware
> > -specific test: for example, you might want to test some code that really belongs
> > -in ``arch/some-arch/*``. Even so, try your best to write the test so that it
> > -does not depend on physical hardware: if some of your test cases don't need the
> > -hardware, only require the hardware for tests that actually need it.
> > -
> > -Now that you have narrowed down exactly what bits are hardware specific, the
> > -actual procedure for writing and running the tests is pretty much the same as
> > -writing normal KUnit tests. One special caveat is that you have to reset
> > -hardware state in between test cases; if this is not possible, you may only be
> > -able to run one test case per invocation.
> > +     #include <kunit/test-bug.h>
> >
> > -.. TODO(brendanhiggins@google.com): Add an actual example of an architecture-
> > -   dependent KUnit test.
> > +     #ifdef CONFIG_EXTRA_DEBUG_CHECKS
> > +     static void validate_my_data(struct data *data)
> > +     {
> > +             if (is_valid(data))
> > +                     return;
> >
> > -KUnit debugfs representation
> > -============================
> > -When kunit test suites are initialized, they create an associated directory
> > -in ``/sys/kernel/debug/kunit/<test-suite>``.  The directory contains one file
> > +             kunit_fail_current_test("data %p is invalid", data);
> >
> > -- results: "cat results" displays results of each test case and the results
> > -  of the entire suite for the last test run.
> > +             /* Normal, non-KUnit, error reporting code here. */
> > +     }
> > +     #else
> > +     static void validate_my_data(struct data *data) { }
> > +     #endif
> >
> > -The debugfs representation is primarily of use when kunit test suites are
> > -run in a native environment, either as modules or builtin.  Having a way
> > -to display results like this is valuable as otherwise results can be
> > -intermixed with other events in dmesg output.  The maximum size of each
> > -results file is KUNIT_LOG_SIZE bytes (defined in ``include/kunit/test.h``).
> > --
> > 2.34.1.400.ga245620fadb-goog
>
>
> Please provide some more explanation about
> accessing the KUnit test at runtime.  I couldn't
> follow what was going on in that section.
>  -- Tim
>

We have expanded it a bit, and will expand more in future patches. In
the meantime, there is some related information at:
https://kunit.dev/mocking.html#storing-and-accessing-state-for-fakes-mocks


Regards,
Harinder Singh

^ permalink raw reply	[flat|nested] 22+ messages in thread

* RE: [PATCH v2 5/7] Documentation: KUnit: Rework writing page to focus on writing tests
  2021-12-10  5:31     ` Harinder Singh
@ 2021-12-10 17:16       ` Tim.Bird
  2021-12-16  5:43         ` Harinder Singh
  0 siblings, 1 reply; 22+ messages in thread
From: Tim.Bird @ 2021-12-10 17:16 UTC (permalink / raw)
  To: sharinder
  Cc: davidgow, brendanhiggins, shuah, corbet, linux-kselftest,
	kunit-dev, linux-doc, linux-kernel

Thanks for responding to my review.  I reviewed the remaining patches (v3 patches 6 and 7)
and found no issues.

 -- Tim

> -----Original Message-----
> From: Harinder Singh <sharinder@google.com>
> 
> Hello Tim,
> 
> Thanks for providing review comments.
> 
> Please see my comments below.
...


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v2 3/7] Documentation: KUnit: Added KUnit Architecture
  2021-12-07  5:40 ` [PATCH v2 3/7] Documentation: KUnit: Added KUnit Architecture Harinder Singh
  2021-12-07 17:24   ` Tim.Bird
@ 2021-12-10 23:08   ` Marco Elver
  2021-12-16  6:12     ` Harinder Singh
  1 sibling, 1 reply; 22+ messages in thread
From: Marco Elver @ 2021-12-10 23:08 UTC (permalink / raw)
  To: Harinder Singh
  Cc: davidgow, brendanhiggins, shuah, corbet, linux-kselftest,
	kunit-dev, linux-doc, linux-kernel, Tim.Bird

On Tue, 7 Dec 2021 at 06:41, 'Harinder Singh' via KUnit Development
<kunit-dev@googlegroups.com> wrote:
>
> Describe the components of KUnit and how the kernel mode parts
> interact with kunit_tool.
>
> Signed-off-by: Harinder Singh <sharinder@google.com>
> ---

You are including several external links to kernel sources via
elixir.bootlin.com. This should be avoided, where kernel.org
alternatives exist.

See one of my comments below which gives an example how you can avoid
this, either by providing a kernel.org link, or better, rendering the
kernel-doc in ReST where appropriate. You should be able to test this
with "make htmldocs".

>  .../dev-tools/kunit/architecture.rst          | 206 ++++++++++++++++++
>  Documentation/dev-tools/kunit/index.rst       |   2 +
>  .../kunit/kunit_suitememorydiagram.png        | Bin 0 -> 24174 bytes
>  Documentation/dev-tools/kunit/start.rst       |   1 +
>  4 files changed, 209 insertions(+)
>  create mode 100644 Documentation/dev-tools/kunit/architecture.rst
>  create mode 100644 Documentation/dev-tools/kunit/kunit_suitememorydiagram.png
>
> diff --git a/Documentation/dev-tools/kunit/architecture.rst b/Documentation/dev-tools/kunit/architecture.rst
> new file mode 100644
> index 000000000000..bb0fb3e3ed01
> --- /dev/null
> +++ b/Documentation/dev-tools/kunit/architecture.rst
> @@ -0,0 +1,206 @@
> +.. SPDX-License-Identifier: GPL-2.0
> +
> +==================
> +KUnit Architecture
> +==================
> +
> +The KUnit architecture can be divided into two parts:
> +
> +- Kernel testing library
> +- kunit_tool (Command line test harness)
> +
> +In-Kernel Testing Framework
> +===========================
> +
> > +The kernel testing library supports writing KUnit tests in C.
> > +KUnit tests are kernel code. KUnit does several things:
> +
> +- Organizes tests
> +- Reports test results
> +- Provides test utilities
> +
> +Test Cases
> +----------
> +
> +The fundamental unit in KUnit is the test case. The KUnit test cases are
> +grouped into KUnit suites. A KUnit test case is a function with type
> +signature ``void (*)(struct kunit *test)``.
> +These test case functions are wrapped in a struct called
> +``struct kunit_case``. For code, see:
> +https://elixir.bootlin.com/linux/latest/source/include/kunit/test.h#L145
> +
> +It includes:
> +
> +- ``run_case``: the function implementing the actual test case.
> +- ``name``: the test case name.
> +- ``generate_params``: the parameterized tests generator function. This
> +  is optional for non-parameterized tests.
> +
> +Each KUnit test case gets a ``struct kunit`` context
> +object passed to it that tracks a running test. The KUnit assertion
> +macros and other KUnit utilities use the ``struct kunit`` context
> > +object. Two fields are of particular note to test authors:
> +
> +- ``->priv``: The setup functions can use it to store arbitrary test
> +  user data.
> +
> +- ``->param_value``: It contains the parameter value which can be
> +  retrieved in the parameterized tests.
> +
> +Test Suites
> +-----------
> +
> +A KUnit suite includes a collection of test cases. The KUnit suites
> +are represented by the ``struct kunit_suite``. For example:
> +
> +.. code-block:: c
> +
> +       static struct kunit_case example_test_cases[] = {
> +               KUNIT_CASE(example_test_foo),
> +               KUNIT_CASE(example_test_bar),
> +               KUNIT_CASE(example_test_baz),
> +               {}
> +       };
> +
> +       static struct kunit_suite example_test_suite = {
> +               .name = "example",
> +               .init = example_test_init,
> +               .exit = example_test_exit,
> +               .test_cases = example_test_cases,
> +       };
> +       kunit_test_suite(example_test_suite);
> +
> > +In the above example, the test suite ``example_test_suite`` runs the
> > +test cases ``example_test_foo``, ``example_test_bar``, and
> > +``example_test_baz``. ``example_test_init`` is called before, and
> > +``example_test_exit`` after, each test case.
> > +``kunit_test_suite(example_test_suite)`` registers the test suite
> > +with the KUnit test framework.
> +
> +Executor
> +--------
> +
> +The KUnit executor can list and run built-in KUnit tests on boot.
> > +The test suites are stored in a linker section
> +called ``.kunit_test_suites``. For code, see:
> +https://elixir.bootlin.com/linux/v5.12/source/include/asm-generic/vmlinux.lds.h#L918.
> +The linker section consists of an array of pointers to
> +``struct kunit_suite``, and is populated by the ``kunit_test_suites()``
> +macro. To run all tests compiled into the kernel, the KUnit executor
> +iterates over the linker section array.
> +
> +.. kernel-figure:: kunit_suitememorydiagram.png
> +       :alt:   KUnit Suite Memory
> +
> +       KUnit Suite Memory Diagram
> +
> > +At boot, the KUnit executor uses the start and end addresses
> +of this section to iterate over and run all tests. For code, see:
> +https://elixir.bootlin.com/linux/latest/source/lib/kunit/executor.c
> +
> +When built as a module, the ``kunit_test_suites()`` macro defines a
> +``module_init()`` function, which runs all the tests in the compilation
> +unit instead of utilizing the executor.
> +
> > +So that some classes of errors do not affect other tests or other
> > +parts of the kernel, each KUnit test case executes in a separate
> > +thread context. For code, see:
> +https://elixir.bootlin.com/linux/latest/source/lib/kunit/try-catch.c#L58
> +
> +Assertion Macros
> +----------------
> +
> +KUnit tests verify state using expectations/assertions.
> +All expectations/assertions are formatted as:
> +``KUNIT_{EXPECT|ASSERT}_<op>[_MSG](kunit, property[, message])``
> +
> +- ``{EXPECT|ASSERT}`` determines whether the check is an assertion or an
> +  expectation.
> +
> > +       - An expectation, on failure, marks the test as failed
> > +         and logs the failure.
> +
> +       - An assertion, on failure, causes the test case to terminate
> +         immediately.
> +
> +               - Assertions call function:
> +                 ``void __noreturn kunit_abort(struct kunit *)``.
> +
> +               - ``kunit_abort`` calls function:
> +                 ``void __noreturn kunit_try_catch_throw(struct kunit_try_catch *try_catch)``.
> +
> +               - ``kunit_try_catch_throw`` calls function:
> +                 ``void complete_and_exit(struct completion *, long) __noreturn;``
> +                 and terminates the special thread context.
> +
> +- ``<op>`` denotes a check with options: ``TRUE`` (supplied property
> +  has the boolean value “true”), ``EQ`` (two supplied properties are
> +  equal), ``NOT_ERR_OR_NULL`` (supplied pointer is not null and does not
> +  contain an “err” value).
> +
> +- ``[_MSG]`` prints a custom message on failure.
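To make the EXPECT/ASSERT distinction concrete, an illustrative sketch (not
from the patch; names are made up): an assertion guards later statements that
would be unsafe on failure, while an expectation lets the case continue.

```c
static void example_check(struct kunit *test)
{
	void *buf = kunit_kzalloc(test, 16, GFP_KERNEL);

	/* ASSERT: terminate this test case on failure, so the
	 * dereference below can never run with a bad pointer. */
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, buf);

	/* EXPECT: log a failure but keep executing the case. */
	KUNIT_EXPECT_EQ_MSG(test, ((char *)buf)[0], 0,
			    "buffer was not zeroed");
}
```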
> +
> +Test Result Reporting
> +---------------------
> +KUnit prints test results in KTAP format. KTAP is based on TAP14, see:
> +https://github.com/isaacs/testanything.github.io/blob/tap14/tap-version-14-specification.md.
> > +KTAP (a not-yet-standardized format) works with KUnit and Kselftest.
> +The KUnit executor prints KTAP results to dmesg, and debugfs
> +(if configured).
> +
> +Parameterized Tests
> +-------------------
> +
> +Each KUnit parameterized test is associated with a collection of
> +parameters. The test is invoked multiple times, once for each parameter
> +value and the parameter is stored in the ``param_value`` field.
> +The test case includes a ``KUNIT_CASE_PARAM()`` macro that accepts a
> +generator function.
> > +The generator function accepts the previous parameter and returns
> > +the next one; KUnit also provides a macro to generate common-case
> > +generators based on arrays.
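For example, a sketch of how these pieces fit together using the array helper
(names here are illustrative, not from the patch):

```c
static const int example_params[] = { 1, 2, 3 };

/* Generates example_gen_params() from the array above. */
KUNIT_ARRAY_PARAM(example, example_params, NULL);

static void example_param_test(struct kunit *test)
{
	const int *param = test->param_value;

	/* Invoked once per element of example_params[]. */
	KUNIT_EXPECT_GT(test, *param, 0);
}

static struct kunit_case example_param_cases[] = {
	KUNIT_CASE_PARAM(example_param_test, example_gen_params),
	{}
};
```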
> +
> +For code, see:
> +https://elixir.bootlin.com/linux/v5.12/source/include/kunit/test.h#L1783

This is a link to an external mirror of the kernel, which should not
be used. If you must point to a specific version and line of the
kernel, use a kernel.org link:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/include/kunit/test.h?h=v5.15#n1872

and ideally using a ReST link.

Furthermore, ReST actually lets you select to inline certain
documentation, which would be appropriate in this case. This can be
done via the ".. kernel-doc: <file>" directive, and you can select
which identifier you want to render in the final document. See
https://www.kernel.org/doc/html/latest/doc-guide/kernel-doc.html#including-kernel-doc-comments

> +
> +kunit_tool (Command Line Test Harness)
> +======================================
> +
> > +kunit_tool is a Python script (``tools/testing/kunit/kunit.py``)
> > +that can be used to configure and build a kernel, execute tests,
> > +and parse test results (the ``run`` command performs the other
> > +commands in order). You can either run KUnit tests using
> > +kunit_tool, or include KUnit in the kernel and parse the results
> > +manually.
> +
> +- ``configure`` command generates the kernel ``.config`` from a
> +  ``.kunitconfig`` file (and any architecture-specific options).
> +  For some architectures, additional config options are specified in the
> +  ``qemu_config`` Python script
> +  (For example: ``tools/testing/kunit/qemu_configs/powerpc.py``).
> +  It parses both the existing ``.config`` and the ``.kunitconfig`` files
> +  and ensures that ``.config`` is a superset of ``.kunitconfig``.
> +  If this is not the case, it will combine the two and run
> +  ``make olddefconfig`` to regenerate the ``.config`` file. It then
> +  verifies that ``.config`` is now a superset. This checks if all
> +  Kconfig dependencies are correctly specified in ``.kunitconfig``.
> +  ``kunit_config.py`` includes the parsing Kconfigs code. The code which
> +  runs ``make olddefconfig`` is a part of ``kunit_kernel.py``. You can
> +  invoke this command via: ``./tools/testing/kunit/kunit.py config`` and
> +  generate a ``.config`` file.
> +- ``build`` runs ``make`` on the kernel tree with required options
> +  (depends on the architecture and some options, for example: build_dir)
> +  and reports any errors.
> +  To build a KUnit kernel from the current ``.config``, you can use the
> +  ``build`` argument: ``./tools/testing/kunit/kunit.py build``.
> > +- ``exec`` command runs the built kernel either directly (when
> > +  configured for User-mode Linux), or via an emulator such
> > +  as QEMU. It reads test results from the log via standard
> > +  output (stdout), and passes them to ``parse`` to be parsed.
> +  If you already have built a kernel with built-in KUnit tests,
> +  you can run the kernel and display the test results with the ``exec``
> +  argument: ``./tools/testing/kunit/kunit.py exec``.
> +- ``parse`` extracts the KTAP output from a kernel log, parses
> +  the test results, and prints a summary. For failed tests, any
> +  diagnostic output will be included.
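Putting the invocations quoted above together, the typical flow looks like
this (commands as given in the text, run from the kernel tree root):

```shell
./tools/testing/kunit/kunit.py config   # generate .config from .kunitconfig
./tools/testing/kunit/kunit.py build    # build the kernel with the tests
./tools/testing/kunit/kunit.py exec     # boot it and display the results

# Or let the wrapper chain config, build, exec and parse in one step:
./tools/testing/kunit/kunit.py run
```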
> diff --git a/Documentation/dev-tools/kunit/index.rst b/Documentation/dev-tools/kunit/index.rst
> index ebf4bffaa1ca..75e4ae85adbb 100644
> --- a/Documentation/dev-tools/kunit/index.rst
> +++ b/Documentation/dev-tools/kunit/index.rst
> @@ -9,6 +9,7 @@ KUnit - Linux Kernel Unit Testing
>         :caption: Contents:
>
>         start
> +       architecture
>         usage
>         kunit-tool
>         api/index
> @@ -96,6 +97,7 @@ How do I use it?
>  ================
>
>  *   Documentation/dev-tools/kunit/start.rst - for KUnit new users.
> +*   Documentation/dev-tools/kunit/architecture.rst - KUnit architecture.
>  *   Documentation/dev-tools/kunit/usage.rst - KUnit features.
>  *   Documentation/dev-tools/kunit/tips.rst - best practices with
>      examples.
> diff --git a/Documentation/dev-tools/kunit/kunit_suitememorydiagram.png b/Documentation/dev-tools/kunit/kunit_suitememorydiagram.png
> new file mode 100644
> index 0000000000000000000000000000000000000000..a1aa7c3b0f63edfea83eb1cef3e2257b47b5ca7b
> GIT binary patch

I think adding binary blobs like this is quite unusual.

There currently are no .png files in the kernel repo, and this would
be the first.

How difficult is it to create an ascii diagram?

> diff --git a/Documentation/dev-tools/kunit/start.rst b/Documentation/dev-tools/kunit/start.rst
> index 55f8df1abd40..5dd2c88fa2bd 100644
> --- a/Documentation/dev-tools/kunit/start.rst
> +++ b/Documentation/dev-tools/kunit/start.rst
> @@ -240,6 +240,7 @@ Congrats! You just wrote your first KUnit test.
>  Next Steps
>  ==========
>
> +*   Documentation/dev-tools/kunit/architecture.rst - KUnit architecture.
>  *   Documentation/dev-tools/kunit/usage.rst - KUnit features.
>  *   Documentation/dev-tools/kunit/tips.rst - best practices with
>      examples.
> --
> 2.34.1.400.ga245620fadb-goog
>
> --
> You received this message because you are subscribed to the Google Groups "KUnit Development" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to kunit-dev+unsubscribe@googlegroups.com.
> To view this discussion on the web visit https://groups.google.com/d/msgid/kunit-dev/20211207054019.1455054-4-sharinder%40google.com.

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v2 5/7] Documentation: KUnit: Rework writing page to focus on writing tests
  2021-12-10 17:16       ` Tim.Bird
@ 2021-12-16  5:43         ` Harinder Singh
  0 siblings, 0 replies; 22+ messages in thread
From: Harinder Singh @ 2021-12-16  5:43 UTC (permalink / raw)
  To: Tim.Bird
  Cc: David Gow, Brendan Higgins, shuah, corbet, linux-kselftest,
	kunit-dev, linux-doc, linux-kernel

Hello Tim,

On Fri, Dec 10, 2021 at 10:46 PM <Tim.Bird@sony.com> wrote:
>
> Thanks for responding to my review.  I reviewed the remaining patches (v3 patches 6 and 7)
> and found no issues.
>
Would you like me to add your Reviewed-by tag for patches 6 and 7?
>  -- Tim
>
> > -----Original Message-----
> > From: Harinder Singh <sharinder@google.com>
> >
> > Hello Tim,
> >
> > Thanks for providing review comments.
> >
> > Please see my comments below.
> ...
>
Thanks,
Harinder Singh

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v2 3/7] Documentation: KUnit: Added KUnit Architecture
  2021-12-10 23:08   ` Marco Elver
@ 2021-12-16  6:12     ` Harinder Singh
  0 siblings, 0 replies; 22+ messages in thread
From: Harinder Singh @ 2021-12-16  6:12 UTC (permalink / raw)
  To: Marco Elver
  Cc: David Gow, Brendan Higgins, shuah, corbet, linux-kselftest,
	kunit-dev, linux-doc, linux-kernel, Tim.Bird

Hello Marco,

See my comments below.

On Sat, Dec 11, 2021 at 4:38 AM Marco Elver <elver@google.com> wrote:
>
> On Tue, 7 Dec 2021 at 06:41, 'Harinder Singh' via KUnit Development
> <kunit-dev@googlegroups.com> wrote:
> >
> > Describe the components of KUnit and how the kernel mode parts
> > interact with kunit_tool.
> >
> > Signed-off-by: Harinder Singh <sharinder@google.com>
> > ---
>
> You are including several external links to kernel sources via
> elixir.bootlin.com. This should be avoided, where kernel.org
> alternatives exist.
>
> See one of my comments below which gives an example how you can avoid
> this, either by providing a kernel.org link, or better, rendering the
> kernel-doc in ReST where appropriate. You should be able to test this
> with "make htmldocs".
>
I used the kernel-doc directive where I thought it made sense.
Elsewhere I replaced the Elixir links with git.kernel.org links.
Please see the follow-up patches.

> > [...]
> > diff --git a/Documentation/dev-tools/kunit/kunit_suitememorydiagram.png b/Documentation/dev-tools/kunit/kunit_suitememorydiagram.png
> > new file mode 100644
> > index 0000000000000000000000000000000000000000..a1aa7c3b0f63edfea83eb1cef3e2257b47b5ca7b
> > GIT binary patch
>
> I think adding binary blobs like this is quite unusual.
>
> There currently are no .png files in the kernel repo, and this would
> be the first.
>
> How difficult is it to create an ascii diagram?
>
There are a lot of .svg files in the documentation, so I think it is
fine to add .png files; we are not setting a new precedent here.
I do not have experience creating ASCII diagrams, and this diagram is
somewhat complicated. We can try that in a follow-up patch. Is this
ok?

> > [...]

Thanks,
Harinder Singh

^ permalink raw reply	[flat|nested] 22+ messages in thread

end of thread, other threads:[~2021-12-16  6:13 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-12-07  5:40 [PATCH v2 0/7] Documentation: KUnit: Rework KUnit documentation Harinder Singh
2021-12-07  5:40 ` [PATCH v2 1/7] Documentation: KUnit: Rewrite main page Harinder Singh
2021-12-07 17:11   ` Tim.Bird
2021-12-10  5:30     ` Harinder Singh
2021-12-07  5:40 ` [PATCH v2 2/7] Documentation: KUnit: Rewrite getting started Harinder Singh
2021-12-07  5:40 ` [PATCH v2 3/7] Documentation: KUnit: Added KUnit Architecture Harinder Singh
2021-12-07 17:24   ` Tim.Bird
2021-12-10  5:31     ` Harinder Singh
2021-12-10 23:08   ` Marco Elver
2021-12-16  6:12     ` Harinder Singh
2021-12-07  5:40 ` [PATCH v2 4/7] Documentation: kunit: Reorganize documentation related to running tests Harinder Singh
2021-12-07 17:33   ` Tim.Bird
2021-12-10  5:31     ` Harinder Singh
2021-12-07  5:40 ` [PATCH v2 5/7] Documentation: KUnit: Rework writing page to focus on writing tests Harinder Singh
2021-12-07 18:28   ` Tim.Bird
2021-12-10  5:31     ` Harinder Singh
2021-12-10 17:16       ` Tim.Bird
2021-12-16  5:43         ` Harinder Singh
2021-12-07  5:40 ` [PATCH v2 6/7] Documentation: KUnit: Restyle Test Style and Nomenclature page Harinder Singh
2021-12-07 18:46   ` Tim.Bird
2021-12-10  5:30     ` Harinder Singh
2021-12-07  5:40 ` [PATCH v2 7/7] Documentation: KUnit: Restyled Frequently Asked Questions Harinder Singh

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).