From b889ce4f671a2540b80fac611e99529af4bf066c Mon Sep 17 00:00:00 2001
From: Erik Schierboom
Date: Tue, 16 Apr 2024 11:28:16 +0200
Subject: [PATCH 01/20] Tweak test generators docs

---
 building/tooling/test-generators.md | 116 ++++++++++++----------------
 1 file changed, 49 insertions(+), 67 deletions(-)

diff --git a/building/tooling/test-generators.md b/building/tooling/test-generators.md
index 9b4066ea..7cb21c43 100644
--- a/building/tooling/test-generators.md
+++ b/building/tooling/test-generators.md
@@ -1,15 +1,51 @@
 # Test Generators

-A Test Generator is a piece of software that creates a practice exercise's tests from the common [problem specifications](https://github.com/exercism/problem-specifications).
-Some tracks also create tests for concept exercises from a similar track-owned data source.
+A Test Generator is a piece of software that can automatically generate a practice exercise's tests.
+It will do this by converting the exercise's JSON test cases to track-specific tests.

-A Test Generator give us these advantages:
+## Benefits

-1. They allow adding exercises more quickly without writing much boilerplate code.
-2. Contributors can focus on the **design** of an exercise immediately.
-3. Along the track life, automatic updates of existing tests can lower maintainer workload.
+There are three key benefits to having a Test Generator:

-## Contributing to Test Generators
+1. Adding exercises is simpler and faster.
+2. The "boring" parts of adding an exercise are automated.
+3. Tests are easy to sync with the latest canonical data.
+
+## Using
+
+How to use a Test Generator is track-specific.
+Look for instructions in the track's `README.md`, `CONTRIBUTING.md` or the Test Generator code's directory.
+
+## Goal
+
+In general, one runs a Test Generator to either:
+
+1. Generate the tests for a new exercise
+2. 
Update the tests of an existing exercise + +### Generate tests for new exercise + +When adding a new exercise, adding a Test Generator for that exercise allows one to generate the tests file. +Provided the Test Generator itself has already been implemented, adding support for the new exercise will be (far) less work than writing it from scratch. + +### Update tests of existing exercise + +Once a Test Generator has been written for an exercise, you can re-run it to update/sync the exercise with its latest canonical data. +We recommend doing this periodically, to check if there are problematic test cases that need to be updated or new tests added you want to include. + +## Starting point + +There are two possible starting points when implementing a Test Generator for an exercise: + +1. The exercise is new and doesn't yet have any tests +2. The exercise already exists and has existing tests + +In the first case, you're completely free to design the exercise as you see fit. +In the second case, you should try to adhere to the existing tests as much as you can, to not break any existing solutions. + +## Implementation + +At its core, a Test Generator takes in an exercise slug and outputs a test file for that exercise. Each language may have its own Test Generator, written in that language. It adds code and sometimes files to what [`configlet`](/docs/building/configlet) created / updated. @@ -22,6 +58,12 @@ You should also know: - what [`canonical-data.json` in problem specifications](https://github.com/exercism/problem-specifications?tab=readme-ov-file#test-data-canonical-datajson) may provide. - why ["creating from scratch" is different from "reproducing for updates"](#from-scratch-vs-updating). +## Flow + +There are a couple + +## Contributing to Test Generators + ## Creating a Test Generator from scratch There are various test generators in Exercism's tracks. 
@@ -42,28 +84,6 @@ The [forum](https://forum.exercism.org/c/exercism/building-exercism/125) also is - [`canonical-data.json` data structure](https://github.com/exercism/problem-specifications?tab=readme-ov-file#test-data-canonical-datajson) is well documented. There is optional nesting of `cases` arrays in `cases` mixed with actual test cases. - The contents of `input` and `expected` test case keys of `canonical-data.json` vary largely. These can include simple scalar values, lambdas in pseudo code, lists of operations to perform on the students code and any other kind of input or result one can imagine. -### From Scratch vs. Updating - -There are 2 common tasks a test generator may do, that require separate approaches: - -- [Creating tests from scratch](#creating-tests-from-scratch) -- [Reproducing tests for updates](#reproducing-tests-for-updates) - -The reason for this distinction is "designing the exercise" vs. "production-ready code". - -When creating tests from scratch the test generator should provide all the information contained in `canonical-data.json` in the resulting files. -This enables contributors to simply open up the generated test file(s) and find all relevant information interwoven with the tracks boilerplate code. -They then design the exercise's tests and student facing code based on these files rather than on the original `canonical-data.json`. -As there is no knowledge of exercise specific things, yet, a one-fits-all template targeting the boilerplate code can be used. - -When the exercise is already in production, changes in `canonical-data.json` are rarely a reason to change the design of the exercise. -So reproducing tests for updates is based on the existing design and should result in production-ready code. -Much of the additional data presented when creating the exercise from scratch is no longer part of the result. - -Instead, very often additional conversion of test case data is required, which is specific to this exercise. 
-Most tracks opt for having at least one template per exercise for this. -This way they can represent all the design choices in that template without complicating things too much for further contribution. - ### Creating tests from scratch This is more productive in the beginning of a tracks life. @@ -94,41 +114,3 @@ There are optional things a test generator might do: - Respect `scenarios` for grouping / test case selection - Skip over "reimplemented" test cases (those referred to in a `reimplements` key of another test case) - Update `tests.toml` with `include=false` to reflect tests skipped by `scenarios` / `reimplements` - -### Reproducing tests for updates - -This may become more relevant over track life time. -It is much harder to implement than the "from scratch" part. -If you need to invest much effort here, maybe manual maintenance is more efficient. -Also keep in mind: maintaining the test generator adds to the maintainers workload, too. - -```exercism/note -Choose a flexible and extensible templating engine! -The test cases vary largely between exercises. -They include simple scalar values, lambdas in pseudo code, lists of operations to perform on the students code and any other kind of input or result one can imagine. 
-``` - -Doing the bare minimum required for a usable updating test generator includes: - -- Read the `canonical-data.json` of the exercise from `configlet` cache or retrieve it from GitHub directly -- If the tracks testing framework supports no nested test case groups, flatten the nested data structure into a list of test cases -- Render the test cases into the exercise specific template(s) located in an exercise's `.meta/` folder - - Render production-ready code that matches the manually designed exercise - - Skip over "reimplemented" test cases (those referred to in a `reimplements` key of another test case) - - Render only test cases selected by `tests.toml` (or another track-specific data source) - -There are different strategies for respecting test case changes like "replace always", "replace when forced to", "use `tests.toml` to ignore replaced test cases" (works like a baseline for known test issues). -None of them is perfect. - -```exercism/note -Don't try to have a versatile one-fits-all template! -There is way too much variation in the exercises to handle all in one template. -``` - -There are optional things a test generator might do: - -- Provide a library of templates and / or extensions to the template engine -- Maintain or respect another track-specific data source than `tests.toml` -- Maintain student code file(s) or additional files required by the track -- Handle `scenarios` for grouping / test case selection -- Have a check functionality (e.g. 
to run after `configlet sync`) to detect when updating is required From dd76254b0b90faed2a7b99945f3e3eab7725a3f1 Mon Sep 17 00:00:00 2001 From: Erik Schierboom Date: Tue, 16 Apr 2024 14:14:27 +0200 Subject: [PATCH 02/20] More updates --- building/tooling/test-generators.md | 15 ++++++++++----- 1 file changed, 10 insertions(+), 5 deletions(-) diff --git a/building/tooling/test-generators.md b/building/tooling/test-generators.md index 7cb21c43..7ac3fdd5 100644 --- a/building/tooling/test-generators.md +++ b/building/tooling/test-generators.md @@ -25,13 +25,13 @@ In general, one runs a Test Generator to either: ### Generate tests for new exercise -When adding a new exercise, adding a Test Generator for that exercise allows one to generate the tests file. +Adding a Test Generator for a new exercise allows one to generate its tests file(s). Provided the Test Generator itself has already been implemented, adding support for the new exercise will be (far) less work than writing it from scratch. ### Update tests of existing exercise -Once a Test Generator has been written for an exercise, you can re-run it to update/sync the exercise with its latest canonical data. -We recommend doing this periodically, to check if there are problematic test cases that need to be updated or new tests added you want to include. +Once an exercise has a Test Generator, you can re-run it to update/sync the exercise with its latest canonical data. +We recommend doing this periodically, to check if there are problematic test cases that need to be updated or new tests that you might want to include. ## Starting point @@ -40,8 +40,9 @@ There are two possible starting points when implementing a Test Generator for an 1. The exercise is new and doesn't yet have any tests 2. The exercise already exists and has existing tests -In the first case, you're completely free to design the exercise as you see fit. 
-In the second case, you should try to adhere to the existing tests as much as you can, to not break any existing solutions. +```exercism/caution +If there are existing tests, implement the Test Generator such that the tests it generates do not break existing solutions. +``` ## Implementation @@ -114,3 +115,7 @@ There are optional things a test generator might do: - Respect `scenarios` for grouping / test case selection - Skip over "reimplemented" test cases (those referred to in a `reimplements` key of another test case) - Update `tests.toml` with `include=false` to reflect tests skipped by `scenarios` / `reimplements` + +``` + +``` From 98e31a68d175e22717b8e97dfbb43dd73cf15720 Mon Sep 17 00:00:00 2001 From: Erik Schierboom Date: Tue, 16 Apr 2024 14:44:02 +0200 Subject: [PATCH 03/20] More work --- building/tooling/test-generators.md | 27 ++++++++++++++++++--------- 1 file changed, 18 insertions(+), 9 deletions(-) diff --git a/building/tooling/test-generators.md b/building/tooling/test-generators.md index 7ac3fdd5..3c25762b 100644 --- a/building/tooling/test-generators.md +++ b/building/tooling/test-generators.md @@ -44,9 +44,24 @@ There are two possible starting points when implementing a Test Generator for an If there are existing tests, implement the Test Generator such that the tests it generates do not break existing solutions. ``` -## Implementation +## Using configlet -At its core, a Test Generator takes in an exercise slug and outputs a test file for that exercise. 
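That core idea — slug in, test file out — can be sketched in a few lines of Python. Everything here (the directory layout, the `render_tests` placeholder, the output file name) is illustrative only, not any track's actual implementation:

```python
import json
from pathlib import Path


def render_tests(canonical: dict) -> str:
    # Placeholder for the track-specific rendering step: this sketch only emits
    # one comment per test case; a real generator would emit actual test code.
    return "\n".join(f"# {case['description']}" for case in canonical["cases"])


def generate(slug: str, repo_root: Path) -> Path:
    """Read an exercise's canonical data and write the generated test file."""
    canonical_path = repo_root / "canonical-data" / slug / "canonical-data.json"
    canonical = json.loads(canonical_path.read_text())
    out_path = repo_root / "exercises" / "practice" / slug / "generated_tests.py"
    out_path.parent.mkdir(parents=True, exist_ok=True)
    out_path.write_text(render_tests(canonical))
    return out_path
```

A track would replace `render_tests` with its own rendering logic and adjust the paths to match its repository layout.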
+`configlet` is the primary track maintenance tool and can be used to: + +- Create the exercise files for a new exercise: run `bin/configlet create --practice-exercise ` +- Sync the `tests.toml` file of an existing exercise: run `bin/configlet sync --tests --update --exercise ` +- Fetch the exercise's canonical data to disk (this is a side-effect or either of the above commands) + +This makes `configlet` a great tool to use in combination with the Test Generator for some really powerful workflows. + +There are two options to combine `configlet` and the Test Generator: + +- A (shell) script calls both `configlet` and the Test Generator +- The Test Generator directly calls `configlet` + +Which one to use is up to you, the maintainer. + +## TODO Each language may have its own Test Generator, written in that language. It adds code and sometimes files to what [`configlet`](/docs/building/configlet) created / updated. @@ -55,16 +70,10 @@ You should find all the details in the tracks contribution docs or a `README` ne You should also know: -- what [`configlet create`](/docs/building/configlet/create) or [`configlet sync`](/docs/building/configlet/sync) do. +- what [`configlet create`]or [`configlet sync`](/docs/building/configlet/sync) do. - what [`canonical-data.json` in problem specifications](https://github.com/exercism/problem-specifications?tab=readme-ov-file#test-data-canonical-datajson) may provide. - why ["creating from scratch" is different from "reproducing for updates"](#from-scratch-vs-updating). -## Flow - -There are a couple - -## Contributing to Test Generators - ## Creating a Test Generator from scratch There are various test generators in Exercism's tracks. 
From c73462ae7c8f94079dfc1fff9c8791f3fc694991 Mon Sep 17 00:00:00 2001 From: Erik Schierboom Date: Tue, 16 Apr 2024 14:50:22 +0200 Subject: [PATCH 04/20] CLI --- building/tooling/test-generators.md | 19 +++++++++++++++---- 1 file changed, 15 insertions(+), 4 deletions(-) diff --git a/building/tooling/test-generators.md b/building/tooling/test-generators.md index 3c25762b..854213be 100644 --- a/building/tooling/test-generators.md +++ b/building/tooling/test-generators.md @@ -54,12 +54,23 @@ If there are existing tests, implement the Test Generator such that the tests it This makes `configlet` a great tool to use in combination with the Test Generator for some really powerful workflows. -There are two options to combine `configlet` and the Test Generator: +## Command-line interface -- A (shell) script calls both `configlet` and the Test Generator -- The Test Generator directly calls `configlet` +You'll want to make using the Test Generator both easy _and_ powerful. +For that, we recommend creating one or more script files. -Which one to use is up to you, the maintainer. +```exercism/note +You're free to choose whatever script file format fits your track best. +Shell scripts and PowerShell scripts are common options that can both work well. 
+``` + +Here is an example of a shell script that combines `configlet` and a Test Generator to quickly scaffold a new exercise: + +```shell +bin/fetch-configlet +bin/configlet create --practice-exercise +path/to/test-generator +``` ## TODO From 142b3d882730c52b4f3fc3c0b093a405dc79baa6 Mon Sep 17 00:00:00 2001 From: Erik Schierboom Date: Tue, 16 Apr 2024 15:28:06 +0200 Subject: [PATCH 05/20] Docs --- building/tooling/test-generators.md | 105 ++++++++++------------------ 1 file changed, 37 insertions(+), 68 deletions(-) diff --git a/building/tooling/test-generators.md b/building/tooling/test-generators.md index 854213be..ef06cac3 100644 --- a/building/tooling/test-generators.md +++ b/building/tooling/test-generators.md @@ -44,6 +44,43 @@ There are two possible starting points when implementing a Test Generator for an If there are existing tests, implement the Test Generator such that the tests it generates do not break existing solutions. ``` +## Design + +Broadly speaking, test files are generated using either: + +- Code: the tests files are (mostly) generated via code +- Templates: the tests files are (mostly) generated using templates + +In general, the code-based approach will lead to fairly complex Test Generator code, whereas the template-based approach is simpler. + +What we recommend is the following flow: + +1. The Test Generator reads the exercise's canonical data +2. The Test Generator converts the exercise's canonical data into a format that can be used in a template +3. 
The Test Generator passes the exercise's canonical data to an exercise-specific template + +The key benefit of this setup is that each exercise has its own template, which: + +- Makes it obvious how the test files is generated +- Makes them easier to debug +- Makes it safe to edit them without risking breaking another exercise + +```exercism/caution +Some additional things to be aware of when designing the test generator + +- Minimize the pre-processing of canonical data inside the Test Generator +- Try to reduce coupling between templates +``` + +## Implementation + +The Test Generator is usually (mostly) written in the track's language. + +```exercism/caution +While you're free to use additional languages, each additional language will make it harder to find people that can maintain or contribute to the track. +We recommend using the track's language where possible, only using additional languages when it cannot be avoided. +``` + ## Using configlet `configlet` is the primary track maintenance tool and can be used to: @@ -71,71 +108,3 @@ bin/fetch-configlet bin/configlet create --practice-exercise path/to/test-generator ``` - -## TODO - -Each language may have its own Test Generator, written in that language. -It adds code and sometimes files to what [`configlet`](/docs/building/configlet) created / updated. -The code usually is rendered from template files, written for the tracks preferred templating engine. -You should find all the details in the tracks contribution docs or a `README` near the test generator. - -You should also know: - -- what [`configlet create`]or [`configlet sync`](/docs/building/configlet/sync) do. -- what [`canonical-data.json` in problem specifications](https://github.com/exercism/problem-specifications?tab=readme-ov-file#test-data-canonical-datajson) may provide. -- why ["creating from scratch" is different from "reproducing for updates"](#from-scratch-vs-updating). 
- -## Creating a Test Generator from scratch - -There are various test generators in Exercism's tracks. -These guidelines are based on the experiences of these tracks. - -Even so test generators work very similar, they are very track specific. -It starts with the choice of the templating engine and ends with additional things they do for each track. -So a common test generator was not and will not be written. - -There were helpful discussions [around the Rust](https://forum.exercism.org/t/advice-for-writing-a-test-generator/7178) and the [JavaScript](https://forum.exercism.org/t/test-generators-for-tracks/10615) test generators. -The [forum](https://forum.exercism.org/c/exercism/building-exercism/125) also is the best place for seeking additional advice. - -### Things to know - -- `configlet` cache with a local copy of the problem specifications is stored in a [location depending on the users system](https://nim-lang.org/docs/osappdirs.html#getCacheDir). - Use `configlet info -o -v d | head -1 | cut -d " " -f 5` to get the location. - Or fetch data from the problem specifications repository directly (`https://raw.githubusercontent.com/exercism/problem-specifications/main/exercises/{{exercise-slug}}/canonical-data.json`) -- [`canonical-data.json` data structure](https://github.com/exercism/problem-specifications?tab=readme-ov-file#test-data-canonical-datajson) is well documented. There is optional nesting of `cases` arrays in `cases` mixed with actual test cases. -- The contents of `input` and `expected` test case keys of `canonical-data.json` vary largely. These can include simple scalar values, lambdas in pseudo code, lists of operations to perform on the students code and any other kind of input or result one can imagine. - -### Creating tests from scratch - -This is more productive in the beginning of a tracks life. -It is way more easy to implement than the "updating" part. 
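The optional nesting of `cases` arrays mentioned above can be handled with a short recursive helper. A sketch in Python — the `->` separator used to combine group and test case descriptions is an arbitrary choice:

```python
def flatten_cases(node: dict, prefix: str = "") -> list[dict]:
    """Flatten nested `cases` groups into a flat list of test cases.

    Group descriptions are folded into each test case's description so that
    every generated test keeps a unique name.
    """
    flattened = []
    for case in node.get("cases", []):
        description = f"{prefix}{case['description']}"
        if "cases" in case:  # a grouping node, not an actual test case
            flattened.extend(flatten_cases(case, prefix=f"{description} -> "))
        else:
            flattened.append({**case, "description": description})
    return flattened
```

Tracks whose test frameworks support grouped tests can skip the flattening and map the nesting onto the framework's grouping mechanism instead.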
- -Doing only the bare minimum required for a first usable test generator may already help contributors a lot: - -- Read the `canonical-data.json` of the exercise from `configlet` cache or retrieve it from GitHub directly -- Preserve all data (including `comments`, `description` and `scenarios`) -- If the tracks testing framework supports no nested test case groups, flatten the nested data structure into a list of test cases -- Dump the test cases into the one-fits-all boilerplate template(s) - - Preserve the test case grouping for nested test case groups, e.g. - - using the test frameworks grouping capability - - using comments and code folding markers (`{{{`, `}}}`) - - concatenating group `description` and test case `description` - - Show all data (including `comments`, `description` and `scenarios`) - -```exercism/note -Don't try to produce perfect production-ready code! -Dump all data and let the contributor design the exercise from that. -There is way too much variation in the exercises to handle all in one template. -``` - -There are optional things a test generator might do: - -- Provide code for a simple test case (e.g. 
call a function with `input`, compare result to `expected`) -- Provide boilerplate code for student code file(s) or additional files required by the track -- Respect `scenarios` for grouping / test case selection -- Skip over "reimplemented" test cases (those referred to in a `reimplements` key of another test case) -- Update `tests.toml` with `include=false` to reflect tests skipped by `scenarios` / `reimplements` - -``` - -``` From d497b019d38e85946a1f833261cfc2ddb6f5df80 Mon Sep 17 00:00:00 2001 From: Erik Schierboom Date: Tue, 16 Apr 2024 16:24:58 +0200 Subject: [PATCH 06/20] More work --- building/tooling/test-generators.md | 113 ++++++++++++++++++++++++---- 1 file changed, 100 insertions(+), 13 deletions(-) diff --git a/building/tooling/test-generators.md b/building/tooling/test-generators.md index ef06cac3..c7d25c22 100644 --- a/building/tooling/test-generators.md +++ b/building/tooling/test-generators.md @@ -1,7 +1,7 @@ # Test Generators -A Test Generator is a piece of software that can automatically generate a practice exercise's tests. -It will do this by converting the exercise's JSON test cases to track-specific tests. +A Test Generator is a track-specifc piece of software that can automatically generate a practice exercise's tests. +It will do this by converting the exercise's JSON test cases to tests in the format of that track. ## Benefits @@ -11,12 +11,7 @@ The are three key benefits from having a Test Generator: 2. Automate "boring" parts of adding an exercise. 3. Easy to sync tests with latest canonical data. -## Using - -How to use a Test Generator is track-specific. -Look for instructions in the track's `README.md`, `CONTRIBUTING.md` or the Test Generator code's directory. - -## Goal +## Use cases In general, one runs a Test Generator to either: @@ -55,9 +50,10 @@ In general, the code-based approach will lead to fairly complex Test Generator c What we recommend is the following flow: -1. The Test Generator reads the exercise's canonical data -2. 
The Test Generator converts the exercise's canonical data into a format that can be used in a template -3. The Test Generator passes the exercise's canonical data to an exercise-specific template +1. Reads the exercise's canonical data +2. Exclude the test cases that are marked as `include = false` in the exercise's `tests.toml` file +3. Convert the exercise's canonical data into a format that can be used in a template +4. Pass the exercise's canonical data to an exercise-specific template The key benefit of this setup is that each exercise has its own template, which: @@ -81,7 +77,70 @@ While you're free to use additional languages, each additional language will mak We recommend using the track's language where possible, only using additional languages when it cannot be avoided. ``` -## Using configlet +### Canonical data + +The core data Test Generators work with is an exercise's [`canonical-data.json` file](https://github.com/exercism/problem-specifications?tab=readme-ov-file#test-data-canonical-datajson). +This file is defined in the [exercism/problem-specifications repo](https://github.com/exercism/problem-specifications), which defines shared metadata for many Exercism's exercises. + +#### Structure + +Canonical data is defined in a JSON object. +This object contains a `"cases"` field which contains the test cases. +These test cases usually correspond one-to-one to tests in your track. + +Each test case has a couple of properties, like a description, input value(s) and expected value. 
Here is a (partial) example of the [canonical-data.json file of the leap exercise](https://github.com/exercism/problem-specifications/blob/main/exercises/leap/canonical-data.json):

```json
{
  "exercise": "leap",
  "cases": [
    {
      "uuid": "6466b30d-519c-438e-935d-388224ab5223",
      "description": "year not divisible by 4 in common year",
      "property": "leapYear",
      "input": {
        "year": 2015
      },
      "expected": false
    },
    {
      "uuid": "4fe9b84c-8e65-489e-970b-856d60b8b78e",
      "description": "year divisible by 4, not divisible by 100 in leap year",
      "property": "leapYear",
      "input": {
        "year": 1996
      },
      "expected": true
    }
  ]
}
```

The structure of the `canonical-data.json` file is [well documented](https://github.com/exercism/problem-specifications?tab=readme-ov-file#test-data-canonical-datajson) (there is also a [JSON schema](https://github.com/exercism/problem-specifications/blob/main/canonical-data.schema.json)).

```exercism/caution
Some exercises use nesting, where `cases` are nested in other `cases`.
Only the innermost `cases` will actually have any test cases; their parent `cases` will only be used for grouping.
Be aware that you might need to combine a test case's description with its parent description(s) to create a unique test name.
```

```exercism/caution
The contents of the `input` and `expected` test case keys vary widely.
In most cases, they'll be scalar values (like numbers, booleans or strings) or simple objects.
However, occasionally you'll also find more complex values that will likely require a bit of preprocessing, such as lambdas in pseudo code, lists of operations to perform on the student's code and more.
```

#### Reading canonical-data.json files

There are a couple of options to read the `canonical-data.json` files:

1. Fetch them directly from the `problem-specifications` repository (e.g. 
`https://raw.githubusercontent.com/exercism/problem-specifications/main/exercises/leap/canonical-data.json`). +2. Add the `problem-specifications` repo as a Git submodule to the track repo. +3. Read them from the `configlet` cache. + The [location depends on the user's system](https://nim-lang.org/docs/osappdirs.html#getCacheDir), but you can use `configlet info -o -v d | head -1 | cut -d " " -f 5` to programmatically get the location. + +### Using configlet `configlet` is the primary track maintenance tool and can be used to: @@ -91,7 +150,7 @@ We recommend using the track's language where possible, only using additional la This makes `configlet` a great tool to use in combination with the Test Generator for some really powerful workflows. -## Command-line interface +### Command-line interface You'll want to make using the Test Generator both easy _and_ powerful. For that, we recommend creating one or more script files. @@ -108,3 +167,31 @@ bin/fetch-configlet bin/configlet create --practice-exercise path/to/test-generator ``` + +## Building from scratch + +Before you start building a Test Generator, we suggest you look at a couple of existing Test Generators to get a feel for how other tracks have implemented them: + +- [C#](https://github.com/exercism/csharp/blob/main/docs/GENERATORS.md) +- [Clojure](https://github.com/exercism/clojure/blob/main/generator.clj) +- [Common Lisp](https://github.com/exercism/common-lisp/blob/main/bin/lisp_exercise_generator.py) +- [Crystal](https://github.com/exercism/crystal/tree/main/test-generator) +- [Emacs Lisp](https://github.com/exercism/emacs-lisp/blob/main/tools/practice-exercise-generator.el) +- [F#](https://github.com/exercism/fsharp/blob/main/docs/GENERATORS.md) +- [Perl 5](https://github.com/exercism/perl5/tree/main/t/generator) +- [Pharo Smalltalk](https://github.com/exercism/pharo-smalltalk/blob/main/dev/src/ExercismDev/ExercismGenerator.class.st) +- 
[Python](https://github.com/exercism/python/blob/main/docs/GENERATOR.md) +- [Rust](https://github.com/exercism/rust/blob/main/docs/CONTRIBUTING.md#creating-a-new-exercise) +- [Swift](https://github.com/exercism/swift/tree/main/generator) + +If you have any questions, the [forum](https://forum.exercism.org/c/exercism/building-exercism/125) is the best place to ask them. +The forum discussions [around the Rust](https://forum.exercism.org/t/advice-for-writing-a-test-generator/7178) and the [JavaScript](https://forum.exercism.org/t/test-generators-for-tracks/10615) test generators might be helpful too. + +## Using or contributing + +How to use or contribute to a Test Generator is track-specific. +Look for instructions in the track's `README.md`, `CONTRIBUTING.md` or the Test Generator code's directory. + +``` + +``` From 989c01fa3ff8f5d7d44147a7c5f9e17f1f8f4995 Mon Sep 17 00:00:00 2001 From: Erik Schierboom Date: Tue, 16 Apr 2024 16:33:48 +0200 Subject: [PATCH 07/20] Tweaks --- building/tooling/test-generators.md | 66 +++++++++++++++++++++++++++++ 1 file changed, 66 insertions(+) diff --git a/building/tooling/test-generators.md b/building/tooling/test-generators.md index c7d25c22..79701e8c 100644 --- a/building/tooling/test-generators.md +++ b/building/tooling/test-generators.md @@ -117,6 +117,21 @@ Here is a (partial) example of the [canonical-data.json file of the leap exercis } ``` +The Test Generator's main responsibility is to transform this JSON data into track-specific tests. 
Here's how the above canonical data could be converted into a test file (using Nim as an example):

```nim
import unittest
import leap

suite "Leap":
  test "year not divisible by 4 in common year":
    check isLeapYear(2015) == false

  test "year divisible by 4, not divisible by 100 in leap year":
    check isLeapYear(1996) == true
```

The structure of the `canonical-data.json` file is [well documented](https://github.com/exercism/problem-specifications?tab=readme-ov-file#test-data-canonical-datajson) (there is also a [JSON schema](https://github.com/exercism/problem-specifications/blob/main/canonical-data.schema.json)).

@@ -140,6 +155,57 @@ There are a couple of options to read the `canonical-data.json` files:
 3. Read them from the `configlet` cache.
    The [location depends on the user's system](https://nim-lang.org/docs/osappdirs.html#getCacheDir), but you can use `configlet info -o -v d | head -1 | cut -d " " -f 5` to programmatically get the location.

#### Track-specific test cases

If your track would like to add some additional, track-specific test cases (which are not found in the canonical data), a nice option is to allow creating an `additional-test-cases.json` file, which the Test Generator can then merge with the `canonical-data.json` file before passing it to the template for rendering.

### Templates

The template engine to use will be track-specific.
Ideally, you'll want your templates to be as straightforward as possible, so don't worry about code duplication and such.

The templates receive their data from the Test Generator and iterate over it to render the tests.

```exercism/note
To help keep the templates simple, it might be useful to do a little pre-processing on the Test Generator side, or else define some "filters" or whatever extension mechanism your templates allow for. 
+``` + + + ### Using configlet `configlet` is the primary track maintenance tool and can be used to: From 91add0bdc9980676e166ee89393e43a4ca103991 Mon Sep 17 00:00:00 2001 From: Erik Schierboom Date: Wed, 17 Apr 2024 09:57:18 +0200 Subject: [PATCH 08/20] Minor rewording --- building/tooling/README.md | 2 +- building/tooling/test-generators.md | 87 +++++++++++------------- building/tracks/new/implement-tooling.md | 2 +- 3 files changed, 42 insertions(+), 49 deletions(-) diff --git a/building/tooling/README.md b/building/tooling/README.md index dcbc5e53..7dd70728 100644 --- a/building/tooling/README.md +++ b/building/tooling/README.md @@ -46,5 +46,5 @@ Track tooling is usually (mostly) written in the track's language. ```exercism/caution While you're free to use additional languages, each additional language will make it harder to find people that can maintain or contribute to the track. -We recommend using the track's language where possible, only using additional languages when it cannot be avoided. +We recommend using the track's language where possible, when it makes maintaining or contributing easier. ``` diff --git a/building/tooling/test-generators.md b/building/tooling/test-generators.md index 79701e8c..4c67ed14 100644 --- a/building/tooling/test-generators.md +++ b/building/tooling/test-generators.md @@ -74,14 +74,19 @@ The Test Generator is usually (mostly) written in the track's language. ```exercism/caution While you're free to use additional languages, each additional language will make it harder to find people that can maintain or contribute to the track. -We recommend using the track's language where possible, only using additional languages when it cannot be avoided. +We recommend using the track's language where possible, only using additional languages when it makes maintaining or contributing easier. 
``` ### Canonical data -The core data Test Generators work with is an exercise's [`canonical-data.json` file](https://github.com/exercism/problem-specifications?tab=readme-ov-file#test-data-canonical-datajson). +The core data the Test Generator works with is an exercise's [`canonical-data.json` file](https://github.com/exercism/problem-specifications?tab=readme-ov-file#test-data-canonical-datajson). This file is defined in the [exercism/problem-specifications repo](https://github.com/exercism/problem-specifications), which defines shared metadata for many Exercism's exercises. +```exercism/note +Not all exercises have a `canonical-data.json` file. +If case they don't, you'll need to manually create the tests, as there is no data for the Test Generator to work with. +``` + #### Structure Canonical data is defined in a JSON object. @@ -134,17 +139,31 @@ suite "Leap": The structure of the `canonical-data.json` file is [well documented](https://github.com/exercism/problem-specifications?tab=readme-ov-file#test-data-canonical-datajson) (there is also a [JSON schema](https://github.com/exercism/problem-specifications/blob/main/canonical-data.schema.json)). +##### Nesting + +Some exercises use nesting, where `cases` are nested in other `cases` keys. +Only the innermost (leaf) `cases` will actually have any test cases, their parent `cases` will only ever be used for grouping. + ```exercism/caution -Some exercises use nesting, where `cases` are nested in other `cases`. -Only the innermost `cases` will actually have any test cases, its parent `cases` will only be used for grouping. -Be aware that you might need to combine the descriptions of test case description with its parent description(s) to create a unique test name. 
+If your track does not support grouping tests, you'll need to: + +- Traverse/flatten the `cases` hierarchy to end up with only the innermost (leaf) test cases +- Combine the test case description with its parent description(s) to create a unique test name ``` -```exercism/caution +#### Input and expected values + The contents of the `input` and `expected` test case keys vary widely. In most cases, they'll be scalar values (like numbers, booleans or strings) or simple objects. However, occasionally you'll also find more complex values that will likely require a bit or preprocessing, such as lambdas in pseudo code, lists of operations to perform on the students code and more. -``` + +#### Scenarios + +Test cases have an optional `scenarios` field. +This field can be used by the test generator to special case certain test cases. +The most common use case is to ignore certain types of tests, for example tests with the `"unicode"` scenario as your track's language might not support Unicode. + +The full list of scenarios can be found [here](https://github.com/exercism/problem-specifications/blob/main/SCENARIOS.txt). #### Reading canonical-data.json files @@ -170,42 +189,6 @@ The templates themselves will get their data from the Test Generator on which th To help keep the templates simple, it might be useful to do a little pre-processing on the Test Generator side or else define some "filters" or whatever extension mechanism your templates allow for. ``` - - ### Using configlet `configlet` is the primary track maintenance tool and can be used to: @@ -253,11 +236,21 @@ Before you start building a Test Generator, we suggest you look at a couple of e If you have any questions, the [forum](https://forum.exercism.org/c/exercism/building-exercism/125) is the best place to ask them. 
 The forum discussions [around the Rust](https://forum.exercism.org/t/advice-for-writing-a-test-generator/7178) and the [JavaScript](https://forum.exercism.org/t/test-generators-for-tracks/10615) test generators might be helpful too.
 
-## Using or contributing
+### Minimum Viable Product
 
-How to use or contribute to a Test Generator is track-specific.
-Look for instructions in the track's `README.md`, `CONTRIBUTING.md` or the Test Generator code's directory.
+We recommend incrementally building the Test Generator, starting with a Minimum Viable Product.
+A bare minimum version would read an exercise's `canonical-data.json` and just pass that data to the template.
 
-```
+Start by focusing on a single exercise, preferably a simple one like `leap`.
+Only when you have that working should you gradually add more exercises.
+
+And try to keep the Test Generator as simple as it can be.
 
+```exercism/note
+Ideally, a contributor could you just/paste/modify an existing template without having to understand how the Test Generator works internally.
 ```
+
+## Using or contributing
+
+How to use or contribute to a Test Generator is track-specific.
+Look for instructions in the track's `README.md`, `CONTRIBUTING.md` or the Test Generator code's directory.
diff --git a/building/tracks/new/implement-tooling.md b/building/tracks/new/implement-tooling.md
index 8977b06c..17d6afe5 100644
--- a/building/tracks/new/implement-tooling.md
+++ b/building/tracks/new/implement-tooling.md
@@ -52,7 +52,7 @@ Track tooling is usually (mostly) written in the track's language.
 
 ```exercism/caution
 While you're free to use additional languages, each additional language will make it harder to find people that can maintain or contribute to the track.
-We recommend using the track's language where possible, only using additional languages when it cannot be avoided.
+We recommend using the track's language where possible, when it makes maintaining or contributing easier.
``` ## Deployment From 6ea3da53c9d88ca68b4027c11494d4555599d203 Mon Sep 17 00:00:00 2001 From: Erik Schierboom Date: Wed, 17 Apr 2024 10:27:08 +0200 Subject: [PATCH 09/20] Work --- building/tooling/test-generators.md | 34 ++++++++++++++--------------- 1 file changed, 17 insertions(+), 17 deletions(-) diff --git a/building/tooling/test-generators.md b/building/tooling/test-generators.md index 4c67ed14..fb627052 100644 --- a/building/tooling/test-generators.md +++ b/building/tooling/test-generators.md @@ -1,32 +1,32 @@ # Test Generators -A Test Generator is a track-specifc piece of software that can automatically generate a practice exercise's tests. -It will do this by converting the exercise's JSON test cases to tests in the format of that track. +A Test Generator is a track-specifc piece of software to automatically generate a practice exercise's tests. +It does this by converting the exercise's JSON test cases to tests in the track's language. ## Benefits -The are three key benefits from having a Test Generator: +Some benefits of having a Test Generator are: -1. Adding exercises is simpler and faster. -2. Automate "boring" parts of adding an exercise. -3. Easy to sync tests with latest canonical data. +1. Exercises can be added faster +2. Automates "boring" parts of adding an exercise +3. Easy to sync tests with latest canonical data ## Use cases In general, one runs a Test Generator to either: -1. Generate the tests for a new exercise -2. Update the tests of an existing exercise +1. Generate the tests for a _new_ exercise +2. Update the tests of an _existing_ exercise ### Generate tests for new exercise Adding a Test Generator for a new exercise allows one to generate its tests file(s). -Provided the Test Generator itself has already been implemented, adding support for the new exercise will be (far) less work than writing it from scratch. 
+Provided the Test Generator itself has already been implemented, generating the tests for the new exercise will be (far) less work than writing them from scratch. ### Update tests of existing exercise Once an exercise has a Test Generator, you can re-run it to update/sync the exercise with its latest canonical data. -We recommend doing this periodically, to check if there are problematic test cases that need to be updated or new tests that you might want to include. +We recommend doing this periodically, to check if there are problematic test cases that need to be updated or new tests you might want to include. ## Starting point @@ -46,7 +46,7 @@ Broadly speaking, test files are generated using either: - Code: the tests files are (mostly) generated via code - Templates: the tests files are (mostly) generated using templates -In general, the code-based approach will lead to fairly complex Test Generator code, whereas the template-based approach is simpler. +We've found that the code-based approach will lead to fairly complex Test Generator code, whereas the template-based approach is simpler. What we recommend is the following flow: @@ -91,9 +91,9 @@ If case they don't, you'll need to manually create the tests, as there is no dat Canonical data is defined in a JSON object. This object contains a `"cases"` field which contains the test cases. -These test cases usually correspond one-to-one to tests in your track. +These test cases (normally) correspond one-to-one to tests in your track. -Each test case has a couple of properties, like a description, input value(s) and expected value. +Each test case has a couple of properties, with the description, property, input value(s) and expected value being the most important ones. 
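As a rough, track-agnostic sketch of consuming those properties, the snippet below parses a hand-trimmed stand-in for a `canonical-data.json` payload in Python (the UUID is a placeholder and the payload is invented for illustration, not real canonical data):

```python
import json

# Hand-trimmed stand-in for an exercise's canonical-data.json.
# Illustrative only: the real leap file has more cases and real UUIDs.
raw = """
{
  "exercise": "leap",
  "cases": [
    {
      "uuid": "00000000-0000-0000-0000-000000000000",
      "description": "year not divisible by 4 in common year",
      "property": "leapYear",
      "input": { "year": 2015 },
      "expected": false
    }
  ]
}
"""

canonical_data = json.loads(raw)

# Pull out the properties a generator cares about most:
# description, property, input value(s) and expected value.
for case in canonical_data["cases"]:
    print(case["description"], case["property"], case["input"]["year"], case["expected"])
    # → year not divisible by 4 in common year leapYear 2015 False
```

From here, a generator would feed these values into whatever rendering step the track uses.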
Here is a (partial) example of the [canonical-data.json file of the leap exercise](https://github.com/exercism/problem-specifications/blob/main/exercises/leap/canonical-data.json): ```json @@ -123,7 +123,7 @@ Here is a (partial) example of the [canonical-data.json file of the leap exercis ``` The Test Generator's main responsibility is to transform this JSON data into track-specific tests. -Here's how the above +Here's how the above JSON could translate into Nim test code: ```nim import unittest @@ -137,7 +137,7 @@ suite "Leap": check isLeapYear(1996) == true ``` -The structure of the `canonical-data.json` file is [well documented](https://github.com/exercism/problem-specifications?tab=readme-ov-file#test-data-canonical-datajson) (there is also a [JSON schema](https://github.com/exercism/problem-specifications/blob/main/canonical-data.schema.json)). +The structure of the `canonical-data.json` file is [well documented](https://github.com/exercism/problem-specifications?tab=readme-ov-file#test-data-canonical-datajson) and it also has a [JSON schema](https://github.com/exercism/problem-specifications/blob/main/canonical-data.schema.json) definition. ##### Nesting @@ -176,11 +176,11 @@ There are a couple of options to read the `canonical-data.json` files: #### Track-specific test cases -If your track would like to add some additional, track-specific test cases (which are not found in the canonical data), as nice option is to allow creating an `additional-test-cases.json` files, which the Test Generator can then merge with the `canonical-data.json` file before passing it to the template for rendering. +If your track would like to add some additional, track-specific test cases (which are not found in the canonical data), one option is to creating an `additional-test-cases.json` file, which the Test Generator can then merge with the `canonical-data.json` file before passing it to the template for rendering. ### Templates -The template engine to use will be track-specific. 
+The template engine to use will likely be track-specific. Ideally, you'll want your templates to be as straightforward as possible, so don't worry about code duplication and such. The templates themselves will get their data from the Test Generator on which they iterate over to render them. From 5352adb5b07918cac260ac192992a4ccd27d1731 Mon Sep 17 00:00:00 2001 From: Erik Schierboom Date: Wed, 17 Apr 2024 15:34:41 +0200 Subject: [PATCH 10/20] Update building/tooling/test-generators.md Co-authored-by: mk-mxp <55182845+mk-mxp@users.noreply.github.com> --- building/tooling/test-generators.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/building/tooling/test-generators.md b/building/tooling/test-generators.md index fb627052..b53048f9 100644 --- a/building/tooling/test-generators.md +++ b/building/tooling/test-generators.md @@ -1,6 +1,6 @@ # Test Generators -A Test Generator is a track-specifc piece of software to automatically generate a practice exercise's tests. +A Test Generator is a track-specific piece of software to automatically generate a practice exercise's tests. It does this by converting the exercise's JSON test cases to tests in the track's language. ## Benefits From 424471b4cc92f0913c592bb0223a64838588376a Mon Sep 17 00:00:00 2001 From: Erik Schierboom Date: Wed, 17 Apr 2024 15:34:45 +0200 Subject: [PATCH 11/20] Update building/tooling/test-generators.md Co-authored-by: mk-mxp <55182845+mk-mxp@users.noreply.github.com> --- building/tooling/test-generators.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/building/tooling/test-generators.md b/building/tooling/test-generators.md index b53048f9..d7a5b51a 100644 --- a/building/tooling/test-generators.md +++ b/building/tooling/test-generators.md @@ -50,7 +50,7 @@ We've found that the code-based approach will lead to fairly complex Test Genera What we recommend is the following flow: -1. Reads the exercise's canonical data +1. Read the exercise's canonical data 2. 
Exclude the test cases that are marked as `include = false` in the exercise's `tests.toml` file 3. Convert the exercise's canonical data into a format that can be used in a template 4. Pass the exercise's canonical data to an exercise-specific template From 92e2074a57d8ab69f9961435245418e89a011f13 Mon Sep 17 00:00:00 2001 From: Erik Schierboom Date: Wed, 17 Apr 2024 15:34:50 +0200 Subject: [PATCH 12/20] Update building/tooling/test-generators.md Co-authored-by: mk-mxp <55182845+mk-mxp@users.noreply.github.com> --- building/tooling/test-generators.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/building/tooling/test-generators.md b/building/tooling/test-generators.md index d7a5b51a..3da8e746 100644 --- a/building/tooling/test-generators.md +++ b/building/tooling/test-generators.md @@ -57,7 +57,7 @@ What we recommend is the following flow: The key benefit of this setup is that each exercise has its own template, which: -- Makes it obvious how the test files is generated +- Makes it obvious how the test files are generated - Makes them easier to debug - Makes it safe to edit them without risking breaking another exercise From f0a954f5e27858f003ad66fce61486a8d1addf70 Mon Sep 17 00:00:00 2001 From: Erik Schierboom Date: Wed, 17 Apr 2024 15:34:55 +0200 Subject: [PATCH 13/20] Update building/tooling/test-generators.md Co-authored-by: mk-mxp <55182845+mk-mxp@users.noreply.github.com> --- building/tooling/test-generators.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/building/tooling/test-generators.md b/building/tooling/test-generators.md index 3da8e746..9f502e96 100644 --- a/building/tooling/test-generators.md +++ b/building/tooling/test-generators.md @@ -84,7 +84,7 @@ This file is defined in the [exercism/problem-specifications repo](https://githu ```exercism/note Not all exercises have a `canonical-data.json` file. 
-If case they don't, you'll need to manually create the tests, as there is no data for the Test Generator to work with. +In case they don't, you'll need to manually create the tests, as there is no data for the Test Generator to work with. ``` #### Structure From 34bf0b9ff62044ced69df3508ff076a0fbf92464 Mon Sep 17 00:00:00 2001 From: Erik Schierboom Date: Wed, 17 Apr 2024 15:35:27 +0200 Subject: [PATCH 14/20] Update building/tooling/test-generators.md Co-authored-by: mk-mxp <55182845+mk-mxp@users.noreply.github.com> --- building/tooling/test-generators.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/building/tooling/test-generators.md b/building/tooling/test-generators.md index 9f502e96..d2bf1782 100644 --- a/building/tooling/test-generators.md +++ b/building/tooling/test-generators.md @@ -155,7 +155,7 @@ If your track does not support grouping tests, you'll need to: The contents of the `input` and `expected` test case keys vary widely. In most cases, they'll be scalar values (like numbers, booleans or strings) or simple objects. -However, occasionally you'll also find more complex values that will likely require a bit or preprocessing, such as lambdas in pseudo code, lists of operations to perform on the students code and more. +However, occasionally you'll also find more complex values that will likely require a bit of preprocessing, such as lambdas in pseudo code, lists of operations to perform on the students code and more. 
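As a hedged sketch of what such preprocessing could look like, here is a Python helper that renders simple canonical-data values as literals for a hypothetical C-style target language; the mapping is invented for illustration, and genuinely complex values (pseudo-code lambdas, operation lists) would still need bespoke handling:

```python
def to_literal(value):
    """Render a canonical-data value as a source literal for a
    hypothetical C-style target language (illustrative mapping)."""
    if value is None:
        return "null"
    if isinstance(value, bool):  # check bool before int: bool is an int subclass
        return "true" if value else "false"
    if isinstance(value, (int, float)):
        return str(value)
    if isinstance(value, str):
        return '"' + value.replace('"', '\\"') + '"'
    if isinstance(value, list):
        return "{" + ", ".join(to_literal(v) for v in value) + "}"
    # Pseudo-code lambdas, operation lists etc. need hand-written handling.
    raise ValueError(f"no automatic literal for: {value!r}")
```

A template can then call this helper instead of embedding per-type logic itself.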
#### Scenarios From 8401d05e3da59d7552d99e684b2ee2304f362a38 Mon Sep 17 00:00:00 2001 From: Erik Schierboom Date: Wed, 17 Apr 2024 15:35:32 +0200 Subject: [PATCH 15/20] Update building/tooling/test-generators.md Co-authored-by: mk-mxp <55182845+mk-mxp@users.noreply.github.com> --- building/tooling/test-generators.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/building/tooling/test-generators.md b/building/tooling/test-generators.md index d2bf1782..6f8196df 100644 --- a/building/tooling/test-generators.md +++ b/building/tooling/test-generators.md @@ -195,7 +195,7 @@ To help keep the templates simple, it might be useful to do a little pre-process - Create the exercise files for a new exercise: run `bin/configlet create --practice-exercise ` - Sync the `tests.toml` file of an existing exercise: run `bin/configlet sync --tests --update --exercise ` -- Fetch the exercise's canonical data to disk (this is a side-effect or either of the above commands) +- Fetch the exercise's canonical data to disk (this is a side-effect of either of the above commands) This makes `configlet` a great tool to use in combination with the Test Generator for some really powerful workflows. From 7e671b24bcc583fc10e7d3ede77e3a4714e0dead Mon Sep 17 00:00:00 2001 From: Erik Schierboom Date: Wed, 17 Apr 2024 15:35:42 +0200 Subject: [PATCH 16/20] Update building/tooling/test-generators.md Co-authored-by: mk-mxp <55182845+mk-mxp@users.noreply.github.com> --- building/tooling/test-generators.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/building/tooling/test-generators.md b/building/tooling/test-generators.md index 6f8196df..f6e38e38 100644 --- a/building/tooling/test-generators.md +++ b/building/tooling/test-generators.md @@ -247,7 +247,7 @@ Only when you have that working should you gradually add more exercises. And try to keep the Test Generator as simple as it can be. 
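The Minimum Viable Product described here (read the canonical data, pass it straight to a template) can be sketched in a few lines of Python using the standard library's `string.Template`; the Nim-flavoured template and all names are illustrative, not any track's real generator:

```python
import json
from string import Template

# Per-exercise template; the Nim-flavoured output is illustrative.
CASE_TEMPLATE = Template('''  test "$description":
    check isLeapYear($year) == $expected
''')

def generate_tests(canonical_data):
    lines = ['suite "{}":'.format(canonical_data["exercise"].capitalize())]
    for case in canonical_data["cases"]:
        lines.append(CASE_TEMPLATE.substitute(
            description=case["description"],
            year=case["input"]["year"],
            expected="true" if case["expected"] else "false",
        ))
    return "\n".join(lines)

# A minimal, invented payload standing in for canonical-data.json.
data = json.loads('''
{
  "exercise": "leap",
  "cases": [
    {
      "description": "year not divisible by 4 in common year",
      "property": "leapYear",
      "input": { "year": 2015 },
      "expected": false
    }
  ]
}
''')

# Prints a test suite matching the shape of the Nim example shown earlier.
print(generate_tests(data))
```

Everything beyond this (case filtering, nesting, preprocessing) can be layered on incrementally once the basic loop works.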
```exercism/note -Ideally, a contributor could you just/paste/modify an existing template without having to understand how the Test Generator works internally. +Ideally, a contributor could just paste/modify an existing template without having to understand how the Test Generator works internally. ``` ## Using or contributing From de831ea8baa1667be5beefbe9776f01852ec094b Mon Sep 17 00:00:00 2001 From: Erik Schierboom Date: Wed, 17 Apr 2024 15:35:47 +0200 Subject: [PATCH 17/20] Update building/tracks/new/implement-tooling.md Co-authored-by: mk-mxp <55182845+mk-mxp@users.noreply.github.com> --- building/tracks/new/implement-tooling.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/building/tracks/new/implement-tooling.md b/building/tracks/new/implement-tooling.md index 17d6afe5..bab2f645 100644 --- a/building/tracks/new/implement-tooling.md +++ b/building/tracks/new/implement-tooling.md @@ -52,7 +52,7 @@ Track tooling is usually (mostly) written in the track's language. ```exercism/caution While you're free to use additional languages, each additional language will make it harder to find people that can maintain or contribute to the track. -We recommend using the track's language where possible, when it makes maintaining or contributing easier. +We recommend using the track's language where possible, because it makes maintaining or contributing easier. ``` ## Deployment From baf500078587759517b9736667b3f5e8b4411a68 Mon Sep 17 00:00:00 2001 From: Erik Schierboom Date: Wed, 17 Apr 2024 15:35:54 +0200 Subject: [PATCH 18/20] Update building/tooling/README.md Co-authored-by: mk-mxp <55182845+mk-mxp@users.noreply.github.com> --- building/tooling/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/building/tooling/README.md b/building/tooling/README.md index 7dd70728..245a626d 100644 --- a/building/tooling/README.md +++ b/building/tooling/README.md @@ -46,5 +46,5 @@ Track tooling is usually (mostly) written in the track's language. 
 ```exercism/caution
 While you're free to use additional languages, each additional language will make it harder to find people that can maintain or contribute to the track.
-We recommend using the track's language where possible, when it makes maintaining or contributing easier.
+We recommend using the track's language where possible, because it makes maintaining or contributing easier.
 ```

From dbaaa5247a7baad386ec9ed527b3b8b3019638fc Mon Sep 17 00:00:00 2001
From: Erik Schierboom
Date: Wed, 17 Apr 2024 15:45:21 +0200
Subject: [PATCH 19/20] Clarify nesting

---
 building/tooling/test-generators.md | 58 +++++++++++++++++++++++++++--
 1 file changed, 55 insertions(+), 3 deletions(-)

diff --git a/building/tooling/test-generators.md b/building/tooling/test-generators.md
index f6e38e38..7db02615 100644
--- a/building/tooling/test-generators.md
+++ b/building/tooling/test-generators.md
@@ -141,15 +141,67 @@ The structure of the `canonical-data.json` file is [well documented](https://git
 
 ##### Nesting
 
-Some exercises use nesting, where `cases` are nested in other `cases` keys.
-Only the innermost (leaf) `cases` will actually have any test cases, their parent `cases` will only ever be used for grouping.
+Some exercises use nesting in their canonical data.
+This means that each element in a `cases` array can be either:
+
+1. A regular test case (no child test cases)
+2. A grouping of test cases (one or more child test cases)
+
+```exercism/note
+You can identify the type of an element by checking for the presence of fields that are exclusive to one type of element.
+Probably the best way to do this is using the `"cases"` key, which is only present in test case groups.
+``` + +Here is an example of nested test cases: + +````json +{ + "cases": [ + { + "uuid": "e9c93a78-c536-4750-a336-94583d23fafa", + "description": "data is retained", + "property": "data", + "input": { + "treeData": ["4"] + }, + "expected": { + "data": "4", + "left": null, + "right": null + } + }, + { + "description": "insert data at proper node", + "cases": [ + { + "uuid": "7a95c9e8-69f6-476a-b0c4-4170cb3f7c91", + "description": "smaller number at left node", + "property": "data", + "input": { + "treeData": ["4", "2"] + }, + "expected": { + "data": "4", + "left": { + "data": "2", + "left": null, + "right": null + }, + "right": null + } + } + ] + } + ] +} +``` ```exercism/caution If your track does not support grouping tests, you'll need to: - Traverse/flatten the `cases` hierarchy to end up with only the innermost (leaf) test cases - Combine the test case description with its parent description(s) to create a unique test name -``` +```` #### Input and expected values From 48590c68a9ee5b2b4e10819eaebb245d70be685f Mon Sep 17 00:00:00 2001 From: Erik Schierboom Date: Thu, 18 Apr 2024 07:49:48 +0200 Subject: [PATCH 20/20] Update building/tooling/test-generators.md Co-authored-by: mk-mxp <55182845+mk-mxp@users.noreply.github.com> --- building/tooling/test-generators.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/building/tooling/test-generators.md b/building/tooling/test-generators.md index 7db02615..b66c289c 100644 --- a/building/tooling/test-generators.md +++ b/building/tooling/test-generators.md @@ -154,7 +154,7 @@ Probably the best way to do this is using the `"cases"` key, which is only prese Here is an example of nested test cases: -````json +```json { "cases": [ {