Benchmarks #112
Open
ethomson wants to merge 11 commits into main from ethomson/benchmark
Conversation
Allow the resulting application names to be configured by the user, instead of hardcoding `clar_suite.h` and `clar.suite`. This configuration also customizes the struct names (`clar_func`, etc.). In addition, allow the test name prefix to be configured by the user instead of hardcoding `test_`. This lets users generate test functions with uniquely prefixed names, for example when generating benchmark code instead of unit tests.
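For illustration, a suite generated with a hypothetical `benchmark_` prefix might look roughly like the sketch below; the generator options, struct layout, and identifiers shown are assumptions, not the exact output of this change.

```c
/* Hypothetical generated output, assuming the user configured a "benchmark_"
 * function prefix in place of the default "test_"; names are illustrative. */
extern void benchmark_spline__reticulation(void);

static const struct clar_func _clar_cb_spline[] = {
    { "reticulation", &benchmark_spline__reticulation }
};
```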
Tests can now have optional metadata, provided as comments in the test definition. For example:

```c
void test_spline__reticulation(void)
/*[clar]: description="ensure that splines are reticulated" */
{
    ...
}
```

This description is preserved and produced as part of the summary XML.
ethomson force-pushed the ethomson/benchmark branch from f59b798 to 8c24def on January 19, 2025 at 15:18.
Move the elapsed time calculation to `counter.h`, and use high-resolution monotonic performance counters on all platforms.
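As a rough sketch of what a portable monotonic counter can look like (the actual `counter.h` interface in this change may differ; the `cl_perf_now` name is an assumption):

```c
/* Illustrative sketch only: a portable high-resolution monotonic time source.
 * The real counter.h may use different names and return types. */
#include <stdint.h>

#if defined(_WIN32)
# include <windows.h>

static double cl_perf_now(void)
{
    LARGE_INTEGER freq, now;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&now);
    return (double)now.QuadPart / (double)freq.QuadPart;
}
#else
# include <time.h>

static double cl_perf_now(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (double)ts.tv_sec + (double)ts.tv_nsec / 1e9;
}
#endif

/* elapsed time = cl_perf_now() at finish minus cl_perf_now() at start */
```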
Refactor the `ontest` callback (which implicitly fired when a test _finished_) into separate test started and test finished callbacks. This lets printers show the test name at start and its result at finish in two steps, so users can see which test is currently running during long test executions. In addition, rename `onsuite` to `suite_start` for consistency.
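A minimal sketch of the split callbacks, assuming hypothetical names and signatures (the real printer interface may differ):

```c
/* Illustrative sketch: a single "ontest" hook split into started/finished
 * hooks. Field names and signatures are assumptions, not the exact API. */
struct clar_report_hooks {
    void (*suite_start)(const char *suite_name);
    void (*test_start)(const char *suite_name, const char *test_name);
    void (*test_finished)(const char *suite_name, const char *test_name,
                          int failed, double elapsed_seconds);
};
```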
Allow tests to specify that they should have multiple runs. These runs all occur within a single initialization and cleanup phase, which is useful for repeatedly exercising the same thing as quickly as possible. The time for each run is recorded, which is useful for benchmarking that test.
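The exact way a test requests multiple runs isn't shown here; one plausible form, reusing the metadata comment syntax above with a hypothetical `runs` key, would be:

```c
/* Hypothetical: the "runs" metadata key is an assumption, shown only to
 * illustrate a test that executes 100 times inside one init/cleanup phase. */
void test_hash__throughput(void)
/*[clar]: runs=100 */
{
    /* body executes 100 times; each run's elapsed time is recorded */
}
```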
An application can provide _benchmarks_ instead of _tests_. Benchmarks can run multiple times; the time of each run is recorded, along with some simple summary statistics (mean, min, max, etc.). This information is displayed and can optionally be emitted in the summary output. Test hosts can indicate that they're benchmarks (not tests) by setting the mode before parsing the arguments, which switches the output and summary format types.
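A minimal sketch of those summary statistics, assuming per-run elapsed times have already been collected (names are illustrative, not the code in this change):

```c
/* Illustrative: compute mean/min/max over recorded per-run times.
 * Assumes count >= 1. */
#include <stddef.h>

struct clar_benchmark_stats {
    double mean, min, max;
};

static struct clar_benchmark_stats
summarize_runs(const double *elapsed, size_t count)
{
    struct clar_benchmark_stats s = { 0.0, elapsed[0], elapsed[0] };
    double total = 0.0;
    size_t i;

    for (i = 0; i < count; i++) {
        total += elapsed[i];
        if (elapsed[i] < s.min) s.min = elapsed[i];
        if (elapsed[i] > s.max) s.max = elapsed[i];
    }

    s.mean = total / (double)count;
    return s;
}
```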
In benchmark mode, when the number of runs is not explicitly specified in the test itself, run a reasonable number of iterations. We do this by measuring one run of the test, then using that timing to determine how many iterations fit within 3 seconds (with a minimum number of iterations to ensure that we get some data, and a maximum number to cope with poor timer precision on very fast tests). The 3 second budget and the 10 iteration minimum were chosen by consulting the hyperfine defaults.
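A sketch of that calibration, under the assumption of a hypothetical maximum-run cap (a maximum is described above, but its value is not stated here):

```c
/* Illustrative calibration: measure one run, then pick an iteration count
 * that fits a 3 second budget, clamped to a 10-run minimum (per the
 * description) and a hypothetical maximum cap. */
#include <stddef.h>

#define BENCH_TARGET_SECONDS 3.0
#define BENCH_MIN_RUNS       10
#define BENCH_MAX_RUNS       10000  /* assumption: cap value not stated above */

static size_t calibrate_runs(double one_run_seconds)
{
    size_t runs;

    if (one_run_seconds <= 0.0)
        return BENCH_MAX_RUNS;

    runs = (size_t)(BENCH_TARGET_SECONDS / one_run_seconds);

    if (runs < BENCH_MIN_RUNS)
        runs = BENCH_MIN_RUNS;
    if (runs > BENCH_MAX_RUNS)
        runs = BENCH_MAX_RUNS;

    return runs;
}
```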
For multi-run tests (benchmarks), we introduce a `reset` function. By default, the initialization is called at the start of each run of a test and the cleanup is called at its finish. A benchmark may instead wish to set up multi-run state once, at the beginning of the invocation (in initialization), and keep a steady state through all test runs. Users can now add a `reset` function so that initialization occurs only once, at the beginning of all runs.
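A sketch of a benchmark using such a hook, assuming the `reset` function follows clar's `__initialize`/`__cleanup` naming convention and is invoked between runs (both are assumptions based on the description above):

```c
/* Hypothetical benchmark using a reset hook; the "__reset" suffix and its
 * exact call points are assumptions based on the description above. */
#include <stdlib.h>
#include <string.h>

static char *buffer;

void test_copy__initialize(void)
{
    buffer = malloc(1024 * 1024);   /* set up once for all runs */
}

void test_copy__reset(void)
{
    memset(buffer, 0, 1024 * 1024); /* restore steady state between runs */
}

void test_copy__cleanup(void)
{
    free(buffer);                   /* tear down once after all runs */
}

void test_copy__throughput(void)
{
    /* measured body, executed many times */
}
```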
Well-written clar tests (those that clean up after themselves) are capable of running in benchmark mode, so provide that as an option.
ethomson force-pushed the ethomson/benchmark branch from 3cf1322 to 0290399 on January 19, 2025 at 21:48.
Introduce a simple benchmark system for clar. This is an extension of #74 that adds some additional capabilities.