Replies: 1 comment
-
IterationSetup/Cleanup is your only option. Unfortunately it's not great for micro/nano-benchmarks. I proposed a solution a while back (#1782), but it was rejected. You can work around it by manually increasing the number of operations (see that issue), but it's not portable.
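For example, here's a rough sketch of that approach (the `Retrieve`/`Result` names are just placeholders for whatever you're actually measuring and checking):

```csharp
using System;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class RetrievalBenchmark
{
    private Result _result;

    [Benchmark]
    public Result Retrieve()
    {
        // Only this method is measured.
        _result = DoRetrieve();
        return _result;
    }

    [IterationCleanup]
    public void Verify()
    {
        // Runs after each measured iteration and is excluded from the timings.
        // Caveat from above: with IterationSetup/Cleanup, BenchmarkDotNet uses a
        // single invocation per iteration unless you set InvocationCount manually,
        // which is why it's a poor fit for micro/nano-benchmarks.
        if (_result == null || !_result.IsValid)
            throw new InvalidOperationException("Verification failed");
    }

    // Placeholder for the real retrieval logic.
    private static Result DoRetrieve() => new Result { IsValid = true };

    public class Result { public bool IsValid { get; set; } }
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<RetrievalBenchmark>();
}
```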
Each benchmark is run in isolation, so order does not matter.
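If you do want to benchmark the individual steps separately, you can make them independent by producing the intermediate data in `[GlobalSetup]` instead of relying on one benchmark having run before the other. A sketch (again, `Step1`/`Step2`/`Intermediate` are made-up names standing in for your own code):

```csharp
using BenchmarkDotNet.Attributes;

public class PipelineBenchmarks
{
    private Intermediate _step1Output;

    [GlobalSetup]
    public void Setup()
    {
        // Prepare Step2's input once, outside of any measurement,
        // so its benchmark doesn't depend on the Step1 benchmark running first.
        _step1Output = Step1();
    }

    [Benchmark]
    public Intermediate BenchmarkStep1() => Step1();

    [Benchmark]
    public bool BenchmarkStep2() => Step2(_step1Output);

    // Placeholders for the real steps.
    private static Intermediate Step1() => new Intermediate();
    private static bool Step2(Intermediate input) => input != null;

    public class Intermediate { }
}
```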
-
I've looked through all the docs I can find, and maybe I'm just looking for the wrong phrase or something simple like that, but I can't seem to find what I'm looking for. I have some code that I'm benchmarking, but I also want to verify some facts about the results. That verification should not factor into the benchmark measurements, though. Is there any way to draw a line in the code to say "everything up to here is what you were supposed to be measuring, but now I'm going to do some extra work", other than putting my assertions in the IterationCleanup? Or is that the proper place to put this sort of code?
Alternatively, is there a way to specify the order that the benchmarks should run in? I wouldn't mind benchmarking the retrieval and verification steps separately, but the original benchmark must be executed first. Basically, I want to benchmark the individual parts of a multi-step process. Unit tests should not be order dependent, but this isn't really a unit test, is it? I want to be able to check code before and after changes in order to know that while I may have improved step 1, it was at the expense of making step 2 worse. The key is that I have to know, for sure, that step 1 ran before step 2.