Add benchmark testing to Python and other tracks (?) #121
I really wish I knew why the Go track decided to add benchmarks. All I can see is that they were added in exercism/exercism#1416 without explanation (that I can find), but that in exercism/go@9fd6892 an explanation was retrofitted: "it's idiomatic in Go to think about performance." What do other languages say?
There have been times when I wished for benchmarks in other languages, especially for the problems for which there are multiple approaches.
I do think benchmarks are useful, but in most languages they are complicated to run and require either external dependencies or a self-made bench-runner. In Erlang it would be easy to measure the execution time of a single function call; I could just as well run it in a loop a thousand times or more and report an average or median time. But I would have to either write the necessary functions for every exercise, or add a library to the project file that most people don't need, since they either don't care about benchmarks or never discover them.

Another thing that bugs me: it is hard to actually compare benchmarks. Take Go as an example. Someone brags about his benchmark times in a comment. You have the feeling that your algorithm is much tidier and should be faster as well, so you run your benches locally. OK, about the same result. But what were his machine's specs? So you pull his source code and run it against the bench. What was that command to pull other people's solutions again? Ah yes, it's written down here. Copy, paste, run. OK, on my machine his algorithm takes about four times longer. That tells me nothing except that his algorithm is slow on my machine; what about the other way around? Maybe his code gets better optimisations on his side but mine doesn't, because he uses ARM instead of x86. Maybe he was just out of luck, his runs were slowed down on my machine by solar flares, and they would be much faster tomorrow.
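The roll-your-own approach described above (time a single function call in a loop and report an average or median) can be sketched in a few lines. This is an illustrative Python version, not code from any track; the `bench` helper and its signature are invented for this example.

```python
# Minimal sketch of a hand-rolled bench-runner: call a function many
# times, record each call's wall-clock duration, and summarise.
# The `bench` helper is hypothetical, not part of any Exercism track.
import statistics
import time


def bench(func, *args, runs=1000):
    """Call func(*args) `runs` times; return (mean, median) seconds per call."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        func(*args)
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples), statistics.median(samples)


mean_s, median_s = bench(sorted, list(range(1000)))
print(f"mean: {mean_s * 1e6:.1f} us, median: {median_s * 1e6:.1f} us")
```

The median is often the more robust summary here, since a single GC pause or OS scheduling hiccup can badly skew the mean.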
Yeah, I've experienced what @NobbZ describes and have spent time puzzling over the disparity between someone else's benchmark times and my own. Though it's interesting to compare one's own iterations, especially with significant changes to an algorithm.
After reading exercism/discussions#121 I thought it might be beneficial to add something to the README regarding benchmarking on different machines. This led, as these things do, to a small, opinionated rewrite of the xgo style section.
I think this is useful. I actually think that benchmarks are useful in two ways:
I think the discussions about different machines and architectures are probably useful for people to have. The fact that a benchmark depends on the machine it's running on, as well as on the data you put into it, is something that I think not everyone is aware of.
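The data-dependence point above is easy to demonstrate: the same function can benchmark very differently on different inputs. A small Python illustration (the specific numbers will vary by machine, which is exactly the other half of the point):

```python
# The same function, two input distributions, very different timings:
# Python's sort (Timsort) is adaptive and runs much faster on
# already-sorted input than on shuffled input of the same size.
import random
import timeit

data_sorted = list(range(100_000))
data_shuffled = data_sorted[:]
random.shuffle(data_shuffled)

for name, data in [("sorted input", data_sorted), ("shuffled input", data_shuffled)]:
    t = timeit.timeit(lambda: sorted(data), number=20)
    print(f"{name}: {t / 20 * 1e3:.2f} ms per sort")
```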
I like the idea, and you are very welcome in the Python track!
I'm going to go ahead and close this and leave it up to maintainers whether or not they want to add benchmarks. If we do add benchmarks, we should emphasize that they're primarily a bonus feature to use as a basis for exploration, not an indication that performance is the most important concern.
A friend of mine who has completed the Go track pointed out that there are benchmarks in addition to unit tests for Go, but not for Python and some other languages. I am currently working through the Python track and thought it would be great if it also had benchmark testing.
I would be happy to help on this if it's something Exercism folks would like to move forward on.
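For a sense of what a Python-track analogue of Go's `go test -bench` could look like using only the standard library, here is a hedged sketch. The exercise and the `reverse_string` solution are illustrative stand-ins, not real track files, and `timeit.Timer.autorange` (Python 3.6+) picks a call count automatically, much as Go's benchmark runner scales `b.N`.

```python
# Hypothetical benchmark file for a Python-track exercise, loosely
# analogous to a Go benchmark. Only the standard library is used.
# `reverse_string` stands in for a student's solution under test.
import timeit


def reverse_string(s):
    return s[::-1]


if __name__ == "__main__":
    # autorange() repeats the statement until the run takes ~0.2s,
    # then returns (number of calls, total elapsed seconds).
    n, total = timeit.Timer(
        "reverse_string('benchmark' * 100)",
        globals=globals(),
    ).autorange()
    print(f"{total / n * 1e6:.2f} us per call ({n} calls)")
```

Third-party tools such as pytest-benchmark would give nicer statistics and comparison output, but would add exactly the kind of dependency the discussion above is wary of.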