README.md: update xgo style section
After reading exercism/discussions#121 I
thought it might be beneficial to add something to the README regarding
benchmarking on different machines. This led, as these things do, to a
small, opinionated rewrite of the xgo style section.
robphoenix committed Feb 27, 2017
1 parent 4441c22 commit 7d24551
Showing 1 changed file with 84 additions and 44 deletions.
in the x-common repository. This describes how all the language tracks are put together,
the common metadata, and high-level information about contributing to existing problems and adding new problems.
In particular, please read the [Pull Request Guidelines](https://github.com/exercism/x-common/blob/master/CONTRIBUTING.md#pull-request-guidelines) before opening a pull request.

## Xgo style

Let's walk through the first exercise, *hello-world*. Navigate into the *hello-world*
directory and you'll see there are three files there: *example.go*, *hello_test.go*,
and *hello_world.go*.

```sh
~/exercism/go/hello-world
$ tree
.
├── example.go
├── hello_test.go
└── hello_world.go

0 directories, 3 files
```

### Example solutions

*example.go* is a reference solution. It is a valid solution that [Travis](https://travis-ci.org/exercism/xgo),
the CI (continuous integration) service, can run the tests against. Solvers generally
will not see it, though: files with *"example"* in the file name are skipped by
the `exercism fetch` command. Because of this, there is less need for this code
to be a model of style, expression, and readability, or to use the best algorithm.
Examples can be plain, simple, concise, even naïve, as long as they are correct.
The test file, though, is fetched for the solver and deserves attention for
consistency and appearance.

### Tests

*hello_test.go* uses a data-driven test. Test cases are defined as data, then a
test function iterates over the data. Identifiers within the test function appear in
actual-expected order as described at [Useful Test Failures](https://github.com/golang/go/wiki/CodeReviewComments#useful-test-failures).
Here the identifier *"observed"* is used instead of *"actual"*. That's fine. More
common are the words *"got"* and *"want"*; they are clear and short. Note that
[Useful Test Failures](https://github.com/golang/go/wiki/CodeReviewComments#useful-test-failures)
is part of [Code Review Comments](https://github.com/golang/go/wiki/CodeReviewComments).
Really, we like most of the advice on that page.
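The shape of a data-driven test can be sketched like this. It is condensed into a
plain program for illustration: the real version lives in *hello_test.go* and uses
`*testing.T`, and the `HelloWorld` here is a hypothetical stand-in for the solver's
solution.

```go
package main

import "fmt"

// HelloWorld is a hypothetical stand-in for the function under test.
func HelloWorld(name string) string {
	if name == "" {
		name = "World"
	}
	return "Hello, " + name + "!"
}

func main() {
	// Test cases are defined as data...
	cases := []struct {
		description, input, want string
	}{
		{"no name given", "", "Hello, World!"},
		{"a name given", "Gopher", "Hello, Gopher!"},
	}
	// ...then a single loop runs them all, reporting in got/want order.
	for _, c := range cases {
		got := HelloWorld(c.input)
		if got != c.want {
			fmt.Printf("%s: got %q, want %q\n", c.description, got, c.want)
			return
		}
	}
	fmt.Println("all cases pass")
}
```

Adding a new case is then just a matter of adding a line of data, which is exactly
why we prefer this style.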

In Go we generally have all tests enabled and do not ask the solver to edit the
test file, for example to enable progressive tests. `t.Fatalf`, as seen
in [*hello_test.go*](https://github.com/exercism/xgo/blob/master/exercises/hello-world/hello_test.go#L26),
stops the test run at the first failure encountered, so the solver is not faced with
too many failures all at once.

We like errors in Go. It's not idiomatic Go to ignore invalid data or have undefined
behavior. Sometimes our Go tests require an error return where other language
tracks don't.
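As a hypothetical sketch of what "requiring an error" looks like (both the
function and the error condition here are invented for illustration, not taken
from any real exercise):

```go
package main

import (
	"errors"
	"fmt"
)

// IsLeapYear reports whether y is a leap year, and returns an error for
// years before the Gregorian calendar. Hypothetical, for illustration only.
func IsLeapYear(y int) (bool, error) {
	if y < 1582 {
		return false, errors.New("year predates the Gregorian calendar")
	}
	return y%4 == 0 && (y%100 != 0 || y%400 == 0), nil
}

func main() {
	// Invalid input yields an explicit error instead of undefined behavior.
	if _, err := IsLeapYear(1000); err != nil {
		fmt.Println("error:", err)
	}
}
```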

### Benchmarks

In most test files there will also be benchmark tests, for example in the
[clock exercise](https://github.com/exercism/xgo/blob/master/exercises/clock/clock_test.go#L75-L91).
In Go, benchmarking is a first-class citizen of the testing package. We throw in
benchmarks because they're interesting, and because it is idiomatic in Go to think
about performance. There is no critical use for these though. Usually they will
just bench the combined time to run over all the test data rather than attempt
precise timings on single function calls. They are useful if they let the solver
try a change and see a performance effect. Keep in mind, though, that benchmark
results reflect the machine they run on: the same solution benchmarked on machines
with different specs can produce quite different numbers. So don't always expect
to match the results of another reviewer on exercism.io.
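The pattern can be sketched like this, condensed into a plain program via
`testing.Benchmark` (in a real exercise it is a `func BenchmarkXxx(b *testing.B)`
in the `_test.go` file; `IsLeapYear` and its test data are hypothetical):

```go
package main

import (
	"fmt"
	"testing"
)

// IsLeapYear is a hypothetical function to benchmark.
func IsLeapYear(y int) bool {
	return y%4 == 0 && (y%100 != 0 || y%400 == 0)
}

func main() {
	// As in xgo benchmarks, we time one pass over all the test data,
	// not a precise timing of a single call.
	years := []int{1960, 1996, 1997, 2000, 2100}
	result := testing.Benchmark(func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			for _, y := range years {
				IsLeapYear(y)
			}
		}
	})
	fmt.Println(result) // the ns/op figure will differ from machine to machine
}
```

Running this twice on the same machine, or once each on two different machines,
makes the variability obvious.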

### Stub files

Finally there is the stub file `hello_world.go`, provided as a starting point for
solutions. Not all exercises need to do this; stubs are most helpful in the early
exercises for newcomers to Go. By convention, the stub file for an exercise with
slug `exercise-slug` must be named `exercise_slug.go`. This is because CI needs
to delete stub files to avoid conflicting definitions.

### Problem Versioning

Each problem defines a `const targetTestVersion` in its test file and validates
that the solution has defined a matching value, `testVersion`. Any xgo developer
who changes the test file or test data must increment `targetTestVersion`.

The benefit of all this is that reviewers can see which test version a posted
solution was written for and be spared confusion over why an old posted solution
might not pass current tests.

Notice that neither the `testVersion` nor the `targetTestVersion` is exported.
This is so that golint will not complain about a missing comment. In general,
adding tests for unexported names is considered an anti-pattern, but in this
case the trade-off seems acceptable.
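The handshake looks roughly like this, condensed into one file for illustration.
In a real exercise the two constants live in different files, and the check in
the test file fails the suite with `t.Fatalf`:

```go
package main

import "fmt"

// In the test file: the version the tests were written against.
const targetTestVersion = 2

// In the solution file (and in example.go): the version it targets.
const testVersion = 2

func main() {
	// The test file performs the equivalent of this check, so a stale
	// solution fails immediately with a clear message.
	if testVersion != targetTestVersion {
		fmt.Printf("found testVersion = %v, want %v\n", testVersion, targetTestVersion)
		return
	}
	fmt.Println("test versions match")
}
```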

## Generating test cases
