feat: implement test suites on computed metrics #16
Conversation
moreirathomas left a comment
Really nice! It offers a great solution to the feature request 🙂
Question: do we continue supporting template outputs?
If templates were used to answer the need for testing capabilities, this is probably a better replacement?
| "strings" | ||
| "time" | ||
|
|
||
| "github.com/benchttp/engine/internal/cli/ansi" |
Nitpick: I'm not sure about this dependency.
runner/internal/... depends on internal/cli/..., yet internal/cli depends on runner.
It is OK because they are different packages, but I still find it strange since ansi is nested in cli.
We may argue that it is strange that a core package (report) has io/visualization concerns.
Not a deal breaker for the PR, though.
> We may argue that it is strange that a core package (report) has io/visualization concerns.

I agree, this is the root of the problem to me. It's OK to have a Report.String, but it should not be specific to a CLI with ANSI codes.
Probably a good candidate for refactoring in #26.
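One way to break the report → internal/cli/ansi dependency discussed above is to have the core package render through a small interface, with the CLI injecting the ANSI implementation. This is a minimal sketch, not the engine's actual API: `Styler`, `PlainStyler`, `ANSIStyler` and `renderTestResult` are all hypothetical names.

```go
package main

import (
	"fmt"
	"strings"
)

// Styler abstracts output decoration so that a core package such as
// report does not need to import internal/cli/ansi directly.
// The interface and both implementations are hypothetical.
type Styler interface {
	Success(s string) string
	Failure(s string) string
}

// PlainStyler leaves text untouched (e.g. for server/JSON contexts).
type PlainStyler struct{}

func (PlainStyler) Success(s string) string { return s }
func (PlainStyler) Failure(s string) string { return s }

// ANSIStyler decorates text with ANSI color codes (CLI context).
type ANSIStyler struct{}

func (ANSIStyler) Success(s string) string { return "\033[32m" + s + "\033[0m" }
func (ANSIStyler) Failure(s string) string { return "\033[31m" + s + "\033[0m" }

// renderTestResult shows how a Report-like String method could take
// the styler as a parameter instead of hard-coding ansi calls.
func renderTestResult(name string, pass bool, st Styler) string {
	var b strings.Builder
	if pass {
		b.WriteString(st.Success("PASS"))
	} else {
		b.WriteString(st.Failure("FAIL"))
	}
	b.WriteString(" " + name)
	return b.String()
}

func main() {
	fmt.Println(renderTestResult("minimum response time", true, PlainStyler{}))
	fmt.Println(renderTestResult("maximum response time", false, ANSIStyler{}))
}
```

With this shape, the report package only knows the interface, and the ansi-specific code stays in internal/cli.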
moreirathomas left a comment
Great refactoring since last viewed 👍
- create package runner/internal/tests
- add result to server response (auto)
- display summary in CLI
- delegate metrics comparison logic to package metrics
- remove usage of metric getter
- use tests.Case as config.Global.Tests input
- adapt TestPredicate to new tests API
- several renamings:
  - metrics.Metric -> metrics.Source
  - tests.Input -> tests.Case
  - tests.SingleResult -> tests.CaseResult
- e.g. durations: "85675992" -> "85.675992ms"
- fix: PassValues and FailValues were inverted for comparison cases
Before:
- `a.Compare(b) == compare(a, b)`
- `0.Compare(1) == compare(0, 1) == SUP`

Now:
- `a.Compare(b) == compare(b, a)`
- `0.Compare(1) == compare(1, 0) == INF`

The new order makes more sense because `a` remains the main interest of the operation: when calling `a.Compare(b)` we want to get information about `a` (when compared to `b`).
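The new semantics can be sketched as follows. This is an illustrative standalone example, not the engine's code: the `ComparisonResult` type, the `INF`/`EQ`/`SUP` constants and the `Value` type are assumed names based on the changelog wording.

```go
package main

import "fmt"

// ComparisonResult mirrors the INF/EQ/SUP outcomes mentioned in the
// changelog; the type and constant names are assumptions.
type ComparisonResult int

const (
	INF ComparisonResult = iota - 1 // receiver is inferior to the argument
	EQ                              // receiver equals the argument
	SUP                             // receiver is superior to the argument
)

func (r ComparisonResult) String() string {
	return map[ComparisonResult]string{INF: "INF", EQ: "EQ", SUP: "SUP"}[r]
}

type Value int

// Compare reports how the receiver a compares to b: the result is
// about a, so 0.Compare(1) == INF (0 is inferior to 1).
func (a Value) Compare(b Value) ComparisonResult {
	switch {
	case a < b:
		return INF
	case a > b:
		return SUP
	default:
		return EQ
	}
}

func main() {
	fmt.Println(Value(0).Compare(1)) // INF: the receiver is the subject
	fmt.Println(Value(1).Compare(0)) // SUP
}
```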
- remove helper metricGetter
- some renamings
- add documenting comments
- implement errorutil.WithDetails, replace local implementations
- fix: for field tests[n].target, JSON did not accept int values (but YAML did, as they were read as strings)
Description
Some exploration regarding specs and implementation of formal test suite runs, without the use of `template.Specs`.
Features
- declare test cases in the configuration (`name`, `field`, `predicate`, `target` are required)
- validate `field`, `predicate` and `target` regarding the chosen `field`

Error handling
Error messages for each situation
CLI run
Server run
{ "request": { "url": "https://example.com" }, "runner": { "requests": 5, "concurrency": 1 }, "tests": [ { "name": "minimum response time", "field": "MIN", "predicate": "GT", "target": "80ms" }, { "name": "maximum response time", "field": "MAX", "predicate": "LTE", "target": "120ms" }, { "name": "100% availability", "field": "FAILURE_COUNT", "predicate": "EQ", "target": "0" }, ] }{ "Metrics": {}, "Metadata": {}, "Tests": { "Pass": false, "Results": [ { "Input": { "Name": "minimum response time", "Field": "MIN", "Predicate": "GT", "Target": 80000000 }, "Pass": true, "Summary": "want MIN > 80ms, got 83.392463ms" }, { "Input": { "Name": "maximum response time", "Field": "MAX", "Predicate": "LTE", "Target": 120000000 }, "Pass": false, "Summary": "want MAX <= 120ms, got 433.415742ms" }, { "Input": { "Name": "100% availability", "Field": "FAILURE_COUNT", "Predicate": "EQ", "Target": 0 }, "Pass": true, "Summary": "want FAILURE_COUNT == 0, got 0" } ] } }Demo
Changes
Linked issues
Notes