clean-test issues
https://gitlab.science.ru.nl/clean-and-itasks/clean-test/-/issues

Issue #20: Add CI for different platforms (Camil Staps, 2020-10-02)
https://gitlab.science.ru.nl/clean-and-itasks/clean-test/-/issues/20

We should prevent build failures like the one currently caused by clean-platform#100. For starters, add CI for x64 Windows and x86 Linux. Build failures are annoying because they are urgent.

Issue #16: add function preconditions (Steffen Michels, 2019-10-11)
https://gitlab.science.ru.nl/clean-and-itasks/clean-test/-/issues/16

For `Math.Geometry.normalize` I had to use a precondition to make tests succeed:
```
/**
* Normalizes an angle.
*
* @param the angle to normalize
* @result the normalized angle
* @property normalized degree range: A.angle :: Angle:
* (abs deg <= toReal (maxint/365)) ==> (0.0 <=. degNorm /\ degNorm <=. 360.0)
* with
* deg = toDeg angle
* degNorm = toDeg (normalize angle)
* @property normalized radian range: A.angle :: Angle:
* (abs deg <= toReal (maxint/365)) ==> (0.0 <=. radNorm /\ radNorm <=. 2.0 * pi)
* with
* deg = toDeg angle
* radNorm = toRad (normalize angle)
* @property idempotence: A.angle :: Angle:
* (abs deg <= toReal (maxint/365)) ==> normalize angle ~~ normalize (normalize angle)
* with
* deg = toDeg angle
*/
normalize :: !Angle -> Angle
```
It'd be great to be able to state the precondition (`(abs deg <= toReal (maxint/365)) with deg = toDeg angle`) only once. That way the precondition is also clearly documented. It would have to be added to all properties of the function, and there are probably some pitfalls in doing this; a possible shape is sketched below.
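For instance, with a hypothetical `@precondition` field (the tag name and its semantics are only a suggestion here, not existing syntax), the documentation above could shrink to:
```
/**
 * Normalizes an angle.
 *
 * @param the angle to normalize
 * @result the normalized angle
 * @precondition abs (toDeg angle) <= toReal (maxint/365)
 * @property normalized degree range: A.angle :: Angle:
 *     0.0 <=. degNorm /\ degNorm <=. 360.0
 *     with degNorm = toDeg (normalize angle)
 * @property normalized radian range: A.angle :: Angle:
 *     0.0 <=. radNorm /\ radNorm <=. 2.0 * pi
 *     with radNorm = toRad (normalize angle)
 * @property idempotence: A.angle :: Angle:
 *     normalize angle ~~ normalize (normalize angle)
 */
normalize :: !Angle -> Angle
```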
For the old-style documentation maybe we can use `@param-precondition` after the `@param` it refers to. For the declarative-style documentation I'm not sure yet.

Issue #12: Allow spaces in @property names (Camil Staps, 2019-10-11)
https://gitlab.science.ru.nl/clean-and-itasks/clean-test/-/issues/12

Issue #13: Allow testing of class instances with @property (Camil Staps, 2019-10-11)
https://gitlab.science.ru.nl/clean-and-itasks/clean-test/-/issues/13

Issue #14: Allow testing of generic functions using @property (Camil Staps, 2019-10-11)
https://gitlab.science.ru.nl/clean-and-itasks/clean-test/-/issues/14

Issue #9: colour Passed/Failed/Skipped in human readable output (Steffen Michels, 2018-04-16)
https://gitlab.science.ru.nl/clean-and-itasks/clean-test/-/issues/9

This makes it easier to find failed tests.

Issue #15: Documentation: add example tests (Markus Klinik, 2020-09-26)
https://gitlab.science.ru.nl/clean-and-itasks/clean-test/-/issues/15

I'd like to write my own unit tests, but I'm having a hard time figuring out how to use the new test framework.
Some small but complete examples would be really helpful, ready to compile and run. I'm thinking of an example with a pure Clean function, one with Gast properties, and maybe one with iTasks.
Ideally these examples can be used as templates, so that when I start a new project, I copy them to a test subdirectory in my project, import my modules, and start testing.
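Something like this minimal module might serve as the pure-function/Gast template (a sketch only: it assumes Gast's classic `testn` runner, and `reverse` merely stands in for a function under test):
```
module ReverseTest

import StdEnv
import Gast

// Property: reversing a list twice yields the original list.
// A Bool-valued function over generatable arguments is Testable in Gast.
prop_doubleReverse :: [Int] -> Bool
prop_doubleReverse xs = reverse (reverse xs) == xs

// Run up to 1000 generated test cases and print the verdict.
Start = testn 1000 prop_doubleReverse
```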

Issue #11: Enhance bootstrap of generated test modules (Camil Staps, 2019-10-11)
https://gitlab.science.ru.nl/clean-and-itasks/clean-test/-/issues/11

Things that should be imported automatically:
- [x] The `toString` instance of `{#Char}` (for `name`)
- [x] The tested module itself
- [ ] `derive` of test generation, JSON encoding etc. for tested types (see the sketch below)
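A generated test module for, say, `Math.Geometry` could then begin like this (an illustrative sketch; the module names and the derived generics are assumptions based on the checklist above):
```
module Math_Geometry_Test

import Gast
import Text.GenJSON

// imported automatically by the bootstrap:
import Math.Geometry                            // the tested module itself
from StdString import instance toString {#Char} // for `name`

// derives the bootstrap should eventually generate as well:
derive ggen       Angle   // test-case generation
derive genShow    Angle   // showing counterexamples
derive JSONEncode Angle   // JSON encoding
```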

Issue #19: gDiff for JSONNode utterly broken (Camil Staps, 2019-10-11)
https://gitlab.science.ru.nl/clean-and-itasks/clean-test/-/issues/19

With https://gitlab.science.ru.nl/clean-and-itasks/iTasks-SDK/issues/19#note_49734 I realized that clean-test's gDiff for JSONNode is utterly broken: added/removed values are not indented properly. Fixing this is probably not possible, because you cannot recognize whether a JSON array is the representation of an ADT or of a list. We should probably just resort to the derived version.

Issue #7: give feedback before test programs terminate (Steffen Michels, 2019-10-11)
https://gitlab.science.ru.nl/clean-and-itasks/clean-test/-/issues/7

Currently, output is only produced once a test program has finished. It would be nice to see which tests failed or passed as soon as test programs produce output.

Issue #3: Hiding messages (Camil Staps, 2019-10-11)
https://gitlab.science.ru.nl/clean-and-itasks/clean-test/-/issues/3

It should be possible to hide certain messages:
- Start events
- Pass / Fail / Skip / Lost tests

Issue #5: improve readability of diff (Steffen Michels, 2019-10-11)
https://gitlab.science.ru.nl/clean-and-itasks/clean-test/-/issues/5

The readability of the diffs has to be improved to make them workable for complex and large values. I'll work on this, but let's first discuss the points, @cstaps. I hope that everything can be done by providing an adapted version of `diffToConsole :: [Diff] -> String`.
As a starting point, consider this property: `[1,2,3] =.= [1,0,3]`. The output currently is:
```
~(_Cons
1
~ (_Cons
- 2
+ 0
(_Cons
3
_Nil
)
~ )
~)
```
1) The +/- markers do not make much sense: I want to know which values are present on the left and which on the right side. I suggest using >/<.
```
~(_Cons
1
~ (_Cons
< 2
> 0
(_Cons
3
_Nil
)
~ )
~)
```
2) The closing brackets do not add much, because indentation already shows scope; they merely waste a line each. For large values a more compact view is preferable.
```
~_Cons
1
~ _Cons
< 2
> 0
_Cons
3
_Nil
```
3) This is obviously not ideal for lists. I am not sure whether we need a more general solution for specific data structures, but for lists I really need something in the short term, as they are so commonly used. A dirty solution would be to match on the constructor names and do something different for lists in `diffToConsole`; but maybe it is better to extend `gDiff`?
A possible proposal for printing this, which avoids a closing `]` on a separate line:
```
~[]
1
< 2
> 0
3
```
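The "dirty solution" from point 3 could look roughly like this (a sketch only: `DiffTree` is an invented stand-in for clean-test's actual `Diff` type, which may be shaped differently):
```
import StdEnv

// Hypothetical diff tree; the real `Diff` behind
// `diffToConsole :: [Diff] -> String` may differ.
:: DiffTree   = Node DiffStatus String [DiffTree]
:: DiffStatus = Common | Changed | OnlyLeft | OnlyRight

renderDiff :: Int DiffTree -> [String]
renderDiff depth (Node status name children)
    // match the list constructors by name and flatten the spine:
    // elements appear at one indentation level, without brackets
    | name == "_Cons" || name == "_Nil"
        = flatten (map (renderDiff depth) children)
    = [marker status +++ indent depth +++ name : flatten (map (renderDiff (depth + 1)) children)]
where
    marker Common    = "  "
    marker Changed   = "~ "
    marker OnlyLeft  = "< "
    marker OnlyRight = "> "

    indent 0 = ""
    indent n = "  " +++ indent (n - 1)
```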

Issue #1: Output format: human-readable CLI (Camil Staps, 2019-10-11)
https://gitlab.science.ru.nl/clean-and-itasks/clean-test/-/issues/1

Left to do:
- [ ] Show more information about failed tests (requires clean-platform!109)

Issue #2: Output format: iTasks app (Camil Staps, 2020-09-26)
https://gitlab.science.ru.nl/clean-and-itasks/clean-test/-/issues/2

Issue #18: output of child process may get lost (Steffen Michels, 2019-10-11)
https://gitlab.science.ru.nl/clean-and-itasks/clean-test/-/issues/18

There is a race condition in `redirect`: if the child process writes some data and terminates between the call to `readPipeBlockingMulti` and the call to `checkProcess`, that data is lost. I think `readPipeBlockingMulti` should be called one last time after the child process terminates, as sketched below.
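A minimal sketch of that fix, assuming clean-platform's `System.Process` API (the signatures of `readPipeBlockingMulti` and `checkProcess` are assumed here, and error handling is elided via `fromOk`):
```
import StdEnv
import System.Process
import Data.Error, Data.Maybe

// Read from the child's pipes until it has exited, then read once
// more so data written just before termination is not lost.
drain :: ![ReadPipe] !ProcessHandle !*World -> (![String], !*World)
drain pipes handle world
    # (mbOut, world)  = readPipeBlockingMulti pipes world
    # out             = fromOk mbOut
    # (mbExit, world) = checkProcess handle world
    | isNothing (fromOk mbExit)
        // child is still running: keep reading
        # (rest, world) = drain pipes handle world
        = (out ++ rest, world)
    // child has exited: one final read closes the race window
    # (mbLast, world) = readPipeBlockingMulti pipes world
    = (out ++ fromOk mbLast, world)
```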

Issue #4: re-run failed tests (Steffen Michels, 2019-10-11)
https://gitlab.science.ru.nl/clean-and-itasks/clean-test/-/issues/4

The test runner should be able to keep track of failed tests (in a tmp file?) and provide an option to re-run only the previously failed tests. For example:
```
$ ./runTests --rerun_failed
No previous results found, run all tests...
test1: Passed
test2: Failed (expected 3 got 2)
test3: Failed (expected 1 got 3)
There are 2 failing tests!
# after changing some code and adding a new test ...
$ ./runTests --rerun_failed
test2: Passed
test3: Failed (expected 1 got 4)
test4 (new): Failed (expected 0 got 4)
There are still 2 failing tests!
# after changing code some more...
$ ./runTests --rerun_failed
test3: Passed
test4: Passed
Repeat remaining tests...
test1: Passed
test2: Passed
All tests passed.
```
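The bookkeeping could be as simple as a line-per-test cache file; the file name and format below are invented for illustration:
```
# .clean-test-failed: hypothetical cache written after the previous run,
# one failed test per line
test2
test3
```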
For this we have to agree on a format to tell test programs which tests to run. I was thinking of a simple command line option `--exclude_test ...`. Excluding tests makes more sense if we assume that new tests can be added between runs.

Issue #8: run tests in parallel (Steffen Michels, 2020-09-26)
https://gitlab.science.ru.nl/clean-and-itasks/clean-test/-/issues/8

Add an option `-j N` to run at most N test programs in parallel. If we execute tests one by one (also see #6), this can also be done in a more fine-grained way, i.e. executing multiple tests from the same test program in parallel.

Issue #10: set return code to 1 in case tests fail (Steffen Michels, 2018-04-16)
https://gitlab.science.ru.nl/clean-and-itasks/clean-test/-/issues/10

Issue #6: "stop on first failed test" option (Steffen Michels, 2019-10-11)
https://gitlab.science.ru.nl/clean-and-itasks/clean-test/-/issues/6
With this option the test runner should stop testing at the first failed test. This is handy for interactive use, when the user typically only looks at and tries to fix the first problem anyway.
I guess the implementation requires a single call to the test program for each test; however, I don't see a real downside to this.
An alternative might be to add the option to `Testing.Options`, but then every test framework would have to implement it. If we can avoid that and keep the list of required options small, I think we should.
What do you think, @cstaps?

Issue #17: Too many dependencies (Bas Lijnse, 2019-10-11)
https://gitlab.science.ru.nl/clean-and-itasks/clean-test/-/issues/17

I tried to add the test tools to the nightly builds, but I can't get them to work (on all platforms) because of all the extra dependencies. If we want to use this test runner as the standard tool for testing the Clean packages, it should have minimal dependencies.