Camil Staps (9c791df6) at 13 Oct 10:24
This allows the test programmer to give possible reasons for test failure when counter-examples are found. For example usage, see clean-test-properties!7.
Steffen Michels (082bcacb) at 13 Oct 10:24
Merge branch 'add-possible-fail-reasons-for-counterexamples' into '...
... and 1 more commit
Camil Staps (9c791df6) at 11 Oct 21:38
Add possible fail reasons to counter-examples to give the user hints
Camil Staps (e6217df6) at 01 Oct 10:25
Closes #16
I used the program below to check for duplicates. It finds only nan, 2.2250738585072e-308, 16, 32, 64, 128, 256, and their negatives in the first 1,000,000 values; I think this is acceptable.
Main changes made:

- `quotients` no longer generates 1 and -1; these are added as constants at the top (they could also be added in `squareRoots`, but it makes sense to add them at the very start of the list).
- `quotients` no longer generates whole primes (i.e. the second list is now `prims`, not `[1:prims]`), because whole primes are generated in `squareRoots`.

These changes should only have removed duplicate values; however, `squareRoots` now has to generate the list up to their square.

```clean
import qualified Data.SetBy

Start = dups (take 1000000 (ggen{|*|} genState)) 'Data.SetBy'.newSet
where
	dups :: ![Real] !('Data.SetBy'.SetBy Real) -> [Real]
	dups [] _ = []
	dups [r:rs] seen
		| 'Data.SetBy'.memberBy lt r seen
			= [r:dups rs seen]
			// insert with the same ordering used by memberBy, so lookups are consistent
			= dups rs ('Data.SetBy'.insertBy lt r seen)

	lt a b
		| approximatelyEqual a b
			= False
			= a < b
```
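For readers unfamiliar with Clean, the duplicate check above can be sketched in Haskell. This is a hypothetical port, not the original program: the epsilon in `approximatelyEqual` is an assumption (the original's definition is not shown), and the `Approx` wrapper plays the role of the `lt` ordering, collapsing approximately-equal values so they count as duplicates.

```haskell
import qualified Data.Set as Set

-- Stand-in for the original's approximatelyEqual; the tolerance is assumed.
approximatelyEqual :: Double -> Double -> Bool
approximatelyEqual a b = abs (a - b) <= 1e-12 * max 1 (max (abs a) (abs b))

-- Wrapper whose ordering treats approximately-equal values as equal,
-- mirroring the `lt` comparator passed to memberBy/insertBy in Clean.
newtype Approx = Approx Double

instance Eq Approx where
  Approx a == Approx b = approximatelyEqual a b

instance Ord Approx where
  compare (Approx a) (Approx b)
    | approximatelyEqual a b = EQ
    | a < b                  = LT
    | otherwise              = GT

-- Values already seen before (up to approximate equality), in input order.
dups :: [Double] -> [Double]
dups = go Set.empty
  where
    go _    []     = []
    go seen (r:rs)
      | Set.member (Approx r) seen = r : go seen rs
      | otherwise                  = go (Set.insert (Approx r) seen) rs
```

Running `dups` over the first million generated values is then a one-liner, just as in the Clean program.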
I see especially many 1s and -1s (using bent generation). I'm looking at the string representation, so it may be that the binary representation is different or there is a difference so small that it is not reflected in the string representation, but in any case more diversity would be good.
Steffen Michels (a072aa5d) at 01 Oct 10:25
Merge branch '16-more-unique-values-for-ggen-of-Real' into 'master'
... and 2 more commits
Camil Staps (e6217df6) at 01 Oct 10:20
Cleanup ggen{|Real|}
Camil Staps (7a0dbac6) at 01 Oct 10:14
Generate less duplicates in ggen{|Real|}; document this instance
Maybe I don't understand the point, but isn't this why primes are used?
Ah, that makes sense. Please add a comment to explain why primes are used.
I think adding 1.0 and -1.0 once and filtering them out in the list you quoted would be a good solution.
This does, however, not really solve the problem, as combinations with equal quotients are still generated (e.g. `1/2`, `2/4`, `4/8`, ...). Maybe filtering out all combinations for which the `gcd` is not `1` would be a good solution?
I will make the change for 1.0 and -1.0 and then have a look whether there are still other duplicates.
I think the problem is this line in the `Real` instance of `ggen`:

```clean
[r \\ x <- diag [1:prims] [1:prims] (\n d.toReal n/toReal d), r <- [x,~ x]]
```

If both numbers are equal, the quotient is `1`. Bent generation prefers diversity in the values of each of the combined dimensions over combinations of values, so it makes sense that equal numbers are combined more often than with skewed generation.
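To make the effect concrete: whenever the two lists fed to the pair generator are the same, every pair of equal elements contributes the quotient 1. A small Haskell sketch (using a plain cross product as a stand-in for the pairs `diag` eventually enumerates; the names here are made up for illustration):

```haskell
-- All quotients n/d over pairs drawn from the same list.
quotientsOf :: [Integer] -> [Double]
quotientsOf xs = [ fromInteger n / fromInteger d | n <- xs, d <- xs ]

-- Each element paired with itself contributes the quotient 1, so a list
-- of k distinct values yields k occurrences of 1 among k*k quotients.
countOnes :: [Integer] -> Int
countOnes xs = length [ q | q <- quotientsOf xs, q == 1 ]
```

Since bent generation favors revisiting each dimension's values, these diagonal (equal, equal) pairs come up early and often, which matches the observed excess of 1s and -1s.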
I'm not sure about the rationale of the generation function, as there is no documentation, so it's hard to propose an alternative. A straightforward way to improve the current definition would be to add `1.0`/`-1.0` once and filter out combinations of equal numbers. This does, however, not really solve the problem, as combinations with equal quotients are still generated (e.g. `1/2`, `2/4`, `4/8`, ...). Maybe filtering out all combinations for which the `gcd` is not `1` would be a good solution?
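The suggested gcd filter could look like this in Haskell. This is a sketch under assumptions: the real `diag` in the library enumerates pairs in a different (bent or skewed) order, and `coprimeQuotients` is an invented name, not library code. Keeping only coprime pairs means each rational value is produced at most once: `1/1` survives, while `p/p`, `2/4`-style duplicates never arise.

```haskell
-- Infinite list of primes by trial division; enough for illustration.
primes :: [Integer]
primes = sieve [2 ..]
  where sieve (p:ns) = p : sieve [n | n <- ns, n `mod` p /= 0]

-- Quotients over [1:prims] x [1:prims], keeping only coprime pairs so
-- every rational appears exactly once. The simple anti-diagonal
-- enumeration below stands in for the library's diag.
coprimeQuotients :: [Double]
coprimeQuotients =
  [ fromInteger n / fromInteger d
  | (n, d) <- pairs
  , gcd n d == 1
  ]
  where
    xs    = 1 : primes
    pairs = [ (xs !! i, xs !! (k - i)) | k <- [0 ..], i <- [0 .. k] ]
```

Because the two lists contain 1 and distinct primes, the gcd filter only discards the `p/p` pairs, so little generation effort is wasted.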
Camil Staps (3eb255b3) at 19 Sep 14:47
This does not change ESMVizTool, which is outdated (#3).
Camil Staps (c9512140) at 19 Sep 14:47
Merge branch 'new-maybe-type' into 'master'
... and 1 more commit
Camil Staps (3eb255b3) at 19 Sep 14:46
Use new maybe type