I have a complex set of integration tests that uses WWW::Mechanize in Perl to drive the web app. Depending on the specific combination of data, there are more than 20 subroutines that make up the test logic, loop through the data, and so on. Each test run executes multiple test subroutines against different datasets.
The web app is not bug-free: many tests fail because of very specific combinations of data, but those combinations are rare enough that our team won't bother fixing the bugs for a long time; building new features takes priority.
So what should I do with the failing tests? It's several dozen tests failing on a few combinations of data. 1) I can't leave them failing, because then the entire test suite fails. 2) If I comment them out, we'll forget to run those tests against all the other datasets. 3) I could add a flag to each dataset that fails, and skip the test when the flag is set, but then I'm passing extra flags around everywhere in my test subroutines.
What is the easiest and cleanest way to handle this? Or are "easy" and "clean" mutually exclusive here?
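For reference, option 3 could be sketched with Test::More's SKIP block, keyed off a per-dataset flag. This is a minimal sketch: the `%known_bad` hash and the dataset names are hypothetical stand-ins for the real data.

```perl
use strict;
use warnings;
use Test::More tests => 2;

# Hypothetical flag table marking the known-bad data combinations.
my %known_bad = ( rare_combo => 1 );

for my $dataset ( 'common_combo', 'rare_combo' ) {
    SKIP: {
        # Skip 1 test when the dataset is flagged as a known-bad combination.
        skip "dataset $dataset is a known-bad combination", 1
            if $known_bad{$dataset};

        ok( 1, "checks pass for $dataset" );    # stand-in for the real checks
    }
}
```

The drawback, as noted above, is that the flag (or the `%known_bad` lookup) has to be threaded through every test subroutine.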
This is exactly what TODO tests are for.
With a TODO block, the tests inside are expected to fail. The tests will run normally, but print special flags indicating they are "todo"; Test::Harness will interpret the failures as being ok. Should any of them succeed, it will be reported as an unexpected success; you then know that the thing you had to do is done, and can remove the TODO flag.
The nice part about TODO tests, as opposed to simply commenting out a block of tests, is that it's like having a programmatic todo list. You know how much work is left to do, you're aware of which bugs you have, and you'll know immediately when they're fixed.
Once a TODO test starts succeeding, move it outside the block. When the block is empty, delete it.
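A minimal sketch of the TODO block described above, using Test::More. The `run_search` helper and the bug description are hypothetical placeholders for your real WWW::Mechanize-driven subroutines.

```perl
use strict;
use warnings;
use Test::More tests => 2;

# Hypothetical stand-in for a real test subroutine; pretend the
# rare data combination triggers the known bug.
sub run_search {
    my ($dataset) = @_;
    return $dataset ne 'rare_combo';
}

ok( run_search('common_combo'), 'search works for common data' );

TODO: {
    # "local our" keeps $TODO strict-safe and scoped to this block.
    local our $TODO = 'search fails for rare data combinations';

    # This test runs normally, but its failure is reported as ok;
    # a pass would be flagged as an unexpected success.
    ok( run_search('rare_combo'), 'search works for rare data' );
}
```

When the bug is fixed, the second `ok` shows up as an unexpected success in the harness output, which is your cue to lift it out of the TODO block.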