When using class based tests, setUp is not called for each hypothesis test #59
You're right, this is working as intended but needs to be better documented. There's actually a feature for this, but I've now realised I haven't documented that either. Sorry. If you use instead:

```python
class TestHypothesis(unittest.TestCase):
    def setup_example(self):
        self.test_set = set()
```

This will be called before each example runs, rather than before the whole function. I'll leave this open until I've sorted the documentation.
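For readers skimming this thread, the split between the two hooks can be sketched without Hypothesis at all. The runner below is a toy stand-in (not Hypothesis internals; all names are illustrative) that mimics the behaviour described above: `setUp` runs once per test method, `setup_example` runs once per example:

```python
class TestHypothesisSketch:
    def setUp(self):
        # unittest runs this once per test *method*
        self.setup_calls = 0

    def setup_example(self):
        # per this thread, Hypothesis runs this once per *example*
        self.test_set = set()
        self.setup_calls += 1


def drive(case, body, examples):
    """Toy stand-in for the property-testing example loop."""
    case.setUp()                  # once, like unittest does
    for ex in examples:
        case.setup_example()      # fresh state before every example
        body(case, ex)


case = TestHypothesisSketch()
drive(case, lambda c, x: c.test_set.add(x), [1, 2, 3])
print(case.setup_calls)   # 3: one setup_example call per example
print(case.test_set)      # {3}: only the last example's value survives
```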
Okay, then I'll be using `setup_example`.
Is there a corresponding hook when using pytest fixtures? A function-scoped fixture is only set up once per test function, not once per example:

```python
from __future__ import division

import pytest
from hypothesis import given
import hypothesis.strategies as st


@pytest.fixture(scope='function')
def stateful():
    return set()


@given(x=st.integers())
def test_foo(stateful, x):
    if stateful:
        raise ValueError("function-scoped fixture reused!")
    stateful.add('kitten-fur gloves')
    assert x < 1000
```
Sadly, no. As far as I can tell there's no way for me to integrate with py.test fixtures at the per-example level.
@thedrow the problem is that Hypothesis won't know how many tests it's going to run during the collection phase, it'll only know while running the tests. See pytest-dev/pytest#916
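The timing mismatch described here can be seen without pytest or Hypothesis. A stdlib-only sketch (all names hypothetical) of a function-scoped fixture resolved once per test call but then shared across the whole example loop:

```python
def function_scoped_fixture():
    # pytest resolves a function-scoped fixture once per test *call*
    return set()


def run_property_test(n_examples):
    stateful = function_scoped_fixture()   # resolved once, up front...
    failures = 0
    for x in range(n_examples):            # ...but the example loop reuses it
        if stateful:
            failures += 1                  # "function-scoped fixture reused!"
        stateful.add('kitten-fur gloves')
    return failures


print(run_property_test(5))   # 4: every example after the first sees stale state
```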
@DRMacIver, thanks for mentioning `setup_example`. When you sort the documentation, a good example use of `setup_example` would be welcome.
It would be nice if hypothesis actually wrapped each example in setUp() and tearDown(). For example, consider this mixed test case:

```python
class MixedTest(TestCase):
    def test_something_without_hypothesis(self):
        pass

    @given(foos())
    def test_something_with_hypothesis(self, foo):
        pass
```

As a developer, I want to wrap each test in my setup and teardown, but how I do so depends on how the test is implemented. If the test uses hypothesis, I use `(setup|teardown)_example()`. If the test doesn't use hypothesis, I use `(setUp|tearDown)`. IMO, whether a test uses hypothesis or not should be an implementation detail. This is currently leading to hacks like this:

```python
class HypothesisTestCase(django.TestCase):
    def setup_example(self):
        self._pre_setup()

    def teardown_example(self, example):
        self._post_teardown()

    def __call__(self, result=None):
        testMethod = getattr(self, self._testMethodName)
        if getattr(testMethod, u'is_hypothesis_test', False):
            return unittest.TestCase.__call__(self, result)
        else:
            return dt.SimpleTestCase.__call__(self, result)
```

The problem will occur when adding hypothesis to any 3rd-party test case that contains setUp and tearDown logic.
There's a UX angle to this. As a total newbie with Hypothesis, I was surprised that `setUp` behaves differently in these two tests:

```python
from unittest import TestCase

from hypothesis import given
from hypothesis import strategies


class TestFoo(TestCase):
    def setUp(self):
        self.foo = 'meow'

    @given(strategies.none())
    def test_with_hypo(self, nada):
        assert self.foo == 'meow'
        self.foo = 'kitten'

    def test_no_hypo(self):
        assert self.foo == 'meow'
        self.foo = 'kitten'
```
Same issue and surprise here--I expected `setUp` to be called before each example.
I scribbled down "docs" when I saw this email at 5am. I don't think we document this behaviour anywhere (or if we do, I can't find it) - getting it documented would be a good first step.
Agreed. That is the exact use case I am using it for, and I would appreciate a "best practice" example.
I stumbled over the issue again just now. It may be OK to use `setup_example`, but why does everybody cook their own soup?
I was just writing some tests for a custom data structure and had a class-based test with a `setUp` function that initialized a fresh instance for each test (so I don't have to copy the code in each test). Some tests would fail randomly in most, but not all, executions. After investigating this I found that `setUp` was simply not called for every hypothesis test, which resulted in a "dirty" data structure and in turn made some tests fail if the values came in the "wrong" order.

Here is an example:

If I run this, I get the following output:

The reason for this is obvious: since `setUp` was only called once, the data structure got "dirty". I'm not really sure if this is something that needs to be fixed, but I think it needs to be documented that you should not use `setUp` in this way (which in my opinion is perfectly fine) when using hypothesis.
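The example and its output referenced in the issue body didn't survive the copy above. A stdlib-only sketch of the failure mode (no Hypothesis involved; all names are hypothetical) might look like:

```python
class DedupTest:
    """Mimics a unittest.TestCase whose setUp runs once per test
    method, while the test body is driven many times with examples."""

    def setUp(self):
        self.seen = set()   # intended to be fresh for every example

    def check_is_new(self, x):
        # Passes only if x was never seen before; with self.seen
        # shared across examples, repeated values fail spuriously.
        assert x not in self.seen, f"{x} already in 'dirty' set"
        self.seen.add(x)


t = DedupTest()
t.setUp()                  # unittest calls this once...
t.check_is_new(1)          # first example: fine
try:
    t.check_is_new(1)      # repeated example: fails on stale state
    leaked = False
except AssertionError:
    leaked = True
print(leaked)              # True: state leaked between examples
```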