Good, bad, angry: testing in a novice project



Preface: at university we were given an assignment: assemble a scrum team, pick a project and work on it over the semester. Our team chose to develop a web application (React + Flask). In this article I will try to describe what the tests should have looked like and analyze what we actually did on the backend.



Expectations


Tests are needed, first of all, to convince everyone (including yourself) that the program behaves as it should in the situations covered by the tests. Secondly, they guarantee that the code covered by tests keeps working in the future. Writing tests is a useful process in itself: while doing it you often stumble upon problem areas, recall edge cases, notice issues with interfaces, and so on.


When developing any system, you should keep in mind at least three types of tests:


  • Unit tests: tests that verify that individual functions do the right thing.
  • Integration tests: tests that verify that several functions work correctly together.
  • System tests: tests that verify that the system as a whole does what it is supposed to do.

One of the posts on the Google Testing Blog gives a table with the characteristics of these three types of tests, calling them "Small", "Medium" and "Large".



Unit Tests


Unit tests correspond to "small" tests: they should be fast and check only the correctness of specific parts of the program. They should not touch the database and should not run in a complex multi-threaded environment. They verify compliance with specifications, and they are often given the role of regression tests.
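
For example, a unit test for one of the project's validators can be this small (a sketch: the import path is an assumption, while validate_not_empty and its return value are taken from the project's validators shown later in the article):


  # A minimal unit test: a pure function, no database, no application context.
  # The import path is an assumption; validate_not_empty and its return value
  # come from the project's validators shown later in the article.
  from mobius.validators import validate_not_empty


  def test__validate_not_empty__error_on_empty():
      assert validate_not_empty('email', '') == ('email is empty',)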


Integration tests


Integration tests are tests that may involve several modules and functions. They take more time and may require a special environment. They are needed to make sure that individual modules and functions are able to work with each other. In other words, unit tests verify that real interfaces match the expected ones, while integration tests verify that functions and modules interact with each other correctly.


System Tests


This is the highest level of automated testing. System tests verify that the system as a whole works, that its parts perform their tasks and interact with each other correctly.
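
As an illustration, a system test might look like the sketch below: the application is started separately (for example, against a test database), and the test talks to it only over HTTP. The base URL is an assumption; the endpoint and payload are the ones from the registration test shown later in the article.


  # A sketch of a system test: the whole application is already running,
  # and the test interacts with it over HTTP only.
  # The base URL is an assumption for illustration.
  import requests


  def test_user_reg_end_to_end():
      response = requests.post('http://localhost:5000/api/user.reg', json={
          'email': 'name@mail.ru',
          'password': 'password1',
          'first_name': 'Name',
          'last_name': 'Last Name',
      })
      assert response.status_code == 200
      assert response.json()['code'] == 0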


Why the types matter


Usually, as a project grows, so does its code base. The automated checks take longer and longer, and maintaining a large number of integration and system tests becomes harder and harder. Developers are therefore faced with the task of keeping the number of necessary tests to a minimum. To do this, use unit tests wherever possible and reduce the number of integration tests with the help of mocks.
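
For instance, a check that would otherwise need a database can often be turned into a unit test by patching the query. All names in this sketch (app.users, find_by_email, get_user) are hypothetical and used only for illustration:


  # A sketch of shrinking an integration test into a unit test with a mock:
  # the database query is replaced, so only the logic of get_user runs.
  # app.users, find_by_email and get_user are hypothetical names.
  from unittest.mock import patch

  from app.users import get_user


  @patch('app.users.find_by_email', return_value=None)
  def test__get_user__none_for_unknown_email(find_mock):
      assert get_user('unknown@mail.ru') is None
      find_mock.assert_called_once_with('unknown@mail.ru')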


Reality


Typical API test


  def post_json(client, url, data):
      # Helper: send a JSON request and decode the JSON body of the response.
      return json.loads(
          client.post(url, json=data, content_type='application/json').data
      )


  def test_user_reg(client):
      response = client.post('api/user.reg', json={
          'email': 'name@mail.ru',
          'password': 'password1',
          'first_name': 'Name',
          'last_name': 'Last Name'
      })

      data = json.loads(response.data)

      assert data['code'] == 0

The official Flask documentation provides a ready-made recipe for initializing a test application and creating a database, so this test does work with a database. It is neither a unit test nor a system test: it is an integration test that uses a test application together with a database.
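
Roughly, that recipe boils down to a couple of pytest fixtures in conftest.py. The sketch below assumes an application factory create_app and a SQLAlchemy db object; the actual names in the project may differ.


  # conftest.py - roughly the recipe from the Flask testing documentation.
  # create_app and db are assumptions about this project's application
  # factory and SQLAlchemy instance; the real names may differ.
  import pytest

  from mobius import create_app, db


  @pytest.fixture
  def app():
      app = create_app()
      app.config.update(
          TESTING=True,
          SQLALCHEMY_DATABASE_URI='sqlite:///:memory:',
      )
      with app.app_context():
          db.create_all()
          yield app
          db.drop_all()


  @pytest.fixture
  def client(app):
      return app.test_client()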


Why integration and not unit? Because handling the request involves interaction with Flask, with the ORM and with our business logic. Handlers act as the glue between the other parts of the project, so writing unit tests for them is neither easy (the database and the internal logic would have to be replaced with mocks) nor particularly worthwhile (integration tests would check similar aspects anyway: "were the necessary functions called?", "was the data received correctly?", and so on).


Names and grouping of tests


  def test_not_empty_errors():
      assert validate_not_empty('email', '') == ('email is empty',)
      assert validate_not_empty('email', '') == ('email is empty',)
      assert validate_email_format('email', "") == ('email is empty',)
      assert validate_password_format('pass', "") == ('pass is empty',)
      assert validate_datetime('datetime', "") == ('datetime is empty',)

All the conditions for "small" tests are met here: the behavior of functions without dependencies is checked against the expected results. But the design raises questions.


A good practice is to write tests that focus on one specific aspect of the program. In this example, several different functions are checked: validate_not_empty, validate_email_format, validate_password_format, validate_datetime. The checks are grouped by the expected result rather than by the object under test.


The test name (test_not_empty_errors) does not describe the object under test (which function is being checked), only the result (the "is empty" errors). This method should have been called something like test__validate_not_empty__error_on_empty: such a name describes both what is being tested and what result is expected. The same applies to almost every test name in the project, because no time was spent agreeing on a naming convention for tests.


Regression tests


  def test_datetime_errors():
      assert validate_datetime('datetime', '0123-24-31T;431') == ('datetime is invalid',)
      assert validate_datetime('datetime', '2018-10-18T20:21:21+-23:1') == ('datetime is invalid',)

      assert validate_datetime('datetime', '2015-13-20T20:20:20+20:20') == ('datetime is invalid',)
      assert validate_datetime('datetime', '2015-02-29T20:20:20+20:20') == ('datetime is invalid',)
      assert validate_datetime('datetime', '2015-12-20T25:20:20+20:20') == ('datetime is invalid',)
      assert validate_datetime('datetime', '2015-12-20T20:61:20+22:20') == ('datetime is invalid',)
      assert validate_datetime('datetime', '2015-12-20T20:20:61+20:20') == ('datetime is invalid',)
      assert validate_datetime('datetime', '2015-12-20T20:20:20+25:20') == ('datetime is invalid',)
      assert validate_datetime('datetime', '2015-12-20T20:20:20+20:61') == ('datetime is invalid',)
      assert validate_datetime('datetime', '2015-13-35T25:61:61+61:61') == ('datetime is invalid',)

Initially this test consisted of the first two asserts. Then a bug was discovered: instead of validating the date, only a match against a regular expression was checked, so 9999-99-99 was considered a valid date. The developer fixed it. Naturally, after fixing a bug you should add tests to prevent a regression in the future. But instead of adding a new test whose name would explain why it exists, the extra checks were appended to this one.


What should the new test holding these checks be called? Probably test__validate_datetime__error_on_bad_datetime.
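
With pytest.mark.parametrize such a regression test also reports each bad date separately. In the sketch below the values are taken from the test above, and the import path is an assumption:


  # A sketch of a separate regression test with a descriptive name.
  # The import path is an assumption; the dates come from the test above.
  import pytest

  from mobius.validators import validate_datetime


  @pytest.mark.parametrize('value', [
      '2015-13-20T20:20:20+20:20',  # month out of range
      '2015-02-29T20:20:20+20:20',  # 2015 is not a leap year
      '2015-12-20T25:20:20+20:20',  # hour out of range
  ])
  def test__validate_datetime__error_on_bad_datetime(value):
      assert validate_datetime('datetime', value) == ('datetime is invalid',)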


Ignoring tools


  def test_get_providers():
      class Tmp:
          def __init__(self, id_external, token, username):
              self.id_external = id_external
              self.token = token
              self.username = username

      ...

Tmp? This is a hand-written stand-in for an object that is not itself used in this test. The developer does not seem to know about @patch and MagicMock from unittest.mock. There is no need to complicate the code by solving problems naively when more suitable tools exist.
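
The same stand-in object could be built in one line with MagicMock (the attribute values here are placeholders):


  # A sketch of replacing the hand-written Tmp class with MagicMock.
  # The attribute values are placeholders for illustration.
  from unittest.mock import MagicMock

  provider = MagicMock(id_external=1, token='token', username='username')

  assert provider.token == 'token'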


There is also a test that initializes services (in the database) and uses the application context.


  def test_get_posts(client):
      def fake_request(*args, **kwargs):
          return [one, two]

      handler = VKServiceHandler()
      handler.request = fake_request

      services_init()

      with app.app_context():
          posts = handler.get_posts(None)

      assert len(posts) == 2
  

Work with the database and the application context can be excluded from this test simply by adding a single @patch:


  @ patch ("mobius.services.service_vk.Service")
 def test_get_posts (mock):
  def fake_request (* args, ** kwargs):
  return [one, two]

  handler = VKServiceHandler ()
  handler.request = fake_request

  posts = handler.get_posts (None)

  assert len ​​(posts) == 2  

Results


  • To develop high-quality software you need to write tests, at the very least to make sure that you wrote what you intended.
  • For large information systems tests are even more important: they help avoid unwanted interface changes and the return of old bugs.
  • For the tests not to turn into a pile of strange methods over time, agree on a naming convention, stick to good practices and keep the number of tests to a minimum.
  • Unit tests can be a great tool during development: they can be run after every small change to make sure nothing is broken.

A very important note: tests do not guarantee that the program works or that there are no bugs. Tests only make sure that the actual result of the program (or a part of it) matches the expected one, and only for the aspects that the tests were written for. Therefore, when building a quality product, do not forget about other kinds of testing.
