Software Unit Testing

General Information

Kinds of software tests

As you know, software testing includes many different forms of tests; here are some of the most important:

  • Unit tests
  • Integration tests
  • Regression tests
  • Acceptance tests
  • Performance tests

In this documentation we are going to focus on unit tests.

Main concepts about unit testing

  • Unit testing is a method by which individual units of source code are tested to determine whether they work correctly. In other words, it is a strict, written contract that the piece of code must satisfy.
  • A unit is the smallest testable part of an application.
  • The goal of unit testing is to isolate each part of the program, and show that the individual parts are correctly working.

Clean code and unit tests

  • Use unit testing tools that ensure a fast linking process; "If your unit testing tool is slowing down the tests, you need a new tool."
  • Unit tests should link against smaller libraries, and the dependencies between those libraries should be unidirectional, from concrete to abstract, in order to shorten linking times.
  • Tests need to run fast, under the premise that "slow tests aren't run often enough."
  • You must keep thinking about how to keep the tests running fast, and refactor when the tests start getting slow.
  • It must be possible to run the unit tests in subsets, so you can run only the set of tests that could be affected by your feature/fix/refactor.

How to organize your unit tests

Here are a couple of hints to get a better suite of unit tests:

  • Unit tests should be created for all publicly exposed functions of a class:
  1. Free functions that are not declared static.
  2. Basically, all public functions of the class should be tested, including public constructors and operators.
  • Unit tests should cover all main paths through functions, including different branches of conditionals, loops, etc.
  • Unit tests should handle both trivial, and edge/corner cases, providing wrong and/or random data, this in order to be able to test error handling of the system.
  • Test cases should be combined into test suites by some criteria, e.g.:
  1. Common functionality.
  2. Different use cases for same functions, common fixtures, etc.
  • Test cases should be very short and easy to understand, given that unit test framework syntax is already complicated enough, i.e.:
  1. The test case should test only one thing.
  2. The test case should be short.
  3. Each test should work independently of other tests. A broken test shouldn't prevent other tests from executing.
  4. Tests shouldn't depend on the order of their execution.
  • Your unit tests should run fast, so it will be possible to run them very often.

On the other hand, the testability of code also depends on its design. Sometimes it's very hard to write unit tests because the functionality to be tested is hidden behind many interfaces, or there are so many dependencies that it's hard to set up the test correctly. Here are some suggestions on how code should be written to make writing unit tests for it easier:

  • Code should be loosely coupled: a class or function should have as few dependencies as possible.
  • Avoid creating instances of complex classes inside your class. It's better to pass pointers/references to these classes into your class/function, since this allows you to use mocking to test your code.
  • Try to minimize the public API provided by a class. It's better to write several classes that perform separate tasks than one class that does everything.
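To make the last two points concrete, here is a minimal sketch, assuming a hypothetical Clock interface and Report class (these names are illustrative, not from a real library). The dependency is passed in instead of being created internally, so a test can substitute a fake clock:

// Hypothetical example: Clock is a small interface, so tests can inject a fake.
class Clock
{
public:
  virtual ~Clock() {}
  virtual long nowSeconds() const = 0;
};

// Report receives its dependency instead of creating it internally.
class Report
{
public:
  explicit Report(const Clock &clock)
    : _clock(clock)
  {
    //Empty
  }

  bool isExpired(const long deadlineSeconds) const
  {
    return _clock.nowSeconds() > deadlineSeconds;
  }

private:
  const Clock &_clock;
};

A test can now derive a FakeClock from Clock that returns any fixed time, without touching the system clock.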

Mocks in unit testing

In a unit test, mock objects can simulate the behavior of complex, real (non-mock) objects. They are very useful when a real object is impractical or impossible to incorporate into a unit test, as often occurs in the world of embedded systems, where it is frequently impossible to simulate the behavior of physical devices. Therefore, if an object has any of the following characteristics, it may be useful to use a mock object instead:

  • Supplies non-deterministic results (e.g., the current time or current temperature).
  • Has states that are difficult to create or reproduce (e.g., a network error).
  • Is slow (e.g., a complete database, which would have to be initialized before the test).
  • Does not exist yet, or may change behavior.
  • Would have to include information and methods exclusively for testing purposes (and not for its actual task).

Note: It is important to clarify that mock objects have the same interface as the real objects they mimic; this allows the client object to remain unaware of whether it's using a real object or a mock object.

Mocking workflow

  • Create an interface for the class that you will test, so you can have both a mocked class and a real-world class.
  • Create a mocked class using some framework; this class must inherit from the interface created previously.
  • Identify the code that you want to test against the mocked object.
  • Create a test case which will use your mocked object instead of the real-world one. Inside this test case you must do the following:
  1. Create an instance of the mocked class.
  2. Set up the behavior and expectations related to the mocked object: what methods should be called (or not called), the data that will be returned for a particular call, etc.
  3. Run your code.
  4. Verify the correct functionality of your tests.

TDD: an alternative way to develop tests

Test-driven development (TDD) is a software development process that relies on the repetition of a very short development cycle (see Figure 1):

  • First the developer writes a failing test case that defines a desired improvement or new function.
  • Then produces code to pass the test.
  • And finally refactors the new code to acceptable standards.

Test-driven development is related to the test-first programming concepts of extreme programming, and is often linked to the agile programming approach. In its pure form, TDD has benefits, but it also has drawbacks; still, we can use some of its practices to improve the quality of code in our projects. Personally, I really like the concept of having a test suite for each specific requirement/feature of the project.

File:Capture.png
Figure 1. TDD activity diagram.

Unit Tests Frameworks

Kind of unit testing frameworks

These are some of the most common and robust unit-testing frameworks:

  • GoogleTest
  • CppUnit
  • Boost.Test
  • CppUnitLite
  • NanoCppUnit
  • Unit++
  • CxxTest

The main question here is: how do we choose a unit-testing framework? Well, it depends on what we're going to do with it and how we're going to use it. A good way to start is to create a list of features that are important for the type of work we expect to be doing, like the following one:

  1. Minimal amount of work needed to add new tests.
  2. Easy to modify and port.
  3. Supports setup/teardown steps (fixtures).
  4. Handles exceptions and crashes well.
  5. Good assert functionality.
  6. Supports suites.
  7. Supports mocking and/or has its own mock framework.

Why Google tests?

googletest helps you write better C++ tests.

googletest is a testing framework developed by the Testing Technology team with Google's specific requirements and constraints in mind. Whether you work on Linux, Windows, or a Mac, if you write C++ code, googletest can help you. And it supports any kind of tests, not just unit tests.

So what makes a good test, and how does googletest fit in?

  • Tests should be independent and repeatable. It's a pain to debug a test that succeeds or fails as a result of other tests. googletest isolates the tests by running each of them on a different object. When a test fails, googletest allows you to run it in isolation for quick debugging.
  • Tests should be well organized and reflect the structure of the tested code. googletest groups related tests into test suites that can share data and subroutines. This common pattern is easy to recognize and makes tests easy to maintain. Such consistency is especially helpful when people switch projects and start to work on a new code base.
  • Tests should be portable and reusable. Google has a lot of code that is platform-neutral; its tests should also be platform-neutral. googletest works on different OSes, with different compilers, with or without exceptions, so googletest tests can work with a variety of configurations.
  • When tests fail, they should provide as much information about the problem as possible. googletest doesn't stop at the first test failure. Instead, it only stops the current test and continues with the next. You can also set up tests that report non-fatal failures after which the current test continues. Thus, you can detect and fix multiple bugs in a single run-edit-compile cycle.
  • The testing framework should liberate test writers from housekeeping chores and let them focus on the test content. googletest automatically keeps track of all tests defined, and doesn't require the user to enumerate them in order to run them.
  • Tests should be fast. With googletest, you can reuse shared resources across tests and pay for the set-up/tear-down only once, without making tests depend on each other.

Going deeper into GoogleTest and GoogleMock

GoogleTest

Installing

First of all we need to install the gtest development package (steps for Ubuntu 16.04):

sudo apt-get install libgtest-dev
sudo apt-get install cmake # install cmake (just in case you don't have it)
cd /usr/src/gtest
sudo cmake CMakeLists.txt
sudo make
sudo cp *.a /usr/lib
sudo mkdir -p /usr/local/lib/gtest # make sure the symlink target directory exists
sudo ln -s /usr/lib/libgtest.a /usr/local/lib/gtest/libgtest.a
sudo ln -s /usr/lib/libgtest_main.a /usr/local/lib/gtest/libgtest_main.a
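With the libraries installed, you can compile and link a test binary. The following is a minimal sketch (myTestFile_test.cpp and myTests are placeholder names); googletest requires pthread, and libgtest_main provides a default main() if you don't write your own (see the Writing the main() Function section below):

g++ -std=c++11 myTestFile_test.cpp -lgtest_main -lgtest -pthread -o myTests
./myTests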

Assertions

googletest assertions are macros that resemble function calls. You test a class or function by making assertions about its behavior. When an assertion fails, googletest prints the assertion's source file and line number location, along with a failure message. You may also supply a custom failure message which will be appended to googletest's message.

The assertions come in pairs that test the same thing but have different effects on the current function:

  • ASSERT_* versions generate fatal failures when they fail, and abort the current function.
  • EXPECT_* versions generate nonfatal failures, which don't abort the current function.

Note: Usually EXPECT_* are preferred, as they allow more than one failure to be reported in a test. However, you should use ASSERT_* if it doesn't make sense to continue when the assertion in question fails.
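As a minimal sketch of this difference (the failing values are deliberate, purely to illustrate the reporting behavior):

#include <gtest/gtest.h>

TEST(AssertVsExpect, Sketch)
{
  EXPECT_EQ(1, 2);  // fails, but the test keeps running
  EXPECT_EQ(3, 3);  // still executed and checked
  ASSERT_EQ(1, 2);  // fails and aborts this test function
  EXPECT_EQ(4, 4);  // never reached
}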

Basic Assertions

These assertions do basic true/false condition testing:

Fatal assertion          | Nonfatal assertion       | Verifies
-------------------------|--------------------------|--------------------
ASSERT_TRUE(condition);  | EXPECT_TRUE(condition);  | condition is true
ASSERT_FALSE(condition); | EXPECT_FALSE(condition); | condition is false
Binary Comparison Assertions

This section describes assertions that compare two values:

Fatal assertion        | Nonfatal assertion     | Verifies
-----------------------|------------------------|-------------
ASSERT_EQ(val1, val2); | EXPECT_EQ(val1, val2); | val1 == val2
ASSERT_NE(val1, val2); | EXPECT_NE(val1, val2); | val1 != val2
ASSERT_LT(val1, val2); | EXPECT_LT(val1, val2); | val1 < val2
ASSERT_LE(val1, val2); | EXPECT_LE(val1, val2); | val1 <= val2
ASSERT_GT(val1, val2); | EXPECT_GT(val1, val2); | val1 > val2
ASSERT_GE(val1, val2); | EXPECT_GE(val1, val2); | val1 >= val2
String Comparison Assertions

The assertions in this group compare two C strings. If you want to compare two string objects, use EXPECT_EQ, EXPECT_NE, etc. instead:

Fatal assertion               | Nonfatal assertion            | Verifies
------------------------------|-------------------------------|----------------------------------------------------------
ASSERT_STREQ(str1, str2);     | EXPECT_STREQ(str1, str2);     | the two C strings have the same content
ASSERT_STRNE(str1, str2);     | EXPECT_STRNE(str1, str2);     | the two C strings have different contents
ASSERT_STRCASEEQ(str1, str2); | EXPECT_STRCASEEQ(str1, str2); | the two C strings have the same content, ignoring case
ASSERT_STRCASENE(str1, str2); | EXPECT_STRCASENE(str1, str2); | the two C strings have different contents, ignoring case
  • Note: "CASE" in an assertion name means that case is ignored. A NULL pointer and an empty string are considered different. *STREQ* and *STRNE* also accept wide C strings (wchar_t*). If a comparison of two wide strings fails, their values will be printed as UTF-8 narrow strings.
Exception Assertions

These are for verifying that a piece of code throws (or does not throw) an exception of the given type:

Fatal assertion                          | Nonfatal assertion                       | Verifies
-----------------------------------------|------------------------------------------|-------------------------------------------------
ASSERT_THROW(statement, exception_type); | EXPECT_THROW(statement, exception_type); | statement throws an exception of the given type
ASSERT_ANY_THROW(statement);             | EXPECT_ANY_THROW(statement);             | statement throws an exception of any type
ASSERT_NO_THROW(statement);              | EXPECT_NO_THROW(statement);              | statement doesn't throw any exception
  • Note: These assertions require exceptions to be enabled in the build environment.
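A minimal sketch using a standard-library call that is guaranteed to throw (std::vector::at throws std::out_of_range for an invalid index):

#include <vector>
#include <stdexcept>
#include <gtest/gtest.h>

TEST(ExceptionAssertions, Sketch)
{
  std::vector<int> v = {1, 2, 3};
  EXPECT_THROW(v.at(10), std::out_of_range);  // invalid index throws
  EXPECT_NO_THROW(v.at(0));                   // valid index doesn't throw
}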
Floating-Point Comparison Assertions

Comparing floating-point numbers is tricky. Due to round-off errors, it is very unlikely that two floating-point values will match exactly. Therefore, ASSERT_EQ's naive comparison usually doesn't work. And since floating-point values can have a wide value range, no single fixed error bound works. It's better to compare by a fixed relative error bound, except for values close to 0, due to the loss of precision there.

In general, for floating-point comparison to make sense, the user needs to carefully choose the error bound. If they don't want or care to, comparing in terms of Units in the Last Place (ULPs) is a good default, and googletest provides assertions to do this:

Fatal assertion               | Nonfatal assertion            | Verifies
------------------------------|-------------------------------|----------------------------------------
ASSERT_FLOAT_EQ(val1, val2);  | EXPECT_FLOAT_EQ(val1, val2);  | the two float values are almost equal
ASSERT_DOUBLE_EQ(val1, val2); | EXPECT_DOUBLE_EQ(val1, val2); | the two double values are almost equal
  • Note: By "almost equal" we mean the values are within 4 ULPs of each other.

The following assertions allow you to choose the acceptable error bound:

Fatal assertion                     | Nonfatal assertion                  | Verifies
------------------------------------|-------------------------------------|--------------------------------------------------------------------------
ASSERT_NEAR(val1, val2, abs_error); | EXPECT_NEAR(val1, val2, abs_error); | the difference between val1 and val2 doesn't exceed the given absolute error
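A minimal sketch (0.1 + 0.2 is the classic case where an exact comparison fails because of round-off error):

#include <gtest/gtest.h>

TEST(FloatingPointAssertions, Sketch)
{
  double sum = 0.1 + 0.2;
  // EXPECT_EQ(sum, 0.3) would likely fail: sum is 0.30000000000000004
  EXPECT_DOUBLE_EQ(sum, 0.3);   // almost equal, within 4 ULPs
  EXPECT_NEAR(sum, 0.3, 1e-9);  // explicit absolute error bound
}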

How to Develop Unit Tests

There are two different ways to write unit tests; both are presented below:

Simple Tests

To create a test:

  • Use the TEST() macro to define and name a test function. These are ordinary C++ functions that don't return a value.
TEST(TestSuiteName, TestName)
{
   ... test body ...
}
  • In this function, along with any valid C++ statements you want to include, use the various googletest assertions to check values.
  • The test's result is determined by the assertions; if any assertion in the test fails (either fatally or non-fatally), or if the test crashes, the entire test fails. Otherwise, it succeeds.

For example, let's take a simple integer function defined in myClass.hpp:

// myClass.hpp

class myClass
{
public:
  int Factorial(int n);  // Returns the factorial of n
};

A test suite for this function might look like the following (myTestFile_test.cpp):

// myTestFile_test.cpp

#include "gtest/gtest.h"

// Tests factorial of 0.
TEST(FactorialTest, HandlesZeroInput)
{
  myClass objExample;
  EXPECT_EQ(objExample.Factorial(0), 1);
}

// Tests factorial of positive numbers.
TEST(FactorialTest, HandlesPositiveInput)
{
  myClass objExample;
  EXPECT_EQ(objExample.Factorial(1), 1);
  EXPECT_EQ(objExample.Factorial(2), 2);
  EXPECT_EQ(objExample.Factorial(3), 6);
  EXPECT_EQ(objExample.Factorial(8), 40320);
}

Test Fixtures

Personally, this is my favorite way to implement unit tests, because it is really clean and enables the use of mocks.

To create a fixture:

  • Derive a class from ::testing::Test:
class myTest : public ::testing::Test
{
public:
  myTest()
  {
     // Empty
  }

  ~myTest()
  {
     // Empty
  }

protected:
 .....
};
  • Start its body with protected:, as we'll want to access fixture members from sub-classes.
  • Inside the class, declare any objects you plan to use.
  • If necessary, write a default constructor or SetUp() function to prepare the objects for each test. A common mistake is to spell SetUp() as Setup() with a lowercase u; use the override keyword in C++11 to make sure you spelled it correctly.
  • If necessary, write a destructor or TearDown() function to release any resources you allocated in SetUp() . To learn when you should use the constructor/destructor and when you should use SetUp()/TearDown(), read the FAQ.
  • If needed, define subroutines for your tests to share.
  • When using a fixture, use TEST_F() instead of TEST() as it allows you to access objects and subroutines in the test fixture:
TEST_F(TestFixtureName, TestName)
{
  ... test body ...
}

For example, let's take a simple integer function defined in the following class (what2Test.hpp):

// what2Test.hpp

class what2Test
{
public:
  what2Test()
  {
    //Empty
  }
  
  ~what2Test()
  {
    //Empty
  }
  
  int retrieveValueProvided(const int a)
  {
    return a;
  }
};

A test suite for this class might look like the following (what2Test_test.cpp):

// what2Test_test.cpp

#include <gtest/gtest.h>
#include "include/what2Test.hpp"

class what2Test_test : public ::testing::Test
{
public:
 what2Test_test()
    : _pWhat2Test(new what2Test())
 {
     //Empty
 }

 ~what2Test_test()
 {
    delete _pWhat2Test;
 }

protected:
   what2Test *_pWhat2Test;
};

TEST_F(what2Test_test, success)
{
   ASSERT_EQ(2, _pWhat2Test->retrieveValueProvided(2));
}

TEST_F(what2Test_test, failed)
{
   ASSERT_NE(3, _pWhat2Test->retrieveValueProvided(2));
}

Temporarily Disabling Tests

If you have a broken test that you cannot fix right away, you can add the DISABLED_ prefix to its name. This will exclude it from execution. This is better than commenting out the code or using #if 0, as disabled tests are still compiled (and thus won't rot).

If you need to disable all tests in a test suite, you can either add DISABLED_ to the front of the name of each test, or alternatively add it to the front of the test suite name, as follows (based on the previous example):

  • Disabling the whole test suite:
class DISABLED_what2Test_test : public ::testing::Test
{
 ....
};
  • Disabling specific tests:
// Tests that what2Test_test succeeds.
TEST_F(what2Test_test, DISABLED_success)
{
   ...
}

// Tests that what2Test_test fails.
TEST_F(DISABLED_what2Test_test, failed)
{
   ...
}
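Disabled tests are still reported as disabled in the output. When you want to check whether they are still broken, googletest provides a flag to run them anyway (myTests is a placeholder binary name):

./myTests --gtest_also_run_disabled_tests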

Writing the main() Function

The preferable way to do this is to create a separate file for it, as follows (main.cpp):

// main.cpp
#include <gtest/gtest.h>

int main(int argc, char **argv)
{
   ::testing::InitGoogleTest(&argc, argv);
   return RUN_ALL_TESTS();
}
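Writing your own main() also makes it easy to pass googletest command-line flags to the binary. In particular, --gtest_filter lets you run only a subset of tests, which supports the earlier advice about running just the tests affected by a change (myTests is a placeholder binary name):

./myTests --gtest_list_tests                     # list all registered tests
./myTests --gtest_filter=FactorialTest.*         # run every test in the FactorialTest suite
./myTests --gtest_filter=what2Test_test.success  # run a single test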

Google Mock

Installing

First of all we need to install the gmock development package (steps for Ubuntu 16.04):

sudo apt-get install google-mock
sudo apt-get install cmake # install cmake (just in case you don't have it)
cd /usr/src/gmock
sudo cmake CMakeLists.txt
sudo make
sudo cp *.a /usr/lib
sudo mkdir -p /usr/local/lib/gmock # make sure the symlink target directory exists
sudo ln -s /usr/lib/libgmock.a /usr/local/lib/gmock/libgmock.a
sudo ln -s /usr/lib/libgmock_main.a /usr/local/lib/gmock/libgmock_main.a
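As with googletest, a test binary that uses mocks must also link against the gmock library. A minimal sketch (file names are placeholders); gmock must precede gtest in the link order, since it depends on it:

g++ -std=c++11 what2Test_test.cpp main.cpp -lgmock -lgtest -pthread -o myTests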

MOCK_METHODn Macros

Before the generic MOCK_METHOD macro was introduced, mocks were created using a family of macros collectively called MOCK_METHODn. These macros are still supported, though migration to the new MOCK_METHOD is recommended.

The macros in the MOCK_METHODn family differ from MOCK_METHOD:

  • The general structure is MOCK_METHODn(MethodName, ReturnType(Args)), instead of MOCK_METHOD(ReturnType, MethodName, (Args)).
  • The number n must equal the number of arguments.
  • When mocking a const method, one must use MOCK_CONST_METHODn.
  • When mocking a class template, the macro name must be suffixed with _T.
  • In order to specify the call type, the macro name must be suffixed with _WITH_CALLTYPE, and the call type is the first macro argument.

Old macros and their new equivalents (n=1):

File:Mock-method-table.png
Figure 2. Old macros and their new equivalents.
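As a sketch of the mapping shown in Figure 2 (based on the gMock documentation, for a method bool Foo(int)):

// Old style:
MOCK_METHOD1(Foo, bool(int));
// New style:
MOCK_METHOD(bool, Foo, (int));

// Old style, const method:
MOCK_CONST_METHOD1(Foo, bool(int));
// New style:
MOCK_METHOD(bool, Foo, (int), (const));

// Old style, method of a class template:
MOCK_METHOD1_T(Foo, bool(int));
// New style (no suffix needed):
MOCK_METHOD(bool, Foo, (int));

// Old style, with call type:
MOCK_METHOD1_WITH_CALLTYPE(STDMETHODCALLTYPE, Foo, bool(int));
// New style:
MOCK_METHOD(bool, Foo, (int), (Calltype(STDMETHODCALLTYPE)));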

Setting Expectations

Knowing When to Expect

There are basically two constructs for defining the behavior of a mock object: ON_CALL and EXPECT_CALL. The difference? ON_CALL defines what happens when a mock method is called, but doesn't imply any expectation on the method being called. EXPECT_CALL not only defines the behavior, but also sets an expectation that the method will be called with the given arguments, for the given number of times (and in the given order when you specify the order too).

As a rule of thumb, use ON_CALL by default, and only use EXPECT_CALL when you actually intend to verify that the call is made. For example, you may have a bunch of ON_CALLs in your test fixture to set the common mock behavior shared by all tests in the same group, and write (sparingly) different EXPECT_CALLs in different TEST_Fs to verify different aspects of the code's behavior.


Setting Default Actions ON_CALL()

To customize the default action for a particular method of a specific mock object, use ON_CALL(). ON_CALL() has a similar syntax to EXPECT_CALL(), but it is used for setting default behaviors (when you do not require that the mock method is called):

ON_CALL(mock-object, method(matchers))
    .With(multi-argument-matcher)   ?
    .WillByDefault(action);

Note: '?' means it can be used at most once.


Setting Expectations EXPECT_CALL()

EXPECT_CALL() sets expectations on a mock method (How will it be called? What will it do?):

EXPECT_CALL(mock-object, method (matchers)?)
     .With(multi-argument-matcher)  ?
     .Times(cardinality)            ?
     .InSequence(sequences)         *
     .After(expectations)           *
     .WillOnce(action)              *
     .WillRepeatedly(action)        ?
     .RetiresOnSaturation();        ?
  • For each item above, '?' means it can be used at most once, while '*' means it can be used any number of times.
  • In order to pass, EXPECT_CALL must be used before the calls are actually made.
  • The (matchers) part is a comma-separated list of matchers that correspond to each of the arguments of method, and it sets the expectation only for calls of method that match all of the matchers.
  • If (matchers) is omitted, the expectation is the same as if the matchers were set to anything matchers (for example, (_, _, _, _) for a four-arg method).
  • If Times() is omitted, the cardinality is assumed to be:
    • Times(1) when there is neither WillOnce() nor WillRepeatedly();
    • Times(n) when there are n WillOnce()s but no WillRepeatedly(), where n >= 1; or
    • Times(AtLeast(n)) when there are n WillOnce()s and a WillRepeatedly(), where n >= 0.
  • Cardinalities are used in Times() to specify how many times a mock function will be called:

AnyNumber()     | The function can be called any number of times.
AtLeast(n)      | The call is expected at least n times.
AtMost(n)       | The call is expected at most n times.
Between(m, n)   | The call is expected between m and n (inclusive) times.
Exactly(n) or n | The call is expected exactly n times. In particular, the call should never happen when n is 0.

Note: A method with no EXPECT_CALL() is free to be invoked any number of times, and the default action will be taken each time.
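A minimal sketch of EXPECT_CALL with an explicit cardinality; it reuses the MockWhat2Test class that is defined later in this wiki (the values are arbitrary):

#include <gtest/gtest.h>
#include <gmock/gmock.h>
#include "include/mock/MockWhat2Test.h"

TEST(ExpectCallSketch, cardinality)
{
   ::testing::NiceMock<MockWhat2Test> mock;

   // expect at least two calls: the first returns 1, later calls return 2
   EXPECT_CALL(mock, retrieveValueProvided(::testing::_))
       .Times(::testing::AtLeast(2))
       .WillOnce(::testing::Return(1))
       .WillRepeatedly(::testing::Return(2));

   EXPECT_EQ(1, mock.retrieveValueProvided(5));
   EXPECT_EQ(2, mock.retrieveValueProvided(5));
}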


Actions

Actions specify what a mock function should do when invoked. Here are the possible actions that can be used when a mock is invoked:

  • Returning a Value:
Return()                        | Return from a void mock function.
Return(value)                   | Return value. If the type of value is different from the mock function's return type, value is converted to the latter type at the time the expectation is set, not when the action is executed.
ReturnArg<N>()                  | Return the N-th (0-based) argument.
ReturnNew<T>(a1, ..., ak)       | Return new T(a1, ..., ak); a different object is created each time.
ReturnNull()                    | Return a null pointer.
ReturnPointee(ptr)              | Return the value pointed to by ptr.
ReturnRef(variable)             | Return a reference to variable.
ReturnRefOfCopy(value)          | Return a reference to a copy of value; the copy lives as long as the action.
ReturnRoundRobin({a1, ..., ak}) | Each call will return the next ai in the list, starting at the beginning when the end of the list is reached.
  • Side Effects:
Assign(&variable, value)         | Assign value to variable.
DeleteArg<N>()                   | Delete the N-th (0-based) argument, which must be a pointer.
SaveArg<N>(pointer)              | Save the N-th (0-based) argument to *pointer.
SaveArgPointee<N>(pointer)       | Save the value pointed to by the N-th (0-based) argument to *pointer.
SetArgReferee<N>(value)          | Assign value to the variable referenced by the N-th (0-based) argument.
SetArgPointee<N>(value)          | Assign value to the variable pointed to by the N-th (0-based) argument.
SetArgumentPointee<N>(value)     | Same as SetArgPointee<N>(value). Deprecated. Will be removed in v1.7.0.
SetArrayArgument<N>(first, last) | Copies the elements in the source range [first, last) to the array pointed to by the N-th (0-based) argument, which can be either a pointer or an iterator. The action does not take ownership of the elements in the source range.
SetErrnoAndReturn(error, value)  | Set errno to error and return value.
Throw(exception)                 | Throws the given exception, which can be any copyable value. Available since v1.1.0.
  • Using a Function, Functor, or Lambda as an Action:

Note: In the following, by "callable" we mean a free function, std::function, functor, or lambda.

f                                                 | Invoke f with the arguments passed to the mock function, where f is a callable.
Invoke(f)                                         | Invoke f with the arguments passed to the mock function, where f can be a global/static function or a functor.
Invoke(object_pointer, &class::method)            | Invoke the method on the object with the arguments passed to the mock function.
InvokeWithoutArgs(f)                              | Invoke f, which can be a global/static function or a functor. f must take no arguments.
InvokeWithoutArgs(object_pointer, &class::method) | Invoke the method on the object, which takes no arguments.
InvokeArgument<N>(arg1, arg2, ..., argk)          | Invoke the mock function's N-th (0-based) argument, which must be a function or a functor, with the k arguments.
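A minimal sketch combining actions; DoAll is the gMock helper that runs several actions on a single call (again reusing the MockWhat2Test class defined below):

#include <gtest/gtest.h>
#include <gmock/gmock.h>
#include "include/mock/MockWhat2Test.h"

TEST(ActionsSketch, saveArgAndReturn)
{
   ::testing::NiceMock<MockWhat2Test> mock;
   int captured = 0;

   // save the call argument into 'captured' and return 7
   EXPECT_CALL(mock, retrieveValueProvided(::testing::_))
       .WillOnce(::testing::DoAll(::testing::SaveArg<0>(&captured),
                                  ::testing::Return(7)));

   EXPECT_EQ(7, mock.retrieveValueProvided(5));
   EXPECT_EQ(5, captured);
}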


Creating Mock Classes

Using the what2Test.hpp file as an example, here are the simple steps you need to follow:

1) First of all, you need to create an interface for the implementation class and then derive the class from it. This also helps you to keep a polymorphic design in your project.

// Iwhat2Test.h
...
namespace interfaces
{
struct Iwhat2Test
{
   virtual ~Iwhat2Test() {}
   virtual int retrieveValueProvided(const int a) = 0;
};
}
// what2Test.hpp
...
#include "interfaces/Iwhat2Test.h"
...
class what2Test : public interfaces::Iwhat2Test
{
 ...

 int retrieveValueProvided(const int a) override
 {
   return a;
 }

 ...
};

2) Derive a mock class from the interface; your file MockWhat2Test.h is going to contain the MOCK_METHODn form of every virtual method described in the interface (take a look at the MOCK_METHODn Macros section to create your mock class):

// MockWhat2Test.h
#include <gmock/gmock.h>
#include "include/interfaces/Iwhat2Test.h"

class MockWhat2Test : public interfaces::Iwhat2Test
{
public:
  MOCK_METHOD1(retrieveValueProvided, int(const int a));
};

Using Mocks in Tests

  • Once the mock class has been created, you can use it in the same test suite as the implementation class; that usage mainly applies when the class to be tested needs physical resources, or when the resource handled by the class is too heavy to add to the unit test environment.
  • In order to create a mock instance in your test class you should use:
::testing::NiceMock<>

For example, let's use the MockWhat2Test class in what2Test_test.cpp:

// what2Test_test.cpp

#include <gtest/gtest.h>
#include "include/what2Test.hpp"
#include "include/mock/MockWhat2Test.h"

class what2Test_test : public ::testing::Test
{
public:
 what2Test_test()
    : _pWhat2Test(new what2Test())
    , _pMockWhat2Test(new ::testing::NiceMock<MockWhat2Test>)
 {
     //Empty
 }

 ~what2Test_test()
 {
    delete _pWhat2Test;
    delete _pMockWhat2Test;
 }

protected:
   what2Test *_pWhat2Test;
   MockWhat2Test *_pMockWhat2Test;
};

TEST_F(what2Test_test, success)
{
   // set the default result via ON_CALL for when _pMockWhat2Test is invoked
   ON_CALL(*_pMockWhat2Test, retrieveValueProvided(2)).WillByDefault(testing::Return(2));

   // mock object execution, this invokes _pMockWhat2Test
   ASSERT_EQ(2, _pMockWhat2Test->retrieveValueProvided(2));

   // real object execution, this uses the real instance of what2Test class
   ASSERT_EQ(2, _pWhat2Test->retrieveValueProvided(2));
}

TEST_F(what2Test_test, failed)
{
   // set the default result via ON_CALL for when _pMockWhat2Test is invoked
   ON_CALL(*_pMockWhat2Test, retrieveValueProvided(2)).WillByDefault(testing::Return(3));

   // mock object execution, this invokes _pMockWhat2Test
   ASSERT_NE(2, _pMockWhat2Test->retrieveValueProvided(2));

   // real object execution, this uses the real instance of what2Test class
   ASSERT_NE(3, _pWhat2Test->retrieveValueProvided(2));
}
  • On the other hand, if the implementation class you are about to write unit tests for needs another object to work properly, it is important to inject that object into the implementation class, or to inject an object which returns it via some method; whichever way you use, every external object must have its own mock class.

Let's see the next example, where the what2Test class is used by a Foo class:

// Foo.hpp
#include "include/interfaces/Iwhat2Test.h"

class Foo
{
public:
  Foo(interfaces::Iwhat2Test *pWhat2Test)
    : _pWhat2Test(pWhat2Test)
  {
    //Empty
  }
  
  ~Foo()
  {
    //Empty
  }
  
  bool isFoo(const int foo)
  {
    if(!_pWhat2Test)
    {
      return false;
    }

    return (foo == _pWhat2Test->retrieveValueProvided(foo));
  }

private:
  interfaces::Iwhat2Test *_pWhat2Test;
};


Then the Foo_test class should look as follows:

// Foo_test.cpp
#include <gtest/gtest.h>
#include "include/Foo.hpp"
#include "include/mock/MockWhat2Test.h"

class Foo_test : public ::testing::Test
{
public:
 Foo_test()
    : _pMockWhat2Test(new ::testing::NiceMock<MockWhat2Test>)
    , _pFoo(new Foo(_pMockWhat2Test))
 {
     //Empty
 }

 ~Foo_test()
 {
    // delete Foo first, since it holds a pointer to the mock
    delete _pFoo;
    delete _pMockWhat2Test;
 }

protected:
   MockWhat2Test *_pMockWhat2Test;
   Foo *_pFoo;
   int _foo = 1;
};

TEST_F(Foo_test, success)
{
   // set the default result via ON_CALL for when _pMockWhat2Test is invoked
   ON_CALL(*_pMockWhat2Test, retrieveValueProvided(::testing::_)).WillByDefault(testing::Return(_foo));

   // real Foo object execution; internally Foo calls the injected mock
   EXPECT_TRUE(_pFoo->isFoo(_foo));
}

TEST_F(Foo_test, failed)
{
   // set the default result via ON_CALL for when _pMockWhat2Test is invoked
   ON_CALL(*_pMockWhat2Test, retrieveValueProvided(::testing::_)).WillByDefault(testing::Return(0));

   // real Foo object execution; internally Foo calls the injected mock
   EXPECT_FALSE(_pFoo->isFoo(_foo));
}

How to debug your unit tests

This is a really important step. It does not matter if your tests reported "PASSED": you must debug every section of the test suite you implemented, especially when you used mocks to mimic objects. Sometimes you get the marvelous "PASSED" result by coincidence while the mocks are not being used properly, and you will only find out several weeks later when you make a small change in the implementation class.

You can use whatever debugging tool you prefer; I recommend gdb.

Debugging a binary test file with gdb

1) Launch the binary by using gdb:

gdb /path/to/binary/dummy_app.test

2) Use gdb commands to stop the test in a specific place and then verify the values assigned to variables during test execution:

# breakpoint in the what2Test::retrieveValueProvided(const int a) function
(gdb) break what2Test.hpp:15
(gdb) run
(gdb) printf "value to retrieve: %d\n", a