What you’ve seen so far is all you need for basic unit testing. The features described in this section are additions to Check that make it easier for the developer to write, run, and analyze tests.
Using the ck_assert function for all tests can lead to a lot of repetitive code that is hard to read. For your convenience, Check provides a set of functions (actually macros) for testing commonly used conditions.
The typical size of an assertion message is less than 80 bytes. However, some of the functions listed below can generate very large messages (allocations of up to 4GB have been seen in the wild). To prevent this, a limit is placed on the assertion message size; the default limit is 4K bytes. The limit can be modified by setting the CK_MAX_MSG_SIZE environment variable or, if that is not set, by calling the check_set_max_msg_size() function. If used, this function must be called once, before the first assertion.
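For example, a test driver could raise the limit before any assertions run (a minimal sketch; the 8192-byte value here is arbitrary):

#include <check.h>

int main(void)
{
    /* Must be called once, before the first assertion is evaluated. */
    check_set_max_msg_size(8192);

    /* ... create suites and run them as usual ... */
    return 0;
}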
ck_abort
Unconditionally fails test with default message.
ck_abort_msg
Unconditionally fails test with user supplied message.
ck_assert
Fails test if supplied condition evaluates to false.
ck_assert_msg
Fails test if supplied condition evaluates to false and displays user provided message.
ck_assert_int_eq
ck_assert_int_ne
ck_assert_int_lt
ck_assert_int_le
ck_assert_int_gt
ck_assert_int_ge
Compares two signed integer values (intmax_t) and displays a predefined message with both the condition and the input parameters on failure. The operator used for comparison is different for each function and is indicated by the last two letters of the function name. The abbreviations eq, ne, lt, le, gt, and ge correspond to ==, !=, <, <=, >, and >= respectively.
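For example, the tutorial's Money tests could use these macros directly (a small sketch reusing money_create(), money_amount() and money_free() from the example code):

START_TEST(test_amount_comparisons)
{
    Money *m = money_create(5, "USD");

    ck_assert_int_eq(money_amount(m), 5);   /* passes only if amount == 5 */
    ck_assert_int_gt(money_amount(m), 0);   /* passes only if amount > 0 */
    money_free(m);
}
END_TEST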
ck_assert_uint_eq
ck_assert_uint_ne
ck_assert_uint_lt
ck_assert_uint_le
ck_assert_uint_gt
ck_assert_uint_ge
Similar to ck_assert_int_*, but compares two unsigned integer values (uintmax_t) instead.
ck_assert_float_eq
ck_assert_float_ne
ck_assert_float_lt
ck_assert_float_le
ck_assert_float_gt
ck_assert_float_ge
Compares two floating point numbers (float) and displays a predefined message with both the condition and the input parameters on failure. The operator used for comparison is different for each function and is indicated by the last two letters of the function name. The abbreviations eq, ne, lt, le, gt, and ge correspond to ==, !=, <, <=, >, and >= respectively.
Beware of using these operators with floating point numbers, because precision can be lost in every floating point operation. For example, (1/3)*3 == 1 may evaluate to false, because 1/3 is 0.333... (0.(3) in the repeating-decimal notation used in parts of Europe) and cannot be represented exactly in binary floating point. As another example, 1.1f may in fact be 1.10000002384185791015625 and 2.1f may be 2.099999904632568359375, because of the binary representation of floating point numbers.
If your code applies a variety of mathematical operations to floating point numbers, consider using the tolerance-based comparisons below, or integer arithmetic, instead. In some cases, however, these operators can be used safely. For example, if you cyclically increment a floating point number only by positive values, or only by negative values, then you may use the <, <=, > and >= operators in tests. If your computations must end up at one exact value, then the == and != operators may be used.
ck_assert_double_eq
ck_assert_double_ne
ck_assert_double_lt
ck_assert_double_le
ck_assert_double_gt
ck_assert_double_ge
Similar to ck_assert_float_*, but compares two double precision floating point values (double) instead.
ck_assert_ldouble_eq
ck_assert_ldouble_ne
ck_assert_ldouble_lt
ck_assert_ldouble_le
ck_assert_ldouble_gt
ck_assert_ldouble_ge
Similar to ck_assert_float_*, but compares two floating point values of type long double instead.
ck_assert_float_eq_tol
ck_assert_float_ne_tol
ck_assert_float_le_tol
ck_assert_float_ge_tol
Compares two floating point numbers (float) within a user-specified tolerance given by the third parameter (float), and displays a predefined message with both the condition and the input parameters on failure. The abbreviations eq, ne, le, and ge correspond to ==, !=, <=, and >= respectively, with the acceptable error (tolerance) specified by the last parameter.
Beware of using these functions for floating point comparisons, because of (1) errors arising from the floating point representation itself, (2) rounding errors, and (3) the fact that floating point errors are platform dependent. Floating point numbers are usually represented internally in binary, so exact powers of 10 cannot be represented. All of these operators carry significant comparison error, so use them only if you know what you are doing. Some assertions may fail on one platform while passing on another. For example, the expression 0.02 <= 0.01 + 10^-2 is mathematically true, but some platforms may evaluate it as false. The IEEE 754 standard specifies the representation of floating point numbers, but it does not promise that the same computation carried out on different hardware will produce the same result.
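As an illustration of the tolerance style, the following sketch accepts the accumulated rounding error of repeatedly adding 0.1:

START_TEST(test_sum_within_tolerance)
{
    double sum = 0.0;
    int i;

    for (i = 0; i < 10; i++)
        sum += 0.1;

    /* 0.1 has no exact binary representation, so sum may not be
       exactly 1.0; compare within a tolerance instead of using ==. */
    ck_assert_double_eq_tol(sum, 1.0, 1e-9);
}
END_TEST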
ck_assert_double_eq_tol
ck_assert_double_ne_tol
ck_assert_double_le_tol
ck_assert_double_ge_tol
Similar to ck_assert_float_*_tol, but compares two double precision floating point values (double) instead.
ck_assert_ldouble_eq_tol
ck_assert_ldouble_ne_tol
ck_assert_ldouble_le_tol
ck_assert_ldouble_ge_tol
Similar to ck_assert_float_*_tol, but compares two floating point values of type long double instead.
ck_assert_float_finite
Checks that a floating point number (float) is finite and displays a predefined message with both the condition and the input parameter on failure. Finite means that the value is neither positive infinity, negative infinity, nor NaN ("Not a Number").
ck_assert_double_finite
Similar to ck_assert_float_finite, but checks a double precision floating point value (double) instead.
ck_assert_ldouble_finite
Similar to ck_assert_float_finite, but checks a floating point value of type long double instead.
ck_assert_float_infinite
Checks that a floating point number (float) is infinite and displays a predefined message with both the condition and the input parameter on failure. Infinite means that the value may only be positive or negative infinity.
ck_assert_double_infinite
Similar to ck_assert_float_infinite, but checks a double precision floating point value (double) instead.
ck_assert_ldouble_infinite
Similar to ck_assert_float_infinite, but checks a floating point value of type long double instead.
ck_assert_float_nan
Checks that a floating point number (float) is NaN ("Not a Number") and displays a predefined message with both the condition and the input parameter on failure.
ck_assert_double_nan
Similar to ck_assert_float_nan, but checks a double precision floating point value (double) instead.
ck_assert_ldouble_nan
Similar to ck_assert_float_nan, but checks a floating point value of type long double instead.
ck_assert_float_nonnan
Checks that a floating point number (float) is not NaN ("Not a Number") and displays a predefined message with both the condition and the input parameter on failure.
ck_assert_double_nonnan
Similar to ck_assert_float_nonnan, but checks a double precision floating point value (double) instead.
ck_assert_ldouble_nonnan
Similar to ck_assert_float_nonnan, but checks a floating point value of type long double instead.
ck_assert_str_eq
ck_assert_str_ne
ck_assert_str_lt
ck_assert_str_le
ck_assert_str_gt
ck_assert_str_ge
Compares two null-terminated char * string values, using the strcmp() function internally, and displays a predefined message with the condition and input parameter values on failure. The comparison operator is again indicated by the last two letters of the function name. ck_assert_str_lt(a, b) will pass if the unsigned numerical value of the character string a is less than that of b. If a NULL pointer is passed to any of these comparison macros, the check fails.
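A short sketch:

START_TEST(test_currency_strings)
{
    const char *currency = "USD";

    ck_assert_str_eq(currency, "USD");  /* passes: strcmp() returns 0 */
    ck_assert_str_lt("EUR", currency);  /* passes: "EUR" compares below "USD" */
}
END_TEST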
ck_assert_pstr_eq
ck_assert_pstr_ne
Similar to the ck_assert_str_* macros, but able to handle undefined strings: passing a NULL pointer to one of these macros means that the string is undefined. If both strings are undefined, ck_assert_pstr_eq passes and ck_assert_pstr_ne fails. If only one of the strings is undefined, ck_assert_pstr_eq fails and ck_assert_pstr_ne passes.
ck_assert_ptr_eq
ck_assert_ptr_ne
Compares two pointers and displays a predefined message with the condition and the values of both input parameters on failure. The operator used for comparison is different for each function and is indicated by the last two letters of the function name. The abbreviations eq and ne correspond to == and != respectively.
ck_assert_ptr_null
ck_assert_ptr_nonnull
Compares a pointer against NULL and displays a predefined message with the condition and the value of the input parameter on failure. ck_assert_ptr_null checks that the pointer is equal to NULL, and ck_assert_ptr_nonnull checks that it is not. Using ck_assert_ptr_nonnull is highly recommended in situations where a function call can return NULL to indicate an error (such as functions that use malloc, calloc, strdup, mmap, etc.).
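A minimal sketch (assuming <stdlib.h> is included for malloc() and free()):

START_TEST(test_allocation_succeeds)
{
    char *buf = malloc(64);

    /* Fail here with a clear message instead of crashing later. */
    ck_assert_ptr_nonnull(buf);

    buf[0] = '\0';
    free(buf);
}
END_TEST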
ck_assert_mem_eq
ck_assert_mem_ne
ck_assert_mem_lt
ck_assert_mem_le
ck_assert_mem_gt
ck_assert_mem_ge
Compares the contents of two memory locations of the given length, using the memcmp() function internally, and displays a predefined message with the condition and input parameter values on failure. The comparison operator is again indicated by the last two letters of the function name. ck_assert_mem_lt(a, b) will pass if the unsigned numerical value of memory location a is less than that of b.
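For example, comparing a fixed number of bytes (a minimal sketch):

START_TEST(test_magic_bytes)
{
    const unsigned char expected[4] = { 0x7f, 'E', 'L', 'F' };
    unsigned char actual[4] = { 0x7f, 'E', 'L', 'F' };

    /* Passes because memcmp(expected, actual, 4) == 0. */
    ck_assert_mem_eq(expected, actual, 4);
}
END_TEST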
fail
(Deprecated) Unconditionally fails test with user supplied message.
fail_if
(Deprecated) Fails test if supplied condition evaluates to true and displays user provided message.
fail_unless
(Deprecated) Fails test if supplied condition evaluates to false and displays user provided message.
What happens if we pass -1 as the amount in money_create()? What should happen? Let’s write a unit test. Since we are now testing limits, we should also test what happens when we create Money where amount == 0. Let’s put these in a separate test case called “Limits” so that money_suite is changed like so:
--- tests/check_money.3.c	2020-06-21 08:52:50.000000000 -0700
+++ tests/check_money.6.c	2020-06-21 08:52:50.000000000 -0700
@@ -1,65 +1,94 @@
 /*
  * Check: a unit test framework for C
  * Copyright (C) 2001, 2002 Arien Malec
  *
  * This library is free software; you can redistribute it and/or
  * modify it under the terms of the GNU Lesser General Public
  * License as published by the Free Software Foundation; either
  * version 2.1 of the License, or (at your option) any later version.
  *
  * This library is distributed in the hope that it will be useful,
  * but WITHOUT ANY WARRANTY; without even the implied warranty of
  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
  * Lesser General Public License for more details.
  *
  * You should have received a copy of the GNU Lesser General Public
  * License along with this library; if not, write to the
  * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, Boston,
  * MA 02110-1301, USA.
  */

 #include <stdlib.h>
 #include <check.h>
 #include "../src/money.h"

 START_TEST(test_money_create)
 {
     Money *m;

     m = money_create(5, "USD");
     ck_assert_int_eq(money_amount(m), 5);
     ck_assert_str_eq(money_currency(m), "USD");
     money_free(m);
 }
 END_TEST

+START_TEST(test_money_create_neg)
+{
+    Money *m = money_create(-1, "USD");
+
+    ck_assert_msg(m == NULL,
+                  "NULL should be returned on attempt to create with "
+                  "a negative amount");
+}
+END_TEST
+
+START_TEST(test_money_create_zero)
+{
+    Money *m = money_create(0, "USD");
+
+    if (money_amount(m) != 0)
+    {
+        ck_abort_msg("Zero is a valid amount of money");
+    }
+}
+END_TEST
+
 Suite *
 money_suite(void)
 {
     Suite *s;
     TCase *tc_core;
+    TCase *tc_limits;

     s = suite_create("Money");

     /* Core test case */
     tc_core = tcase_create("Core");

     tcase_add_test(tc_core, test_money_create);
     suite_add_tcase(s, tc_core);

+    /* Limits test case */
+    tc_limits = tcase_create("Limits");
+
+    tcase_add_test(tc_limits, test_money_create_neg);
+    tcase_add_test(tc_limits, test_money_create_zero);
+    suite_add_tcase(s, tc_limits);
+
     return s;
 }

 int
 main(void)
 {
     int number_failed;
     Suite *s;
     SRunner *sr;

     s = money_suite();
     sr = srunner_create(s);

     srunner_run_all(sr, CK_NORMAL);
     number_failed = srunner_ntests_failed(sr);
     srunner_free(sr);
     return (number_failed == 0) ? EXIT_SUCCESS : EXIT_FAILURE;
 }
Now we can rerun our suite, and fix the problem(s). Note that errors in the “Core” test case will be reported as “Core”, and errors in the “Limits” test case will be reported as “Limits”, giving you additional information about where things broke.
--- src/money.5.c	2020-06-21 08:52:50.000000000 -0700
+++ src/money.6.c	2020-06-21 08:52:50.000000000 -0700
@@ -1,58 +1,65 @@
 /*
  * Check: a unit test framework for C
  * Copyright (C) 2001, 2002 Arien Malec
  *
  * This library is free software; you can redistribute it and/or
  * modify it under the terms of the GNU Lesser General Public
  * License as published by the Free Software Foundation; either
  * version 2.1 of the License, or (at your option) any later version.
  *
  * This library is distributed in the hope that it will be useful,
  * but WITHOUT ANY WARRANTY; without even the implied warranty of
  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
  * Lesser General Public License for more details.
  *
  * You should have received a copy of the GNU Lesser General Public
  * License along with this library; if not, write to the
  * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, Boston,
  * MA 02110-1301, USA.
  */

 #include <stdlib.h>
 #include "money.h"

 struct Money
 {
     int amount;
     char *currency;
 };

 Money *money_create(int amount, char *currency)
 {
-    Money *m = malloc(sizeof(Money));
+    Money *m;
+
+    if (amount < 0)
+    {
+        return NULL;
+    }
+
+    m = malloc(sizeof(Money));

     if (m == NULL)
     {
         return NULL;
     }

     m->amount = amount;
     m->currency = currency;
     return m;
 }

 int money_amount(Money * m)
 {
     return m->amount;
 }

 char *money_currency(Money * m)
 {
     return m->currency;
 }

 void money_free(Money * m)
 {
     free(m);
     return;
 }
Check normally forks to create a separate address space. This allows a signal or early exit to be caught and reported, rather than taking down the entire test program, and is normally very useful. However, when you are trying to debug why a segmentation fault or other program error occurred, forking makes it difficult to use debugging tools. To define the fork mode for an SRunner object, set the CK_FORK environment variable, or use the following function:
void srunner_set_fork_status (SRunner * sr, enum fork_status fstat);
The enum fork_status allows the fstat parameter to assume the following values: CK_FORK and CK_NOFORK. An explicit call to srunner_set_fork_status() overrides the CK_FORK environment variable.
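For example, to debug a crashing test under a debugger, forking can be disabled programmatically (a minimal sketch reusing the tutorial's money_suite()):

SRunner *sr = srunner_create(money_suite());

/* Run the tests in this process so the debugger sees the crash directly. */
srunner_set_fork_status(sr, CK_NOFORK);
srunner_run_all(sr, CK_NORMAL);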
We may want multiple tests that all use the same Money. In such cases, rather than setting up and tearing down objects for each unit test, it may be convenient to add some setup that is constant across all the tests in a test case. Each such setup/teardown pair is called a test fixture in test-driven development jargon.
A fixture is created by defining a setup and/or a teardown function, and associating it with a test case. There are two kinds of test fixtures in Check: checked and unchecked fixtures. These are defined as follows:
Checked fixtures are run inside the address space created by the fork that creates the unit test. Before each unit test in a test case, the setup() function is run, if defined. After each unit test, the teardown() function is run, if defined. Since they run inside the forked address space, if checked fixtures signal or otherwise fail, they will be caught and reported by the SRunner. A checked teardown() fixture will not run if the unit test fails.
Unchecked fixtures are run in the same address space as the test program. Therefore they may not signal or exit, but they may use the fail functions. The unchecked setup(), if defined, is run before the test case is started. The unchecked teardown(), if defined, is run after the test case is done. An unchecked teardown() fixture will run even if a unit test fails.
An important difference is that the checked fixtures are run once per
unit test and the unchecked fixtures are run once per test case.
So for a test case that contains check_one() and check_two() unit tests, checked_setup()/checked_teardown() checked fixtures, and unchecked_setup()/unchecked_teardown() unchecked fixtures, the control flow would be:
unchecked_setup();

fork();
checked_setup();
check_one();
checked_teardown();
wait();

fork();
checked_setup();
check_two();
checked_teardown();
wait();

unchecked_teardown();
4.4.1 Test Fixture Examples
4.4.2 Checked vs Unchecked Fixtures
We create a test fixture in Check as follows:

1) Define a setup() function and a teardown() function, along with any global state they manage. Both functions must take void and return void. In our example, we’ll make five_dollars a global that is created and freed by setup() and teardown() respectively.

2) Add the setup() and teardown() functions to the test case with tcase_add_checked_fixture(). In our example, this belongs in the suite setup function money_suite.

3) Rewrite the tests to use the objects created by the fixture; in our example, five_dollars.

Note that the functions used for setup and teardown do not need to be named setup() and teardown(), but they must take void and return void. We’ll update ‘check_money.c’ with the following patch:
--- tests/check_money.6.c	2020-06-21 08:52:50.000000000 -0700
+++ tests/check_money.7.c	2020-06-21 08:52:50.000000000 -0700
@@ -1,94 +1,103 @@
 /*
  * Check: a unit test framework for C
  * Copyright (C) 2001, 2002 Arien Malec
  *
  * This library is free software; you can redistribute it and/or
  * modify it under the terms of the GNU Lesser General Public
  * License as published by the Free Software Foundation; either
  * version 2.1 of the License, or (at your option) any later version.
  *
  * This library is distributed in the hope that it will be useful,
  * but WITHOUT ANY WARRANTY; without even the implied warranty of
  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
  * Lesser General Public License for more details.
  *
  * You should have received a copy of the GNU Lesser General Public
  * License along with this library; if not, write to the
  * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, Boston,
  * MA 02110-1301, USA.
  */

 #include <stdlib.h>
 #include <check.h>
 #include "../src/money.h"

-START_TEST(test_money_create)
+Money *five_dollars;
+
+void setup(void)
+{
+    five_dollars = money_create(5, "USD");
+}
+
+void teardown(void)
 {
-    Money *m;
+    money_free(five_dollars);
+}

-    m = money_create(5, "USD");
-    ck_assert_int_eq(money_amount(m), 5);
-    ck_assert_str_eq(money_currency(m), "USD");
-    money_free(m);
+START_TEST(test_money_create)
+{
+    ck_assert_int_eq(money_amount(five_dollars), 5);
+    ck_assert_str_eq(money_currency(five_dollars), "USD");
 }
 END_TEST

 START_TEST(test_money_create_neg)
 {
     Money *m = money_create(-1, "USD");

     ck_assert_msg(m == NULL,
                   "NULL should be returned on attempt to create with "
                   "a negative amount");
 }
 END_TEST

 START_TEST(test_money_create_zero)
 {
     Money *m = money_create(0, "USD");

     if (money_amount(m) != 0)
     {
         ck_abort_msg("Zero is a valid amount of money");
     }
 }
 END_TEST

 Suite *
 money_suite(void)
 {
     Suite *s;
     TCase *tc_core;
     TCase *tc_limits;

     s = suite_create("Money");

     /* Core test case */
     tc_core = tcase_create("Core");

+    tcase_add_checked_fixture(tc_core, setup, teardown);
     tcase_add_test(tc_core, test_money_create);
     suite_add_tcase(s, tc_core);

     /* Limits test case */
     tc_limits = tcase_create("Limits");

     tcase_add_test(tc_limits, test_money_create_neg);
     tcase_add_test(tc_limits, test_money_create_zero);
     suite_add_tcase(s, tc_limits);

     return s;
 }

 int
 main(void)
 {
     int number_failed;
     Suite *s;
     SRunner *sr;

     s = money_suite();
     sr = srunner_create(s);

     srunner_run_all(sr, CK_NORMAL);
     number_failed = srunner_ntests_failed(sr);
     srunner_free(sr);
     return (number_failed == 0) ? EXIT_SUCCESS : EXIT_FAILURE;
 }
Checked fixtures run once for each unit test in a test case, so they should not be used for expensive setup. However, if a checked fixture fails and CK_FORK mode is being used, it will not bring down the entire framework.
On the other hand, unchecked fixtures run once for an entire test case, as opposed to once per unit test, and so can be used for expensive setup. However, since they may take down the entire test program, they should only be used if they are known to be safe.
Additionally, the isolation of objects created by unchecked fixtures is not guaranteed in CK_NOFORK mode. Normally, in CK_FORK mode, unit tests may abuse the objects created in an unchecked fixture with impunity, without affecting other unit tests in the same test case, because the fork creates a separate address space. However, in CK_NOFORK mode, all tests live in the same address space, and side effects in one test will affect the unchecked fixture for the other tests.
A checked fixture will generally not be affected by unit test side effects, since setup() is run before each unit test. There is an exception for side effects on the wider environment in which the test program lives: for example, if the setup() function initializes a file that a unit test then changes, the combination of the teardown() function and the setup() function must be able to restore the environment for the next unit test.
If the setup() function in a fixture fails, in either checked or unchecked fixtures, the unit tests for the test case and the teardown() function for the fixture will not be run. A fixture error will be created and reported to the SRunner.
In a large program, it will be convenient to create multiple suites, each testing a module of the program. While one can create several test programs, each running one Suite, it may be convenient to create one main test program and use it to run multiple suites. The Check test suite provides an example of how to do this. The main testing program is called check_check, and it has a header file that declares suite creation functions for all the module tests:
Suite *make_sub_suite (void);
Suite *make_sub2_suite (void);
Suite *make_master_suite (void);
Suite *make_list_suite (void);
Suite *make_msg_suite (void);
Suite *make_log_suite (void);
Suite *make_limit_suite (void);
Suite *make_fork_suite (void);
Suite *make_fixture_suite (void);
Suite *make_pack_suite (void);
The function srunner_add_suite() is used to add additional suites to an SRunner. Here is the code that sets up and runs the SRunner in the main() function in ‘check_check_main.c’:
SRunner *sr;

sr = srunner_create (make_master_suite ());
srunner_add_suite (sr, make_list_suite ());
srunner_add_suite (sr, make_msg_suite ());
srunner_add_suite (sr, make_log_suite ());
srunner_add_suite (sr, make_limit_suite ());
srunner_add_suite (sr, make_fork_suite ());
srunner_add_suite (sr, make_fixture_suite ());
srunner_add_suite (sr, make_pack_suite ());
After adding a couple of suites and some test cases in each, it is sometimes practical to be able to run only one suite, or one specific test case, without recompiling the test code. Check provides two ways to accomplish this, either by specifying a suite or test case by name or by assigning tags to test cases and specifying one or more tags to run.
4.6.1 Selecting Tests by Suite or Test Case
4.6.2 Selecting Tests Based on Arbitrary Tags
There are two environment variables that offer this ability, CK_RUN_SUITE and CK_RUN_CASE. Just set the value to the name of the suite and/or test case you want to run. These environment variables can also be a good integration tool for running specific tests from within another tool, e.g. an IDE.
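For example, to run only the “Limits” test case of the “Money” suite from the tutorial (assuming the test binary is named check_money, as in the earlier examples):

$ CK_RUN_SUITE=Money CK_RUN_CASE=Limits ./check_money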
It can be useful to dynamically include or exclude groups of tests based on criteria other than the suite or test case name. For example, one or more tags can be assigned to test cases. The tags could indicate whether a test runs for a long time, so that such tests can be excluded in order to run quicker tests as a sanity check. Alternatively, tags may be used to indicate which functional areas tests cover. Tests can then be run that include all test cases for a given set of functional areas.
In Check, a tag is a string of characters without white space. One or more tags can be assigned to a test case by using the tcase_set_tags function. This function accepts a string, and multiple tags can be specified by delimiting them with spaces. For example:
Suite *s;
TCase *red, *blue, *purple, *yellow, *black;

s = suite_create("Check Tag Filtering");

red = tcase_create("Red");
tcase_set_tags(red, "Red");
suite_add_tcase (s, red);
tcase_add_test(red, red_test1);

blue = tcase_create("Blue");
tcase_set_tags(blue, "Blue");
suite_add_tcase (s, blue);
tcase_add_test(blue, blue_test1);

purple = tcase_create("Purple");
tcase_set_tags(purple, "Red Blue");
suite_add_tcase (s, purple);
tcase_add_test(purple, purple_test1);
Once test cases are tagged they may be selectively run in one of two ways:
a) Using Environment Variables
There are two environment variables available for selecting test cases based on tags: CK_INCLUDE_TAGS and CK_EXCLUDE_TAGS. These can be set to a space separated list of tag names. If CK_INCLUDE_TAGS is set, then test cases which share at least one tag with CK_INCLUDE_TAGS will be run. If CK_EXCLUDE_TAGS is set, then test cases with at least one tag in common with CK_EXCLUDE_TAGS will not be run. In cases where both CK_INCLUDE_TAGS and CK_EXCLUDE_TAGS match a tag of a test case, the test will be excluded.
Both CK_INCLUDE_TAGS and CK_EXCLUDE_TAGS can be specified in conjunction with CK_RUN_SUITE or even CK_RUN_CASE, in which case they will have the effect of further narrowing the selection.
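Using the tagged suite from the example above (and a hypothetical test binary named check_tags):

$ CK_INCLUDE_TAGS=Red ./check_tags                        # runs "Red" and "Purple"
$ CK_INCLUDE_TAGS=Red CK_EXCLUDE_TAGS=Blue ./check_tags   # runs only "Red"

In the second invocation, "Purple" matches both the include and exclude lists, so it is excluded.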
b) Programmatically
The srunner_run_tagged function allows one to specify which tags to run or exclude from a suite runner. This can be used to programmatically control which test cases may run.
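A sketch of the programmatic form follows. The parameter order shown here is an assumption based on check.h, as is the convention that NULL suite and test case names mean "do not filter by name"; verify both against your installed header:

SRunner *sr = srunner_create(make_s1_suite());

/* Run test cases tagged "Red", excluding any also tagged "Blue". */
srunner_run_tagged(sr, NULL, NULL, "Red", "Blue", CK_NORMAL);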
To enable testing of signal handling, there is a function tcase_add_test_raise_signal() which is used instead of tcase_add_test(). This function takes an additional signal argument, specifying a signal that the test expects to receive. If no signal is received, this is logged as a failure. If a different signal is received, this is logged as an error.
The signal handling functionality only works in CK_FORK mode.
To enable testing of expected exits, there is a function tcase_add_exit_test() which is used instead of tcase_add_test(). This function takes an additional expected exit value argument, specifying the value that the test is expected to exit with. If the test exits with any other value, this is logged as a failure. If the test exits early, this is logged as an error.
The exit handling functionality only works in CK_FORK mode.
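A sketch of both, assuming <signal.h> is included for SIGABRT and that tc_core is the test case being built:

START_TEST(test_aborts)
{
    abort();  /* passes only if SIGABRT is actually raised */
}
END_TEST

START_TEST(test_exits_with_3)
{
    exit(3);  /* passes only if the exit status is exactly 3 */
}
END_TEST

/* In the suite setup: */
tcase_add_test_raise_signal(tc_core, test_aborts, SIGABRT);
tcase_add_exit_test(tc_core, test_exits_with_3, 3);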
Looping tests are tests that are called with a new context for each loop iteration. This makes them ideal for table based tests. If loops are used inside ordinary tests to test multiple values, only the first error will be shown before the test exits. However, looping tests allow for all errors to be shown at once, which can help out with debugging.
Adding a test with tcase_add_loop_test() instead of tcase_add_test() will make the test function the body of a for loop, with the addition of a fork before each call. The loop variable _i is available for use inside the test function; for example, it could serve as an index into a table. For failures, the iteration which caused the failure is available in error messages and logs.
Start and end values for the loop are supplied when adding the test. The values are used as in a normal for loop. Below is some pseudo-code to show the concept:
for (_i = tfun->loop_start; _i < tfun->loop_end; _i++)
{
    fork();       /* New context */
    tfun->f(_i);  /* Call test function */
    wait();       /* Wait for child to terminate */
}
An example of looping test usage follows:
static const int primes[5] = {2, 3, 5, 7, 11};

START_TEST (check_is_prime)
{
    ck_assert (is_prime (primes[_i]));
}
END_TEST

...
tcase_add_loop_test (tcase, check_is_prime, 0, 5);
Looping tests work in CK_NOFORK mode as well, but without the forking. This means that only the first error will be shown.
To be certain that a test won’t hang indefinitely, all tests are run with a timeout, the default being 4 seconds. If the test is not finished within that time, it is killed and logged as an error.
The timeout for a specific test case, which may contain multiple unit tests, can be changed with the tcase_set_timeout() function. The default timeout used for all test cases can be changed with the environment variable CK_DEFAULT_TIMEOUT, but this will not override an explicitly set timeout. Another way to change the timeout length is the CK_TIMEOUT_MULTIPLIER environment variable, which multiplies all timeouts, including those set with tcase_set_timeout(), by the supplied integer value. All timeout arguments are in seconds, and a timeout of 0 seconds turns off the timeout functionality. On systems that support it, the timeout can be specified with nanosecond precision; otherwise, second precision is used.
Test timeouts are only available in CK_FORK mode.
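For example, a test case containing slow tests can be given a larger budget (a sketch; the 30-second value is arbitrary):

TCase *tc_slow = tcase_create("Slow");

/* Allow each unit test in this case up to 30 seconds. */
tcase_set_timeout(tc_slow, 30);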
The term code coverage refers to the extent that the statements of a program are executed during a run. Thus, test coverage refers to code coverage when executing unit tests. This information can help you to do two things: find the parts of your program that are not yet exercised by your unit tests, and detect dead code, i.e. code that is never executed at all.
Check itself does not provide any means to determine this test coverage; rather, this is the job of the compiler and its related tools. With gcc this information is easy to obtain, and other compilers should provide similar facilities. Using gcc, first enable test coverage profiling when building your source by specifying the ‘-fprofile-arcs’ and ‘-ftest-coverage’ switches:
$ gcc -g -Wall -fprofile-arcs -ftest-coverage -o foo foo.c foo_check.c
You will see that an additional ‘.gcno’ file is created for each ‘.c’ input file. After running your tests the normal way, a ‘.gcda’ file is created for each ‘.gcno’ file. These contain the coverage data in a raw format. To combine this information and a source file into a more readable format, you can use the gcov utility:
$ gcov foo.c
This will produce the file ‘foo.c.gcov’ which looks like this:
        -:   41:  * object */
       18:   42:     if (ht->table[p] != NULL) {
        -:   43:         /* replaces the current entry */
    #####:   44:         ht->count--;
    #####:   45:         ht->size -= ht->table[p]->size +
    #####:   46:             sizeof(struct hashtable_entry);
As you can see this is an annotated source file with three columns: usage information, line numbers, and the original source. The usage information in the first column can either be ’-’, which means that this line does not contain code that could be executed; ’#####’, which means this line was never executed although it does contain code—these are the lines that are probably most interesting for you; or a number, which indicates how often that line was executed.
This is of course only a very brief overview, but it should illustrate how determining test coverage generally works, and how it can help you. For more information or help with other compilers, please refer to the relevant manuals.
It is possible to determine if any code under test leaks memory during a test. Check itself does not have an API for memory leak detection, however Valgrind can be used against a unit testing program to search for potential leaks.
Before discussing memory leak detection, "memory leak" should first be better defined. There are two primary definitions of a memory leak: memory that has been allocated but not freed by the time the program exits, and memory to which all references have been lost, so that the program can no longer free it.
Valgrind uses the second definition by default when defining a memory leak. These leaks are the ones which are likely to cause a program issues due to heap depletion.
If one wanted to run Valgrind against a unit testing program to determine if leaks are present, the following invocation of Valgrind will work:
valgrind --leak-check=full ${UNIT_TEST_PROGRAM}
...
==3979== LEAK SUMMARY:
==3979==    definitely lost: 0 bytes in 0 blocks
==3979==    indirectly lost: 0 bytes in 0 blocks
==3979==      possibly lost: 0 bytes in 0 blocks
==3979==    still reachable: 548 bytes in 24 blocks
==3979==         suppressed: 0 bytes in 0 blocks
In that example, there were no "definitely lost" memory leaks found. However, why would there be such a large number of "still reachable" memory leaks? It turns out this is a consequence of using fork() to run each unit test in its own process memory space, which Check does by default on platforms where fork() is available.
Consider the example where a unit test program creates one suite with one test. The flow of the program will look like the following:
Main process:              Unit test process:

create suite
srunner_run_all()
fork unit test             unit test process created
wait for test              start test
...                        end test
...                        exit(0)
test complete
report result
free suite
exit(0)
The unit testing process has a copy of all memory that the main process allocated. In this example, that includes the suite allocated in main(). When the unit testing process calls exit(0), the suite allocated in main() is reachable but not freed. As the unit test has no reason to do anything besides die when its test is finished, and it has no reasonable way to free everything before it dies, Valgrind reports that some memory is still reachable but not freed.
If the "still reachable" memory leaks are a concern, and one required that
the unit test program report that there were no memory leaks regardless
of the type, then the unit test program needs to run without fork. To
accomplish this, either define the CK_FORK=no
environment variable,
or use the srunner_set_fork_status()
function to set the fork mode
as CK_NOFORK
for all suite runners.
Running the same unit test program with fork() disabled results in the following:
CK_FORK=no valgrind --leak-check=full ${UNIT_TEST_PROGRAM}
...
==4924== HEAP SUMMARY:
==4924==     in use at exit: 0 bytes in 0 blocks
==4924==   total heap usage: 482 allocs, 482 frees, 122,351 bytes allocated
==4924==
==4924== All heap blocks were freed -- no leaks are possible
Check supports an operation to log the results of a test run. To use test logging, call the srunner_set_log() function with the name of the log file you wish to create:
SRunner *sr;

sr = srunner_create (make_s1_suite ());
srunner_add_suite (sr, make_s2_suite ());
srunner_set_log (sr, "test.log");
srunner_run_all (sr, CK_NORMAL);
In this example, Check will write the results of the run to ‘test.log’. The print_mode argument to srunner_run_all() is ignored during test logging; the log will contain a result entry, organized by suite, for every test run. Here is an example of test log output:
Running suite S1
ex_log_output.c:8:P:Core:test_pass: Test passed
ex_log_output.c:14:F:Core:test_fail: Failure
ex_log_output.c:18:E:Core:test_exit: (after this point) Early exit with return value 1
Running suite S2
ex_log_output.c:26:P:Core:test_pass2: Test passed
Results for all suites run:
50%: Checks: 4, Failures: 1, Errors: 1
Another way to enable test logging is to use the CK_LOG_FILE_NAME environment variable. When it is set, tests will be logged to the specified file name. If a log file is specified with both CK_LOG_FILE_NAME and srunner_set_log(), the name provided to srunner_set_log() will be used.
If the log name is set to "-", either via srunner_set_log() or CK_LOG_FILE_NAME, the log data will be printed to stdout instead of to a file.
4.12.1 XML Logging
4.12.2 TAP Logging
The log can also be written in XML. The following functions define the interface for XML logs:
void srunner_set_xml (SRunner *sr, const char *fname);
int srunner_has_xml (SRunner *sr);
const char *srunner_xml_fname (SRunner *sr);
XML output is enabled by a call to srunner_set_xml() before the tests are run. Here is an example of an XML log:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="http://check.sourceforge.net/xml/check_unittest.xslt"?>
<testsuites xmlns="http://check.sourceforge.net/ns">
  <datetime>2012-10-19 09:56:06</datetime>
  <suite>
    <title>S1</title>
    <test result="success">
      <path>.</path>
      <fn>ex_xml_output.c:10</fn>
      <id>test_pass</id>
      <iteration>0</iteration>
      <duration>0.000013</duration>
      <description>Core</description>
      <message>Passed</message>
    </test>
    <test result="failure">
      <path>.</path>
      <fn>ex_xml_output.c:16</fn>
      <id>test_fail</id>
      <iteration>0</iteration>
      <duration>-1.000000</duration>
      <description>Core</description>
      <message>Failure</message>
    </test>
    <test result="error">
      <path>.</path>
      <fn>ex_xml_output.c:20</fn>
      <id>test_exit</id>
      <iteration>0</iteration>
      <duration>-1.000000</duration>
      <description>Core</description>
      <message>Early exit with return value 1</message>
    </test>
  </suite>
  <suite>
    <title>S2</title>
    <test result="success">
      <path>.</path>
      <fn>ex_xml_output.c:28</fn>
      <id>test_pass2</id>
      <iteration>0</iteration>
      <duration>0.000011</duration>
      <description>Core</description>
      <message>Passed</message>
    </test>
    <test result="failure">
      <path>.</path>
      <fn>ex_xml_output.c:34</fn>
      <id>test_loop</id>
      <iteration>0</iteration>
      <duration>-1.000000</duration>
      <description>Core</description>
      <message>Iteration 0 failed</message>
    </test>
    <test result="success">
      <path>.</path>
      <fn>ex_xml_output.c:34</fn>
      <id>test_loop</id>
      <iteration>1</iteration>
      <duration>0.000010</duration>
      <description>Core</description>
      <message>Passed</message>
    </test>
    <test result="failure">
      <path>.</path>
      <fn>ex_xml_output.c:34</fn>
      <id>test_loop</id>
      <iteration>2</iteration>
      <duration>-1.000000</duration>
      <description>Core</description>
      <message>Iteration 2 failed</message>
    </test>
  </suite>
  <suite>
    <title>XML escape " ' < > & tests</title>
    <test result="failure">
      <path>.</path>
      <fn>ex_xml_output.c:40</fn>
      <id>test_xml_esc_fail_msg</id>
      <iteration>0</iteration>
      <duration>-1.000000</duration>
      <description>description " ' < > &</description>
      <message>fail " ' < > & message</message>
    </test>
  </suite>
  <duration>0.001610</duration>
</testsuites>
XML logging can be enabled by an environment variable as well. If the CK_XML_LOG_FILE_NAME environment variable is set, the XML test log will be written to the specified file name. If an XML log file is specified with both CK_XML_LOG_FILE_NAME and srunner_set_xml(), the name provided to srunner_set_xml() will be used.
If the log name is set to "-", either via srunner_set_xml() or CK_XML_LOG_FILE_NAME, the log data will be printed to stdout instead of to a file.
If both plain text and XML log files are specified, by any of the above methods, then Check will log to both files. In other words, logging in plain text and XML format simultaneously is supported.
The log can also be written in Test Anything Protocol (TAP) format. Refer to the TAP Specification for information on valid TAP output and parsers of TAP. The following functions define the interface for TAP logs:
void srunner_set_tap (SRunner *sr, const char *fname);
int srunner_has_tap (SRunner *sr);
const char *srunner_tap_fname (SRunner *sr);
TAP output is enabled by a call to srunner_set_tap() before the tests are run. Here is an example of a TAP log:
ok 1 - mytests.c:test_suite_name:my_test_1: Passed
ok 2 - mytests.c:test_suite_name:my_test_2: Passed
not ok 3 - mytests.c:test_suite_name:my_test_3: Foo happened
ok 4 - mytests.c:test_suite_name:my_test_1: Passed
1..4
TAP logging can be enabled by an environment variable as well. If the CK_TAP_LOG_FILE_NAME environment variable is set, the TAP test log will be written to the specified file name. If a TAP log file is specified with both CK_TAP_LOG_FILE_NAME and srunner_set_tap(), the name provided to srunner_set_tap() will be used.
If the log name is set to "-", either via srunner_set_tap() or CK_TAP_LOG_FILE_NAME, the log data will be printed to stdout instead of to a file.
If both plain text and TAP log files are specified, by any of the above methods, then Check will log to both files. In other words, logging in plain text and TAP format simultaneously is supported.
Check supports running test suites with subunit output. This can be useful to combine test results from multiple languages, to perform programmatic analysis on the results of multiple Check test suites, or to otherwise handle test results programmatically. Using subunit with Check is very straightforward. There are two steps:

1) In your Check test suite driver, pass CK_SUBUNIT as the output mode for your srunner.
SRunner *sr;

sr = srunner_create (make_s1_suite ());
srunner_add_suite (sr, make_s2_suite ());
srunner_run_all (sr, CK_SUBUNIT);
2) Setup your main language test runner to run your check based test executable. For instance using python:
import subunit

class ShellTests(subunit.ExecTestCase):
    """Run some tests from the C codebase."""

    def test_group_one(self):
        """./foo/check_driver"""

    def test_group_two(self):
        """./foo/other_driver"""
In this example, running the test suite ShellTests in python (using any test runner - unittest.py, tribunal, trial, nose or others) will run ./foo/check_driver and ./foo/other_driver and report on their result.
Subunit is hosted on launchpad - the subunit project there contains bug tracker, future plans, and source code control details.