Some classes interact with a database via a connection object. A connection object receives an sql statement, performs the query and returns the result.
How to test the data-access class? You could mock the connection and set sql expectations something like this:
<?php
class MyTestCase extends UnitTestCase {
    function testSomething() {
        // ... create the mock connection and pass it to the class under test ...
        $mock_connection->expectOnce(
            'query',
            array('select foo from bar'));
        // ... exercise the tested class so that it issues its query ...
    }
}
However, there is a big problem. The expectOnce call performs a string comparison between the expected and actual sql, and this is far too sensitive. Different strings can be semantically identical as sql. For example, these two statements (an illustrative pair) differ only in whitespace and keyword case:
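select foo from bar where id=1

SELECT foo  FROM bar  WHERE id = 1

The database treats these as the same query; a string comparison treats them as two different strings.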
Thus, tests may fail when they should pass. False negatives such as these are a nuisance. Although the tests are complaining, the code is actually working fine.
In test-first programming a new behaviour is specified in a (failing) test and then some code is written to make the test pass. With a mocked connection, an sql query would first be defined in a mock expectation and then a class would be written to satisfy that expectation, producing exactly the same sql. That's all very self-referential. Nothing ever gets near a real database. In use, when the code eventually does collide with the target schema, will the query cause the desired change or fetch the correct data? We don't know. We don't even know if it's valid sql. If a bad query were specified, and the code produced the same, the test would still pass.
You might think this won't matter. You know how to write sql, don't you? For trivially simple queries you probably do, but take a look at the example below. Could you really write a mock object expectation like that off the top of your head? It would be impossible to check without running it past a real database.
Even if we're sure the query is a valid sql statement, it could still be inadequate in the context of the target schema. Perhaps a "foo" table doesn't exist. Perhaps a tinytext field is just too tiny and data is being truncated. If we mock, we just don't know how our queries will behave. Maybe the query was initially correct but now someone has altered a table definition, inadvertently breaking something in the app. The tests don't touch the real database and so they'll still pass. There is no failing test to give you instant feedback, but there should be.
The first problem (different-string-same-query) can produce false negatives. This one, a false positive, is much worse. There could be a problem, possibly a very serious one, but no warning will be given. I daresay you'll find out eventually, when all the April financial records go missing and you discover a series of obscene messages from your boss on the answer phone.
Still not convinced that mocking a db connection is a bad idea? Below are two versions of the same query (a real query taken from Rephactor). One is a mock expectation and the other is the actual sql received by the mock. They are not equal, and hence you've got a failing test to fix. Can you spot the difference?
# THE MOCK EXPECTATION: select path as path, group_concat(offset) as offsets from files inner join (select c.name, c.fid, m.name_offset as offset from classes as c inner join methods as m using(cid) where m.name='changeMe' and c.hid in ( select distinct c.hid from classes as c inner join classes_interfaces as ci using(cid) inner join ( select candidates.iid, candidates.name from ( select distinct i.iid, i.name from interfaces as i inner join classes_interfaces as ci using(iid) inner join classes as c using(cid) where find_in_set(c.name, ( select group_concat(name) as members from classes as c group by hid having find_in_set('Foo', members)))) as candidates inner join methods as m using(iid) where m.name='changeMe') as links using(iid)) union select ifo.name, ifo.fid, m.name_offset as offset from ( select distinct i.iid, i.name, i.fid from interfaces as i inner join classes_interfaces as ci using(iid) inner join ( select c.cid, c.name from classes as c where c.hid in ( select distinct c.hid from classes as c inner join classes_interfaces as ci using(cid) inner join ( select candidates.iid, candidates.name from ( select distinct i.iid, i.name from interfaces as i inner join classes_interfaces as ci using(iid) inner join classes as c using(cid) where find_in_set(c.name, ( select group_concat(name) as members from classes as c group by hid having find_in_set('Foo', members)))) as candidates inner join methods as m using(iid) where m.name='changeMe') as links using(iid))) as clh on (clh.cid=ci.cid)) as ifo inner join methods as m using(iid) where m.name='changeMe') as items using(fid) group by fid order by path
# THE ACTUAL SQL PRODUCED BY THE TESTED CLASS: select path as path, group_concat(offset) as offsets from files inner join (select c.name, c.fid, m.name_offset as offset from classes as c inner join methods as m using(cid) where m.name='changeMe' and c.hid in ( select distinct c.hid from classes as c inner join classes_interfaces as ci using(cid) inner join ( select candidates.iid, candidates.name from ( select distinct i.iid, i.name from interfaces as i inner join classes_interfaces as ci using(iid) inner join classes as c using(cid) where find_in_set(c.name, ( select group_concat(name) as members from classes as c group by hid having find_in_set('Foo', members)))) as candidates inner join methods as m using(iid) where m.name='changeMe') as links using(iid)) union select ifo.name, ifo.fid, m.name_offset as offset from ( select distinct i.iid, i.name, i.fid from interfaces as i inner join classes_interfaces as ci using(iid) inner join ( select c.cid, c.name from classes as c where c.hid in ( select distinct c.hid from classes as c inner join classes_interfaces as ci using(cid) inner join ( select candidates.iid, candidates.name from ( select distinct i.iid, i.name from interfaces as i inner join classes_interfaces as ci using(iid) inner join classes as c using(cid) where find_in_set(c.name, ( select group_concat(name) as members from classes as c group by hid having find_in_set('Foo', members)))) as candidates inner join methods as m using(iid) where m.name='changeMe') as links using(iid))) as clh on (clh.cid=ci.cid)) as ifo inner join methods as m using(iid) where m.name='changeMe') as items using(fid) group by fid order by path
The second version has just a single extra space and that's enough to throw a string expectation. Did you spot it? You will have to if you go the mock route.
Even when you get this test passing you're not much better off, as explained above. With a mocked connection, queries are simply being plucked out of thin air and asserted as fact without any supporting evidence. It may seem like they're being tested but a vital part of the application logic has completely escaped from the test harness.
The fundamental problem is the information density of the sql statement: a series of potentially complex instructions for a database server. It's not a simple string, so why treat it like one? A string equality check on an sql query is like testing a php class by asserting a class definition character-by-character read from a file rather than instantiating an object and putting it through its paces. Crazy.
Mocking would work if you could write an SqlExpectation class. This would have to duplicate every last detail of the target dbms, which is definitely a non-trivial job ;) Further, different databases all have their own opinions on what constitutes valid sql, so you'd also need a new SqlExpectation class for each one you need to support (admittedly there would be some overlap). You'd even need new flavours for different versions of the same dbms.
In practice, this is an impossible task.
Luckily there's a foolproof query parser all ready to go: the database server itself. Forget about mocking and use a real connection object and a real database.
This requires some extra work to create fixtures. You'll need to create databases, tables, and sample data in setUp. Be careful to drop it all again in tearDown: each new test must always begin from a clean slate. Some people simply truncate tables between tests to get rid of fixture data. Personally I prefer to drop the whole database. It's simpler and less error prone.
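As a rough sketch of the shape this takes (assuming a mysqli connection, a throwaway database name and a made-up FooMapper class under test), it might look something like this:

<?php
class FooMapperTest extends UnitTestCase {
    private $connection;

    function setUp() {
        // Build a disposable database with some minimal sample data.
        $this->connection = new mysqli('localhost', 'test_user', 'secret');
        $this->connection->query('create database fixture_db');
        $this->connection->select_db('fixture_db');
        $this->connection->query('create table bar (foo varchar(30))');
        $this->connection->query("insert into bar (foo) values ('baz')");
    }

    function tearDown() {
        // Drop the whole database so the next test starts from a clean slate.
        $this->connection->query('drop database fixture_db');
        $this->connection->close();
    }

    function testReadsFooFromBar() {
        $mapper = new FooMapper($this->connection);
        $this->assertEqual('baz', $mapper->fetchFoo());
    }
}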
Obviously you'll need to use a schema identical to that of the production database. For example, I usually keep schema dumps in version control and use these to build fixtures. An installer/fixture class simply reads through a given folder, creating a table for each sql file found there. Of course you will also need to work out a system for updating the production database when the schema changes. Hopefully that won't happen too often...
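Such a class might look something like this (a sketch; the class and folder names are illustrative and it simply runs every sql file it finds):

<?php
class SchemaInstaller {
    private $connection;

    function __construct(mysqli $connection) {
        $this->connection = $connection;
    }

    // Run every sql file in the folder: one create statement per file.
    function install($folder) {
        foreach (glob($folder . '/*.sql') as $file) {
            $this->connection->query(file_get_contents($file));
        }
    }
}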
Note that the $schema->install() code which you write for database fixtures can often be used in installer scripts, and vice versa.
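For example, an installer script reusing the hypothetical SchemaInstaller above could be as small as:

<?php
// install.php - build the schema from the version-controlled dumps
require_once 'SchemaInstaller.php';

$connection = new mysqli('localhost', 'app_user', 'secret', 'app_db');
$schema = new SchemaInstaller($connection);
$schema->install('schema/tables');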
People often worry about the speed of these kinds of tests. Personally I've never found that to be a big problem although this will depend on the size of your test suite. Either way it doesn't really matter because you don't have a choice. Code must be properly tested and that goes double for anything which touches a database. Data is precious.
Sometimes these are called integration tests rather than unit tests. They keep the same narrow focus on "units" of behaviour as the mocked-connection counterparts I've been criticising so heavily, and that's how I think of them. However, because they use an external resource (the database), you could also say they are integration tests.
Finally, note that the advice not to mock specifically refers to mocking connection objects in unit tests of data access classes. It's OK to mock the data access objects themselves in other tests.
Watch out for connection state: affected rows, the current error, the currently selected database and so on. If the connection used for fixture creation is also passed to the tested class, state left over from the fixture set-up queries could contaminate the test. Similarly, a series of tests can contaminate each other if different instances of the tested class all share the same connection.
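One simple way to reduce the risk is to give the fixtures and the tested class separate connections, along these lines (again just a sketch with mysqli):

<?php
class FooMapperTest extends UnitTestCase {
    private $fixture_connection;
    private $mapper_connection;

    function setUp() {
        // One connection is used only for building fixtures...
        $this->fixture_connection = new mysqli('localhost', 'test_user', 'secret');
        $this->fixture_connection->query('create database fixture_db');
        $this->fixture_connection->select_db('fixture_db');
        $this->fixture_connection->query('create table bar (foo varchar(30))');

        // ...and a second, fresh connection goes to the tested class, so no
        // state left over from the fixture queries can leak into the test.
        $this->mapper_connection = new mysqli('localhost', 'test_user', 'secret', 'fixture_db');
    }

    function tearDown() {
        $this->fixture_connection->query('drop database fixture_db');
        $this->fixture_connection->close();
        $this->mapper_connection->close();
    }
}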
In practice, the risk is low but it does exist. Data is precious and anything to do with data storage ought to go the extra mile.
Connections should always be closed when you're finished with them. If not, you could end up with hundreds on the go in a large test suite and this can crash the db server.
What sort of data should you test with? A unit test ought to make a specific point and usually you only need some minimal data to do that. For example, you could try saving an O'Reilly value to see if sql strings are being escaped. A micro fixture might create just a couple of rows of data rather than the tens of thousands found in the production database. All you need to make a point.
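A test along those lines might look like this (slotting into a fixture like the sketches above; FooMapper and its save()/findAll() methods are stand-ins):

function testApostrophesAreEscaped() {
    $mapper = new FooMapper($this->connection);
    $mapper->save("O'Reilly");
    // If quoting is broken the insert fails or the value comes back mangled.
    $this->assertEqual(array("O'Reilly"), $mapper->findAll());
}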
Real data is more complex and sometimes this can reveal requirements which you have failed to assert. However it's unpredictably complex — it might not contain an O'Reilly at all. Therefore I think it's best to use your own sample data in unit tests. These should be small, simple fixtures finely tuned to express a specific requirement. Later, if you find something you've missed, just add a new test.
Make sure that the setUp/tearDown cycle always clears up properly after the previous test. The simplest and most reliable way to do this is just to drop the entire database between tests. There can be no doubt with that. The last thing you want is to get bogged down debugging the tests themselves.
If you're testing with a real database server as described above, databases, tables and data are created and destroyed in setUp() and tearDown(). The word "destroyed" should get your careful attention. If there are other projects on the server, a simple mistake with a database name could lead to the wrong item being dropped at tear down. That may not be so bad on a local development box (you do back up regularly..?), but it's quite another matter if you want to run some tests on a production server where real, live data is at risk. And your job.
Since the tested code could be malfunctioning in almost any way imaginable, ideally some kind of sandbox should be set up on the db server to protect non-fixture data. One way to do this is to decorate the standard connection with some additional authorisation checks. There is one simple rule: deny any query which attempts to manipulate something which was not created by the test case (ie the non-fixture dbs, tables and data). Read-only queries don't need to be restricted.
Once you have some authorisation code set up to monitor creates and drops, it's fairly easy to add an automatic fixture clear up feature. The authorisation checks will need to intercept sql queries and identify their type. Whilst you're doing all that, some kind of tally object can listen out for (successfully executed) creates and drops. At tearDown, anything left in the tally can be dropped.
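Roughly, such a decorator might look like this (a much simplified sketch: it only recognises whole databases, assumes a mysqli-style inner connection and its query inspection is far cruder than a real one would need to be):

<?php
class SandboxConnection {
    private $connection;
    private $tally = array();   // databases created by the current test case

    function __construct($connection) {
        $this->connection = $connection;
    }

    function query($sql) {
        // Deny any attempt to drop something the test case did not create.
        if (preg_match('/^\s*drop\s+database\s+(\w+)/i', $sql, $matches)
                && !in_array($matches[1], $this->tally)) {
            throw new Exception('refusing to drop non-fixture database: ' . $matches[1]);
        }
        $result = $this->connection->query($sql);

        // Tally only the creates and drops which actually succeeded.
        if ($result && preg_match('/^\s*create\s+database\s+(\w+)/i', $sql, $matches)) {
            $this->tally[] = $matches[1];
        }
        if ($result && preg_match('/^\s*drop\s+database\s+(\w+)/i', $sql, $matches)) {
            $this->tally = array_diff($this->tally, array($matches[1]));
        }
        return $result;
    }

    // Called from tearDown: drop anything left in the tally.
    function dropFixtures() {
        foreach ($this->tally as $database) {
            $this->connection->query('drop database ' . $database);
        }
        $this->tally = array();
    }
}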
Newly-written code will have the odd fatal error and this will end the script before it has a chance to clean up. A further refinement is to persist the tally state so that the next test run can clear up any vestigial fixtures.
The above ideas have been incorporated into the database fixture tools.