However, as Dijkstra noted, testing can never prove the absence of bugs; it can only prove their presence. Testing is essentially a search for a counterexample: a single counterexample suffices to refute a proof (http://www.amazon.com/Proofs-Refutations-Logic-Mathematical-Discovery/dp/0521290384/ref=sr_1_1?ie=UTF8&qid=1333823887&sr=8-1). The same holds for test-driven development (TDD): a unit test will never prove that you met a requirement; it can only demonstrate that you did not. When your test path includes black-box components without proven deterministic behavior (which happens more often than you might think), a passing test only reflects the outcome of that single run and gives no indication about future runs; you would need repeated samples to establish a statistical confidence level that the requirement has been met. TDD also requires significant effort: because the test comes first, there is no tooling that can help you derive the tests. And the argument that TDD yields high code coverage is somewhat flawed, since I have seen many developers focus only on the 'positive paths' described in the requirements.
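To make the 'positive path' pitfall concrete, here is a small illustrative sketch (the function and test names are hypothetical, chosen only for this example): a buggy implementation sails through its happy-path unit test, yet a single counterexample refutes it.

```python
def clamp_to_positive(x):
    # Buggy implementation: the requirement is "never return a negative
    # number" (i.e. max(x, 0)), but only the positive path was considered.
    return x

def test_clamp_positive_path():
    # A 'positive path' test, straight from the requirement's happy case.
    # It passes, which proves nothing about the requirement as a whole.
    assert clamp_to_positive(5) == 5

test_clamp_positive_path()           # passes: no counterexample found here...
assert clamp_to_positive(-3) == -3   # ...yet -3 is a counterexample (should be 0)
```

The passing test gives a false sense of security; the single negative input is all it takes to show the requirement was never met.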
As the following article puts it (http://leansoftwareengineering.com/2007/04/24/start-with-something-simple/), TDD can be considered DbC for dummies, and DbC is formal specification for dummies. I prefer DbC to TDD: TDD searches for counterexamples, whereas DbC focuses on actually defining the contracts. DbC also offers the potential for tooling that automatically derives test cases (Pex, for example) and automatically checks for contract violations.
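As a rough sketch of what defining a contract (rather than hunting for counterexamples) looks like, here is a minimal DbC-style decorator in Python; the `contract` helper and `int_sqrt` example are my own illustration, not part of any library mentioned above.

```python
import functools
import math

def contract(pre, post):
    # Minimal DbC-style decorator: check the precondition before the call
    # and the postcondition after it; a violation raises AssertionError.
    def decorate(fn):
        @functools.wraps(fn)
        def wrapped(*args):
            assert pre(*args), f"precondition violated: {args}"
            result = fn(*args)
            assert post(result, *args), f"postcondition violated: {result}"
            return result
        return wrapped
    return decorate

@contract(pre=lambda x: x >= 0,
          post=lambda r, x: r >= 0 and r * r <= x < (r + 1) * (r + 1))
def int_sqrt(x):
    # The contract above *states the requirement*: for non-negative x,
    # return the largest integer r with r*r <= x.
    return math.isqrt(x)
```

Note how the pre- and postcondition spell out the requirement itself, so every call is checked against it, whereas a unit test would only probe a handful of hand-picked inputs.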
So step up the evolutionary ladder of dummies to DbC, and keep TDD as a complementary approach.