Don’t be afraid of Test-Driven Development

April 26, 2019

Throughout my career, the teams I’ve been on have had a wide range of views on using tests while developing code. As I’ve moved between teams, listened to podcasts, and read articles, I’ve assembled some notes that would have been very useful for me in the past. Some of these notes are being compiled into a book that I’ll be selling later this year. (If this sounds interesting, sign up for free content and a discount.)

Even though I could include this in that book, I want to share it now: I realize I took much longer than I should have to get started with testing my code. This is mostly because I was intimidated by rules I felt I had to follow; rules I had to get past in order to let myself explore, try, fail, and then succeed.

If you aren’t testing your code, I want to share this post with the hope that you too may find enjoyment from it.

Iterate, iterate, iterate

First off, remember that testing, just like writing software, is an iterative process. (Or at least it should be!)

Whenever you develop code or write tests, chances are that the first thing you write will get edited or completely replaced. So try not to let yourself be intimidated by the stress of “doing it correctly” or by not knowing what to test. Write down what you know right now, and update it later, when you have a clearer picture of what’s going on.

Second off, write tests first!!!

Write tests... FIRST!

I understand the feeling of not knowing what you are going to test before any software is written. But this is exactly the time to think about what you want to write. If you can’t think of anything, at least write a test that just runs assert False. That is literally how I’ve started several coding sessions, just to get the juices flowing.

Once I have a failing test, my mind is better able to think about what the project really needs. The blank page is no longer a problem. I can think about how I want to call this function, what it should return, and whether there’s a better way to do the thing.

I used to be lenient about when to write tests, but I’ve found that I reason better about what needs to be developed as I write the tests, especially when I write them before the code.

Also, once you get into the practice of writing a test first, it’s easier to keep doing it. As I’ve progressed, writing tests first has become more and more important to me. First, for thinking about what you need. Second, for making sure it works.

I’ve learned this lesson a few times over the years, but it became glaringly clear to me recently, when one of the developers on my team wrote a function that synced our data with a remote service over its API, all without writing a test.

After he wrote it, the two of us worked together to make sure it was ready to deploy. We slowly built up a few tests and spent days debugging this one function. We finished and pushed our code to staging.

Before long, a bug appeared. Rather than jumping straight into the code, we spent time thinking about what could cause it and wrote one test case.

We were relieved when that test reproduced the error, and it helped us fix the bug in under three hours. On top of that, not only did we have confidence that the fix would work, we also felt good that nothing else would be affected by our new code.

A real-life example: Daily report

I want to encourage you to seriously try testing your own code. I was worried for years that I was doing it wrong. I had no idea what would make a good test, and that’s okay. We all go through this.

At the very least, a good test is one that is actually written down and exercises something the code does.

You can improve it over time.

Here’s a real-world example from the project we launched.

This project has a feature that creates a report every day. This is the first test that was committed:

def test_daily_report(db_session):
    response = CommonPolicyService.daily_report()
    assert 'daily_report' in response

In this project, we use pytest to test our code. The db_session in the function parameters is what pytest calls a “fixture”. In this case, it creates a new in-memory database the test will use.
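The exact fixture varies by project; a minimal sketch of an in-memory database fixture using the standard library’s sqlite3 (the real project’s fixture likely wraps an ORM session instead) might look like this:

```python
import sqlite3

import pytest


@pytest.fixture
def db_session():
    # Each test gets a fresh in-memory database; it disappears on close.
    conn = sqlite3.connect(":memory:")
    yield conn
    conn.close()
```

Because the fixture uses yield, the cleanup after it runs for every test, pass or fail.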

The test then runs the daily_report function, which returns a string that represents the absolute path to the report file. The test makes sure the phrase 'daily_report' is in that path.

That’s it! The developer responsible for this piece of code knew they could return that path, and at least for a first try, it was good enough, even if it didn’t test for the existence of that file.

The next improvement to the test was a substantial one:

def test_daily_report(db_session, load_n_policies, monkeypatch):
    load_n_policies(10)
    monkeypatch.setattr(CommonPolicyService, '_send_to_sharefile',
                        lambda path: print(f'Preventing file {path} sending to sharefile.'))
    file_path = CommonPolicyService.daily_report()
    try:
        assert 'daily_report' in file_path
        assert os.path.exists(file_path)
        with open(file_path) as f:
            header = f.readline()
            assert 'State' in header
            for line in f.readlines():
                assert 'Inland' in line
    finally:
        os.remove(file_path)

This time, we have three fixtures for the test. load_n_policies gives us a random selection of (in this case) 10 policies added to the database.
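I haven’t shown load_n_policies itself; one common way to build this kind of fixture is the “factory” pattern, sketched here with placeholder policy numbers (the real fixture inserts CommonPolicy rows through db_session; these names are illustrative):

```python
import pytest


def fake_policy_numbers(n):
    # Stand-in for inserting n CommonPolicy rows into the database.
    return [f"TEST-{i:04d}" for i in range(n)]


@pytest.fixture
def load_n_policies():
    # A "factory" fixture: the test receives this inner function and
    # calls it with however many policies it needs, e.g. load_n_policies(10).
    def _load(n):
        return fake_policy_numbers(n)

    return _load
```

The factory shape is what lets each test pick its own count instead of hard-coding one in the fixture.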

monkeypatch is a wonderful fixture that lets us overwrite something in the environment with something else. In this case, daily_report not only generates the report, but sends it to a server via the _send_to_sharefile function. We’re able to replace that with a lambda function, to prevent us from sending test reports over to the server every time we run this test.
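You can see the same idea outside a test with pytest’s MonkeyPatch class used directly (inside a test you’d just take the monkeypatch fixture, as above); the Service class here is a made-up stand-in:

```python
import pytest


class Service:
    @staticmethod
    def send(path):
        raise RuntimeError("would hit the real server")


mp = pytest.MonkeyPatch()
sent = []
# Swap the network call for a harmless stand-in that records the path.
mp.setattr(Service, "send", sent.append)
Service.send("daily_report.csv")
mp.undo()  # the original send is restored
```

After undo(), calling Service.send raises again, which is exactly the isolation you want between tests.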

Next, this version of the test checks that the file exists, opens it, verifies some of the content, and, thanks to the try/finally, deletes the file whether the test passes or not.

This was a significant improvement. We were no longer sending fake reports to the shared server, we were testing that the file existed, and we were even making sure the file had content. However, it still wasn’t testing everything we needed.

The version of the test that exists at the time I’m writing this is more robust.

@pytest.mark.offline
@pytest.mark.report
def test_daily_report(db_session, load_n_policies, monkeypatch):
    load_n_policies(10)
    monkeypatch.setattr(CommonPolicyService, 'send_to_sharefile',
                        lambda path: print(f'Preventing file {path} sending to sharefile.'))
    file_path = CommonPolicyService.fanshield_daily_report()
    try:
        assert 'daily_report' in file_path
        assert os.path.exists(file_path)

        policies = db_session.query(CommonPolicy.policy_number).all()
        policy_nums = set(p.policy_number for p in policies)

        with open(file_path, newline='') as csvfile:
            reader = csv.DictReader(csvfile)
            for row in reader:
                assert (row['Policy Number'] in policy_nums)
                policy_nums.remove(row['Policy Number'])
    finally:
        os.remove(file_path)

This version adds pytest “marks”, which give us a way to run a certain subset of tests. In this case, if we tell pytest to run “offline” tests or tests dealing with “report” functionality, it will include this test.
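Marks like these are typically registered in the project’s pytest configuration so pytest doesn’t warn about unknown marks; a sketch (the descriptions are my guesses) might look like this, with the subsets selected at the command line via pytest -m offline or pytest -m report:

```ini
# pytest.ini
[pytest]
markers =
    offline: runs without talking to external services
    report: exercises report generation
```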

But the best improvement is that we’re now checking to make sure each policy number in the report is also in our database.

Writing this now, I see we’re not checking that there are no policies left over after we iterate through the report. I’ll be adding that check tomorrow.
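That check is small; here’s a sketch with toy data showing the leftover set the loop leaves behind (in the real test it becomes an assert not policy_nums at the end of the with block):

```python
import csv
import io

# Toy stand-ins: three policies in the "database", two in the "report".
policy_nums = {"A-1", "A-2", "A-3"}
report = "Policy Number\nA-1\nA-2\n"

reader = csv.DictReader(io.StringIO(report))
for row in reader:
    assert row["Policy Number"] in policy_nums
    policy_nums.remove(row["Policy Number"])

# The new check: nothing should remain once the report has been consumed.
leftover = policy_nums
```

With the toy data above, leftover still contains the policy the report missed, which is exactly the bug the new assertion would catch.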

Now it’s your turn

I strongly suggest you start testing your code the next chance you get.

I’ve heard from many developers who enjoy it. Testing has given me confidence in, and excitement about, the code I’ve written, and I want you to experience that too. Give it a try, and let me know if you run into issues. I’d like to help where I can.

← Read other articles