Some marketers say testing isn’t effective. They say, “Look, I’ve tried testing and I haven’t seen any results. I haven’t seen any reason to spend staff time, effort, thought, etc. on testing. Sure, it works for other companies – I’ve seen those case studies, but my list is different.”
If you have heard someone say that, you are (un)lucky enough to know a crabby, crappy marketer.
“Cody,” you might shout, “Though I am not one of the aforementioned marketers, don’t be mean! They might be good marketers with a unique list that can’t be helped via testing.”
Nay! For everyone’s list can be continuously improved via testing (says I).
How did these crabby, crappy marketers become so pessimistic?
Perhaps our crappy marketer did this:
One day, he read a case study that showed a 50% increase in opens for Company X. Company X simply changed their email send time from early morning to overnight and saw the dramatic increase. So the crappy marketer got an idea. “I always send my emails at the same time on the same day,” he thought. “I’ll send to my list at a different time of day and likewise see a 50% increase in revenue.”
Sounds good, but his logic is flawed, for two reasons. First, the crappy marketer skipped straight to the results portion of the case study and never read that Company X planned this test after several weeks of preliminary work. Before creating the test, they analyzed their website traffic and found that most of the people interested in their product were young, phone-owning, wealthy people – i.e., their email subscribers likely check their phones as soon as they wake up.
Second, the excellent Company X team didn’t send to their 2 million person list all at once the first time – they sent to a small segment a week ahead of the big test. They wanted that segment to be large enough to be statistically significant, but not so large that a huge failure would leave their boss upset because they’d lost the company a year’s worth of their salary in one email. They settled on sending at the new time to just 10% of their list. But of course, the crappy marketer skipped that preliminary test, and also skipped the part of the case study where the preliminary results showed a tremendous revenue increase for that 10% segment. Uh oh…
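How large is “large enough to be statistically significant”? The case study doesn’t show the math, but here is a minimal sketch of the kind of back-of-the-envelope sizing a careful team might do, using the standard two-proportion power formula. The baseline open rate, expected lift, and thresholds below are illustrative assumptions, not Company X’s actual numbers.

```python
import math

def sample_size_per_group(p_baseline, p_variant, z_alpha=1.96, z_beta=0.84):
    """Approximate subscribers needed per group to detect a change in open
    rate with a two-proportion z-test (defaults: 5% two-sided significance,
    80% power)."""
    p_bar = (p_baseline + p_variant) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p_baseline * (1 - p_baseline)
                                      + p_variant * (1 - p_variant))) ** 2
    return math.ceil(numerator / (p_baseline - p_variant) ** 2)

# Illustrative numbers: detect a lift from a 20% open rate to 22%.
print(sample_size_per_group(0.20, 0.22))  # ~6,500 subscribers per group
```

Note that 10% of a 2 million person list is 200,000 subscribers – far more than statistically necessary. Past that minimum, the real reason to keep the segment small is capping the downside if the test flops.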
So, after skipping the preliminary segment test, skipping the website traffic demographic review, and skipping off to work to send out his first test (which, given all the skipped best practices, really only tested his luck), the crappy marketer set up his weekly email to go out at 6 am the following morning.
Upon waking up, his 400,000 older, less affluent, and less tech-savvy email subscribers got up, went to the bathroom, and took a shower. They had their morning coffee, read the newspaper, and drove to work – still having not checked their email. Meanwhile, a dozen other marketing teams had been sending those subscribers even more email, and the crappy marketer’s message was sliding down the inbox – alone and hungry for opens. When the crappy marketer came in that morning (after checking his own email first thing when he woke up), he found that his email wasn’t doing very well. Opens and clicks were down 75% and, most importantly, he had to explain to his boss why revenue was down 75% for the mailing. Drat! The lesson our crappy marketer learned from all of this – TESTING DOESN’T WORK!
Don’t be a crabby, crappy marketer. Follow best practices when testing. While not every test will produce a 50% increase in revenue, a well-run test is also unlikely to cause a drastic revenue decrease – a small preliminary segment catches a flop before it ever reaches the whole list. And one more thing – take heart. You have a real competitive advantage, because crabby, crappy marketers are hiding in plain sight in nearly every market, convinced that testing doesn’t work on ‘their list.’
Have you tested your way to greatness? Tell us the story.