Caveat emptor. That applies to B2B marketing executives considering lead generation/demand management solutions. Here’s why:
A couple of weeks ago, I received an email wanting to show me “how 5000 customers increased leads by 420%.” Reading further, the sender said an “independent MIT study” backed up the claim. Former Forrester analyst that I am, I had to think “This sounds too good to be true.”
So I downloaded and read the report. It’s actually quite good — far better than most research, whitepapers, and the like that I’ve seen. But what bothered me was how the vendor-sponsor exaggerated the results to a degree I felt was unnecessary. Drawing on my experience, I’d like to point out common faults in how the sponsor conveyed their findings. These faults and exaggerations can confuse readers and make the results appear suspect. Which is too bad, because the findings are relevant and speak to the power of lead management automation.
When you read similar reports, always ask the following to keep from falling into the (sometimes obvious) traps vendors can set:
1) What’s the base? Knowing how many people, companies, etc. are represented in the research is crucial. You should also know whether the sample represents a demographic similar to or different from yours. This research hedges here. A lot.
The email’s headline says “5000 customers” — yet the report states “214 respondents” to the survey (page 24) and is unclear about how many participants made up the sample used for the Web site analysis of visitors and leads. Upon further inspection, you find that 93% of the sample consists of companies with 200 employees or fewer, with almost 75% having 25 employees or fewer. If you work at a big company like I do, these results apply far less to you than they would to someone at a small start-up.
2) How do the overall findings hold up when you scrutinize the data? You need to compare the “headline” findings to the details to see if the research substantiates the claims. Or to see if there is something unique about the data that influences the results.
In this research, small numbers have a big effect on percentages and multipliers. Basically, if your Web site starts out gaining two leads in the first month, the vendor’s technology will supposedly help you generate 80+ leads about a year later (see Figure 5). While the report never states it clearly, this calculation is probably how the email author arrives at the “4.2 times more leads” figure. As you can see, the claim leaves out important details.
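To see concretely how a small base inflates growth claims, here is a quick, hypothetical calculation. The numbers below are illustrative, not taken from the report; the point is that the same modest absolute gain looks dramatic when the starting count is tiny.

```python
# Hypothetical illustration of the small-base effect: a tiny starting
# number makes percentage growth look enormous.
def growth_stats(before, after):
    """Return (multiplier, percent increase, absolute gain) for a lead count."""
    return after / before, (after - before) / before * 100, after - before

# A small site going from 2 leads/month to 10 leads/month:
mult, pct, gain = growth_stats(2, 10)
print(f"{mult:.1f}x the leads, +{pct:.0f}%, but only {gain} extra leads")

# A big company going from 1,000 to 1,400 leads/month:
mult, pct, gain = growth_stats(1000, 1400)
print(f"{mult:.1f}x the leads, +{pct:.0f}%, and {gain} extra leads")
```

The small site posts a headline-friendly “+400%” while adding just eight leads; the big company adds 400 leads but shows a far humbler 40%. Always ask for the absolute numbers behind the percentages.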
Which is unfortunate because the survey results (page 12) show that 83% of 214 respondents said they saw an increase in leads since implementing the “inbound marketing software” and 32% of those said they saw their leads increase by 50%. Again, it would be good to know the demographics of those respondents and how long it took them to achieve those results. The report explores neither of those factors to my satisfaction.
3) How credible is the source/authors? Beyond looking at whether the methodology holds water, it’s important to understand who wrote the research, who sponsored it, and how the project might have come about in the first place.
While the email claims an “independent MIT study,” the reality is that two MBA graduate degree candidates probably did this work for a class project. I don’t mean any disrespect to Mr. Paisner and Ms. Derosiers. They appear to be excellent students, judging by the clear, concise layout of the report. Both deserve an “A” grade from their respective professors at MIT and Babson. But linking the MIT brand to this report in this manner is a bit of a stretch.
If I were to guess, I would say that the two have some personal connection to Hubspot – maybe working there as interns. The source of the project was likely more casual than a formal solicitation of research from MBA graduate programs by Hubspot. This is the type of skepticism you should apply when questioning the origin of research you read.
What’s the bottom line? I think the lead generation/automation market remains very competitive. Too competitive for the growing opportunity it still represents. Wanting to catch the eye of busy marketing executives, vendors find it very tempting to “highlight” customer results and show that their technology can substantially improve marketing’s impact on the business. But don’t believe everything you read. Marketing automation success depends on hard work, on getting your processes right, and on steady, effective content production. The technology choice is a distant fourth on the list.
I would be interested in hearing about your experience. Feel free to post a comment here sharing examples of how you’ve seen vendors stretch the truth — a little or a lot.
(Full disclosure: While I was an analyst at Forrester, I reviewed Hubspot’s technology — among others — for my Lead Management Automation market overview. I also know Brian Halligan; he spoke at one of my B2B marketing seminars a few years ago. I think Hubspot has an excellent software product well-suited for the small business market. I also think their company performance continues to meet and exceed market expectations.)