Using data to validate start-up ideas: it’s not as difficult as you might think
Welcome back to our blog! This week we bring you a guest contribution from Eli Schwartz of SurveyMonkey. Here’s a bit about Eli before we get to his piece:
Eli Schwartz is the Director of Marketing, APAC for SurveyMonkey, the world’s largest online survey platform. He oversees SurveyMonkey’s marketing efforts in the Asia Pacific region. In addition, he leads the company’s global SEO efforts and strategies across 17 languages. SurveyMonkey serves over 25 million customers worldwide, including 99% of the Fortune 500, and collects over 3 million online survey responses daily.
Read any good startup story and it typically begins with the founder(s) identifying a particular problem and then developing a solution encased (hopefully) within a business model. A creative way of solving a challenge, of course, does not guarantee the success of a startup; for that you need successful execution and mass adoption. Achieving mass adoption requires that a) many people have the same problem and b) the solution you developed works for enough of the masses. To discover whether there is any mass appeal, you are going to need to do market research and validation. Pushing forward with an idea without any validation is a fool’s errand unless you are super lucky or have Steve Jobs’ level of intuition.
The challenge many founders will face when doing their research is that they are so excited about their ideas that they inevitably let their own assumptions and biases creep into their research process. Ineffective research will lead to misguided validation and that is how things like the Ionic Ear end up getting created. It’s not always so easy for the founders to step back and see their own biases when doing their research, so here are six basic steps to follow that can help keep you on track and give you solid data with which to build your foundation.
1) Use surveys. While user interviews are great for getting solid feedback from actual potential users, they aren’t scalable, nor are they very conducive to the kind of quantitative analysis you might need to get investor backing. The better alternative to user interviews is a standardized online survey that is sent to all of your research subjects. An online survey allows you to quickly compare your collected feedback and make decisions. For simple data collection, a tool like Google Docs might be enough, but for more question and analysis options you should use a more advanced tool like SurveyMonkey (disclosure: I work for SurveyMonkey).
Ever since the advent of psychology, experts have been using questionnaires to determine behavioral tendencies, and technology innovation should be no different. If your survey is designed correctly, you should be able to ascertain how people will respond in different scenarios without needing to ask them directly how they would respond. If you were developing a brand new financial product, you couldn’t expect people to respond honestly as to whether they would use it. Instead, you could ask them about their feelings toward existing products on the market and infer how they might deal with your innovation. As an example, this check website discovered that people think checks are more secure than every form of payment other than cash and used that to hone their marketing pitch on security.
2) Do not ask loaded questions. A loaded question is one that pushes people into a response they would not have given on their own. An example of a loaded question could be “How much do you hate when your phone rings during a meeting?” You are assuming that the interviewee has the same sentiment you do toward phones ringing during meetings, and you are asking them to express their displeasure on your scale rather than finding out whether they even share your feelings. A better way to ask this question is “Do you dislike when your phone rings during a meeting?” A follow-up question can be “Can you elaborate on how much you dislike phones ringing in meetings?”
It’s possible that in your own enthusiasm for an idea you may unconsciously bias the questions. The best way to get around this challenge is to have someone proofread the survey and specifically look out for bias. Aside from biased data poisoning your results, you definitely don’t want to go down the path of building something based on faulty logic.
3) Use only one variable in each question. Having more than one variable in a question will lead to answers that are very difficult to analyze. A bad question might be “Do you have a smartphone and will you buy a smart watch this year?” You cannot know whether the respondent is answering based on the smartphone or the smart watch. A better way of asking this question would be to break it into two questions.
Aside from making it easier for the respondent to understand the question, it will also be a lot easier to analyze the data. If there’s only one part of the question you need to analyze, you can simply filter the rest away. Example: Question 1, do you have a smartphone? Question 2, have you considered buying a smart watch this year? You can then filter out everyone who answered no to the smartphone question but yes to the smart watch question; alternatively, depending on your product, those might be very interesting people to understand.
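To make the filtering point concrete, here is a minimal sketch in Python using entirely hypothetical respondent data and field names; any survey tool’s export (CSV, spreadsheet, etc.) could be loaded into a similar structure.

```python
# Hypothetical responses to the two split questions from the example above.
# Each dict is one respondent; the field names are illustrative only.
responses = [
    {"has_smartphone": "yes", "considering_smartwatch": "yes"},
    {"has_smartphone": "no",  "considering_smartwatch": "yes"},
    {"has_smartphone": "yes", "considering_smartwatch": "no"},
    {"has_smartphone": "no",  "considering_smartwatch": "no"},
    {"has_smartphone": "yes", "considering_smartwatch": "yes"},
]

# Because each question carries a single variable, slicing the data is trivial:
smartphone_owners = [r for r in responses if r["has_smartphone"] == "yes"]
owners_considering = [r for r in smartphone_owners
                      if r["considering_smartwatch"] == "yes"]

print(len(smartphone_owners))    # 3
print(len(owners_considering))   # 2
```

Had the two variables been fused into one yes/no question, no amount of filtering could separate the smartphone answer from the smart watch answer afterwards.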
4) Use Likert scales. For questions that cannot be answered with a simple yes or no, the best way to analyze degrees of an opinion is to use a Likert scale. An example of a Likert scale answer you have probably seen before is a question that asks how likely you are to take a specific action, with answers ranging from “not at all likely” all the way to “extremely likely.” Having a range of responses might be the best way to really validate whether the marketplace is as bothered as you are by the annoyance you are setting out to address with your startup. A Likert scale should always have an odd number of points, and the standard recommendation is to use five.
When you analyze your data, you can always group together similar responses such as “very likely” with “extremely likely.” Using scales allows respondents to express their true feelings without being forced into one side of a binary answer. For example, a question like “Would you purchase from this website?” does not deserve a bare yes or no; rather, users should be free to answer with some degree of ambiguity, ranging from “not at all likely” to “extremely likely.”
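Grouping Likert responses is a one-liner once the answers are tallied. The sketch below uses made-up answers to illustrate a common “top-two-box” summary, where the two strongest responses are counted together.

```python
from collections import Counter

# Hypothetical answers to "How likely are you to purchase from this website?"
# on a standard five-point Likert scale.
answers = ["not at all likely", "somewhat likely", "very likely",
           "extremely likely", "neutral", "very likely", "somewhat likely",
           "extremely likely", "not at all likely", "very likely"]

counts = Counter(answers)

# Top-two-box: group "very likely" with "extremely likely" as favourable.
favourable = counts["very likely"] + counts["extremely likely"]
print(f"Favourable responses: {favourable} of {len(answers)}")  # 5 of 10
```

The same `Counter` can also report the full distribution, which is worth inspecting before collapsing it: a result split between the two extremes tells a very different story than one clustered around “neutral.”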
5) Test everything. Most of the popular survey tools allow you to add videos, images and text callouts into the questionnaire. Don’t stop with just researching whether customers would buy your product or service. Taking advantage of these multimedia features, you can test your marketing taglines, company names, logos, advertisements, product packaging and virtually everything that goes into a purchase decision.
Advancements in technology allow startups to level the playing field with MNCs. Large companies have always used focus groups to test new product ideas and marketing, but now anyone with internet access can, and should, do the same thing. At some point there will be conclusive evidence about whether a certain marketing pitch was effective, but why wait until after the budget is spent when you can find out beforehand? The same goes for landing page testing. Rather than force your designers to narrow the page possibilities down to just two for an A/B test, a survey can help you figure out which two candidates deserve a place in the A/B test.
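When you do compare two taglines or page designs, it helps to check that the gap between them is bigger than random noise. One common approach is a two-proportion z-test; here is a minimal, standard-library-only sketch with hypothetical numbers (the respondent counts are invented for illustration).

```python
import math

def two_proportion_z_test(success_a, n_a, success_b, n_b):
    """Two-sided z-test for the difference between two proportions.
    Returns the z statistic and an approximate p-value."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF (via math.erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical results: 120 of 400 respondents preferred tagline A,
# 90 of 400 preferred tagline B.
z, p = two_proportion_z_test(120, 400, 90, 400)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With these made-up numbers the p-value comes out below 0.05, so the preference for tagline A would conventionally be treated as statistically significant rather than chance.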
6) Where to get responses. When doing something as important as validating your product and company ideas, you want to be very certain that your research population is representative of your eventual customer base. If your target market is F&B owners, asking your friends and family for opinions is of no value if they are not F&B owners. Similarly, if your goal is the general population, make sure you aren’t just getting opinions from one office or neighborhood.
There are a number of services that can help you gather respondents, but the lowest cost will be from online tools like SurveyMonkey Audience or Google Consumer Surveys, which charge $1–$5 per response. When all you are looking to do is validate an idea or concept, you don’t even need that many respondents. As long as your data reaches statistical significance, you can make important decisions with a relatively small sample.
There are countless books and classes on how to properly do market research. I have barely even grazed the surface of this knowledge with the previous six steps, and I recommend that you try to absorb as many research best practices as you can while validating your startup ideas. Your research respondents are not going to give you endless amounts of opportunities to ask them questions, so it is ideal to get your answers and data without having to repeat the research exercise. Even if you just follow these six steps, you will hopefully have removed enough bias and assumptions to give you qualified answers.
While a lot can go wrong between the time you validate that there is a market need for your solution and the successful creation of a company, nothing dooms a company more surely than developing a product no one will ever use. Necessity might be the mother of invention, but misplaced assumptions need to be banished in order to nurture your innovations to maturity.
Thank you, Eli, for sharing these insights with us; we are sure they will be valuable to the start-up community!