Tips to Create and Test a Value Hypothesis: A Step-by-Step Guide

Rapidr

Developing a robust value hypothesis is crucial as you bring a new product to market, guiding your startup toward answering a genuine market need. Constructing a verifiable value hypothesis anchors your product's development process in customer feedback and data-driven insight rather than assumptions.

This framework enables you to clarify the potential value your product offers and provides a foundation for testing and refining your approach, significantly reducing the risk of misalignment with your target market. To set the stage for success, employ logical structures and objective measures, such as creating a minimum viable product, to effectively validate your product's value proposition.

What Is a Verifiable Value Hypothesis?

A verifiable value hypothesis articulates your belief about how your product will deliver value to customers. It is a testable prediction aimed at demonstrating the expected outcomes for your target market.

To ensure that your value hypothesis is verifiable, it should adhere to the following conditions:

  • Specific: Clearly defines the value proposition and the customer segment.
  • Measurable: Includes metrics by which you can assess success or failure.
  • Achievable: Realistic based on your resources and market conditions.
  • Relevant: Directly addresses a significant customer need or desire.
  • Time-Bound: Has a defined period for testing and validation.
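
To make these criteria concrete, here is a minimal sketch in Python of a hypothesis record that forces each element to be written down. The field names are illustrative assumptions, not a standard schema:

```python
from dataclasses import dataclass

@dataclass
class ValueHypothesis:
    """One record per hypothesis, mirroring the criteria above."""
    segment: str           # Specific: who the hypothesis applies to
    value_claim: str       # Specific: the value proposition being claimed
    metric: str            # Measurable: how success will be assessed
    target: float          # Measurable/Achievable: the threshold to hit
    customer_need: str     # Relevant: the need or desire addressed
    test_window_days: int  # Time-Bound: how long the test runs

h = ValueHypothesis(
    segment="B2B startups with 10-50 employees",
    value_claim="Shared task lists cut coordination overhead",
    metric="share of trial teams adopting shared lists",
    target=0.40,  # e.g., 40% of trial teams adopt the feature
    customer_need="less time lost to status meetings",
    test_window_days=30,
)
```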

When you create a value hypothesis, you're essentially forming the backbone of your business model. It goes beyond a mere assumption and relies on customer feedback data to inform its development. You also safeguard it with objective measures, such as a minimum viable product, to test the hypothesis in real life.

By articulating and examining a verifiable value hypothesis, you understand your product's potential impact and reduce the risk associated with new product development. It's about making informed decisions that increase your confidence in the product's potential success before committing significant resources.

Value Hypotheses vs. Growth Hypotheses

Value hypotheses and growth hypotheses are two distinct concepts often used in business, especially in the context of startups and product development.

Value Hypotheses: A value hypothesis is centered around the product itself. It focuses on whether the product truly delivers customer value. Key questions include whether the product meets a real need, how it compares to alternatives, and whether customers are willing to pay for it. Validating a value hypothesis is crucial before a business scales its operations.

Growth Hypotheses: A growth hypothesis, on the other hand, deals with the scalability and marketing aspects of the business. It involves strategies and channels used to acquire new customers. The focus is on how to grow the customer base, the cost-effectiveness of growth strategies, and the sustainability of growth. Validating a growth hypothesis is typically the next step after confirming that the product has value to the customers.

In practice, both hypotheses are crucial for the success of a business. A value hypothesis ensures the product is desirable and needed, while a growth hypothesis ensures that the product can reach a larger market effectively.

Tips to Create and Test a Verifiable Value Hypothesis

Creating a value hypothesis is crucial for understanding what drives customer interest in your product. It's an educated guess that requires rigor to define and clarity to test. When developing a value hypothesis, you're attempting to validate assumptions about your product's value to customers. Here are concise tips to help you with this process:

1. Understanding Your Market and Customers

Before formulating a hypothesis, you need a deep understanding of your market and potential customers. You're looking to uncover the pain points and needs that your product aims to address.

Begin with thorough market research and collect customer feedback to ensure your idea is built upon a solid foundation of real-world insights. This understanding is pivotal as it sets the tone for a relevant and testable hypothesis.

  • Define Your Value Proposition Clearly: Articulate your product's value to the user. What problem does it solve? How does it improve the user's life or work?
  • Identify Your Target Audience: Determine who your ideal customers are. Understand their needs, pain points, and how they currently address the problem your product intends to solve.

2. Defining Clear Assumptions

The next step is to outline clear assumptions based on your idea that you believe will bring value to your customers. Each assumption should be an assertion that directly relates to how your customers will find your product valuable.

For example, if your product is a task management app, you might assume that the ability to share task lists with team members is a pain point for your potential customers. Remember, assumptions are not facts—they are educated guesses that need verification.

3. Identify Key Metrics for Your Hypothesis Test

Once you've defined your assumptions, delineate the framework for testing your value hypothesis: design experiments that validate or invalidate each assumption with measurable outcomes, such as user engagement metrics, conversion rates, or customer satisfaction scores.

Determine what success looks like and define objective metrics that will prove your product's value. Choosing the right metrics is essential for an accurate test. For instance, you might measure the increase in customer retention or the decrease in time spent on task organization with your app. Construct your test so that the results are unequivocal and actionable.
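
As a rough illustration of turning raw test counts into pass/fail metrics, here is a short Python sketch; the event counts, metric definitions, and thresholds are invented for the example:

```python
# Hypothetical counts from a test cohort.
signups = 500
activated = 320        # users who completed onboarding
retained_30d = 190     # users still active after 30 days

conversion_rate = activated / signups        # 0.64
retention_rate = retained_30d / activated    # ~0.59

# Define success thresholds *before* running the test so the
# results are unequivocal and actionable.
SUCCESS = {"conversion": 0.50, "retention_30d": 0.45}
print("conversion ok:", conversion_rate >= SUCCESS["conversion"])
print("retention ok:", retention_rate >= SUCCESS["retention_30d"])
```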

4. Construct a Testable Proposition

Formulate your hypothesis so that it can be tested empirically. Use qualitative research methods such as interviews, surveys, and observation to gather data about your potential users, and base your value hypothesis on the insights from that research. Then plan experiments that can validate or invalidate it, such as A/B tests, user testing sessions, or pilot programs.

A good example is to posit that "Introducing feature X will increase user onboarding by Y%." Avoid complexity by testing one variable at a time; this helps you identify which changes are actually making a difference.
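
An A/B test of such a proposition is often evaluated with a two-proportion z-test. The sketch below assumes SciPy is available and uses invented counts; it is one common way to test the onboarding claim, not the only one:

```python
import math
from scipy.stats import norm

# Hypothetical A/B results for "introducing feature X will
# increase user onboarding by Y%".
n_a, onboarded_a = 1000, 200   # control (no feature X)
n_b, onboarded_b = 1000, 245   # variant (with feature X)

p_a, p_b = onboarded_a / n_a, onboarded_b / n_b
p_pool = (onboarded_a + onboarded_b) / (n_a + n_b)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se

# One-sided test: we only care whether the variant onboards more users.
p_value = norm.sf(z)
print(f"lift: {p_b - p_a:.1%}, z = {z:.2f}, p = {p_value:.4f}")
```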

5. Applying Evidence to Innovation

When your data indicates a promising avenue for product development, validate your value hypothesis through experimentation and align your value proposition with the evidence at hand.

Start by crafting a minimum viable product (MVP): a simplified version of your product that lets you test the core value proposition with real users without investing in full-scale production. This approach mitigates risk by not investing heavily in unproven ideas. Use analytics tools to collect data on how users interact with your MVP, and look for patterns that either support or contradict your value hypothesis.

If the data suggests that your value hypothesis is wrong, be prepared to revise your hypothesis or pivot your product strategy accordingly.

6. Gather Customer Feedback

Integrating customer feedback into your product development process can create a more tailored value proposition. This step is crucial in refining your product to meet user needs and validate your hypotheses.

Use customer feedback tools to gather this data systematically. Here are some ways to collect feedback effectively:

  • Feedback portals
  • User testing sessions
  • In-app feedback
  • Website widgets
  • Direct interviews
  • Focus groups
  • Feedback forums

Create a centralized place for product feedback to keep track of the different types of customer feedback and improve your product while listening to your customers. Rapidr helps companies be more customer-centric by consolidating feedback across different apps, prioritizing requests, having a discourse with customers, and closing the feedback loop.

7. Analyze and Iterate Quickly

Review the data and analyze customer feedback to see if it supports your hypothesis. If your hypothesis is not supported, iterate on your assumptions, and test again. Keep a detailed record of your hypotheses, experiments, and findings. This documentation will help you understand the evolution of your product and guide future decision-making.

Use the feedback and data from your tests to make quick iterations of your product and drive product development . This allows you to refine your value proposition and improve the fit with your target audience. Engage with your users throughout the process. Real-world feedback is invaluable and can provide insights that data alone cannot.

  • Identify Patterns: What commonalities are present in the feedback?
  • Implement Changes: Prioritize and make adjustments based on customer insights.

8. Align with Business Goals and Stay Customer-Focused

Ensure that your value hypothesis aligns with the broader goals of your business. The value provided should ultimately contribute to the success of the company. Remember that the ultimate goal of your value hypothesis is to deliver something that customers find valuable. Maintain a strong focus on customer needs and satisfaction throughout the process.

9. Communicate with Stakeholders and Keep Them Updated

Keep all stakeholders informed about your findings and the implications for the product. Clear communication helps ensure everyone is aligned and understands the rationale behind product decisions. Close the feedback loop with the help of a product changelog, through which you can announce new changes and engage with customers.

Understanding and validating a value hypothesis is essential for any business, particularly startups. It involves deeply exploring whether a product or service meets customer needs and offers real value. This process ensures that resources are invested in desirable and useful products, and it's a critical step before considering scalability and growth.

By focusing on the value hypothesis, businesses can better align their offerings with market demand, leading to more sustainable success. Placing customer feedback at the center of the process of testing a value hypothesis helps you develop a product that meets your customers' needs and stands out in the market.

A Beginner’s Guide to Hypothesis Testing in Business

30 Mar 2021

Becoming a more data-driven decision-maker can bring several benefits to your organization, enabling you to identify new opportunities to pursue and threats to abate. Rather than allowing subjective thinking to guide your business strategy, backing your decisions with data can empower your company to become more innovative and, ultimately, profitable.

If you’re new to data-driven decision-making, you might be wondering how data translates into business strategy. The answer lies in generating a hypothesis and verifying or rejecting it based on what various forms of data tell you.

Below is a look at hypothesis testing and the role it plays in helping businesses become more data-driven.

What Is Hypothesis Testing?

To understand what hypothesis testing is, it’s important first to understand what a hypothesis is.

A hypothesis or hypothesis statement seeks to explain why something has happened, or what might happen, under certain conditions. It can also be used to understand how different variables relate to each other. Hypotheses are often written as if-then statements; for example, “If this happens, then this will happen.”

Hypothesis testing, then, is a statistical means of testing an assumption stated in a hypothesis. While the specific methodology leveraged depends on the nature of the hypothesis and data available, hypothesis testing typically uses sample data to extrapolate insights about a larger population.

Hypothesis Testing in Business

When it comes to data-driven decision-making, there’s a certain amount of risk that can mislead a professional. This could be due to flawed thinking or observations, incomplete or inaccurate data, or the presence of unknown variables. The danger in this is that, if major strategic decisions are made based on flawed insights, it can lead to wasted resources, missed opportunities, and catastrophic outcomes.

The real value of hypothesis testing in business is that it allows professionals to test their theories and assumptions before putting them into action. This essentially allows an organization to verify its analysis is correct before committing resources to implement a broader strategy.

As one example, consider a company that wishes to launch a new marketing campaign to revitalize sales during a slow period. Doing so could be an incredibly expensive endeavor, depending on the campaign’s size and complexity. The company, therefore, may wish to test the campaign on a smaller scale to understand how it will perform.

In this example, the hypothesis that’s being tested would fall along the lines of: “If the company launches a new marketing campaign, then it will translate into an increase in sales.” It may even be possible to quantify how much of a lift in sales the company expects to see from the effort. Pending the results of the pilot campaign, the business would then know whether it makes sense to roll it out more broadly.

Key Considerations for Hypothesis Testing

1. Alternative Hypothesis and Null Hypothesis

In hypothesis testing, the hypothesis that’s being tested is known as the alternative hypothesis. Often, it’s expressed as a correlation or statistical relationship between variables. The null hypothesis, on the other hand, is a statement that’s meant to show there’s no statistical relationship between the variables being tested. It’s typically the exact opposite of whatever is stated in the alternative hypothesis.

For example, consider a company’s leadership team that historically and reliably sees $12 million in monthly revenue. They want to understand if reducing the price of their services will attract more customers and, in turn, increase revenue.

In this case, the alternative hypothesis may take the form of a statement such as: “If we reduce the price of our flagship service by five percent, then we’ll see an increase in sales and realize revenues greater than $12 million in the next month.”

The null hypothesis, on the other hand, would indicate that revenues wouldn’t increase from the base of $12 million, or might even decrease.
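
Assuming monthly revenue figures are available after the price change, a one-sided, one-sample t-test is one way to evaluate this pair of hypotheses. The numbers below are invented, and the `alternative` keyword requires SciPy 1.6 or later:

```python
from scipy import stats

# Hypothetical monthly revenues (in $M) after the 5% price cut;
# a real test would want more than a handful of observations.
post_cut_revenue = [12.4, 12.9, 12.2, 12.7, 12.5, 12.8]

# H0: mean revenue <= 12.0   Ha: mean revenue > 12.0 (one-sided)
t_stat, p_value = stats.ttest_1samp(post_cut_revenue, popmean=12.0,
                                    alternative="greater")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value favors the alternative hypothesis of higher revenue.
```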

2. Significance Level and P-Value

Statistically speaking, if you were to run the same scenario 100 times, you’d likely receive somewhat different results each time. If you were to plot these results in a distribution plot, you’d see the most likely outcome is at the tallest point in the graph, with less likely outcomes falling to the right and left of that point.

With this in mind, imagine you’ve completed your hypothesis test and have your results, which indicate there may be a correlation between the variables you were testing. To understand your results’ significance, you’ll need to identify a p-value for the test, which indicates how confident you can be in the test results.

In statistics, the p-value depicts the probability that, assuming the null hypothesis is correct, you might still observe results that are at least as extreme as the results of your hypothesis test. The smaller the p-value, the more likely the alternative hypothesis is correct, and the greater the significance of your results.
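
One way to build intuition for this definition is simulation: generate many outcomes under a "no effect" assumption and count how often chance alone matches or beats the observed result. All figures below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

observed_lift = 0.045                         # lift measured in our test
null_lifts = rng.normal(0.0, 0.02, 100_000)   # lifts if there is no real effect

# Empirical one-sided p-value: share of "no effect" runs at least
# as extreme as what we observed.
p_value = np.mean(null_lifts >= observed_lift)
print(f"empirical p-value: {p_value:.4f}")
```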

3. One-Sided vs. Two-Sided Testing

When it’s time to test your hypothesis, it’s important to leverage the correct testing method. The two most common hypothesis testing methods are one-sided and two-sided tests, or one-tailed and two-tailed tests, respectively.

Typically, you’d leverage a one-sided test when you have a strong conviction about the direction of change you expect to see due to your hypothesis test. You’d leverage a two-sided test when you’re less confident in the direction of change.
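
The practical difference shows up directly in the p-value. A sketch, assuming a normal reference distribution and an illustrative test statistic:

```python
from scipy.stats import norm

z = 2.42  # test statistic from some hypothesis test (illustrative)

one_sided = norm.sf(z)           # P(Z >= z): direction of change assumed
two_sided = 2 * norm.sf(abs(z))  # P(|Z| >= |z|): direction not assumed

print(f"one-tailed p = {one_sided:.4f}, two-tailed p = {two_sided:.4f}")
# The two-tailed p-value is twice the one-tailed value, so the same
# result can be significant one-tailed but not two-tailed.
```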

4. Sampling

To perform hypothesis testing in the first place, you need to collect a sample of data to be analyzed. Depending on the question you’re seeking to answer or investigate, you might collect samples through surveys, observational studies, or experiments.

A survey involves asking a series of questions to a random population sample and recording self-reported responses.

Observational studies involve a researcher observing a sample population and collecting data as it occurs naturally, without intervention.

Finally, an experiment involves dividing a sample into multiple groups, one of which acts as the control group. For each non-control group, the variable being studied is manipulated to determine how the data collected differs from that of the control group.
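
A sketch of the random-assignment step for such an experiment, assuming NumPy and an invented participant count:

```python
import numpy as np

rng = np.random.default_rng(42)

# Randomly split participants into a control group and two
# treatment groups, as in the design described above.
participants = np.arange(300)
rng.shuffle(participants)
control, treat_a, treat_b = np.array_split(participants, 3)

print(len(control), len(treat_a), len(treat_b))  # 100 100 100
```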

Learn How to Perform Hypothesis Testing

Hypothesis testing is a complex process involving different moving pieces that can allow an organization to effectively leverage its data and inform strategic decisions.

If you’re interested in better understanding hypothesis testing and the role it can play within your organization, one option is to complete a course that focuses on the process. Doing so can lay the statistical and analytical foundation you need to succeed.

Do you want to learn more about hypothesis testing? Explore Business Analytics, one of our online business essentials courses, and download our Beginner’s Guide to Data & Analytics.

Value Hypothesis 101: A Product Manager's Guide

Humans make assumptions every day—it’s our brain’s way of making sense of the world around us, but assumptions are only valuable if they’re verifiable. That’s where a value hypothesis comes in as your starting point.

A good hypothesis goes a step beyond an assumption. It’s a verifiable, validated guess based on the value your product brings to your real-life customers. When you verify your hypothesis, you confirm that the product has real-world value, and thus have a higher chance of product success.

What Is a Verifiable Value Hypothesis?

A value hypothesis is an educated guess about the value proposition of your product. When you verify your hypothesis, you’re using evidence to show that your assumption is correct. A hypothesis is verifiable if it can be supported or falsified through data, experiments, observation, or tests.

The most significant benefit of verifying a hypothesis is that it helps you avoid product failure and build your product to your customers’ (and potential customers’) needs.

Verifying your assumptions is all about collecting data. Without data obtained through experiments, observations, or tests, your hypothesis is unverifiable, and you can’t be sure there will be a market need for your product. 

A Verifiable Value Hypothesis Minimizes Risk and Saves Money

When you verify your hypothesis, you’re less likely to release a product that doesn’t meet customer expectations—a waste of your company’s resources. Harvard Business School explains that verifying a business hypothesis “...allows an organization to verify its analysis is correct before committing resources to implement a broader strategy.” 

If you verify your hypothesis upfront, you’ll lower risk and have time to work out product issues. 

UserVoice Validation makes product validation accessible to everyone. Consider using its research feature to speed up your hypothesis verification process. 

Value Hypotheses vs. Growth Hypotheses 

Your value hypothesis focuses on the value of your product to customers. This type of hypothesis can apply to a product or company and is a building block of product-market fit . 

A growth hypothesis is a guess at how your business idea may develop in the long term based on how potential customers may find your product. It’s meant for estimating business model growth rather than individual products. 

Because your value hypothesis is really the foundation for your growth hypothesis, you should focus on value hypothesis tests first and complete growth hypothesis tests to estimate business growth as a whole once you have a viable product.

4 Tips to Create and Test a Verifiable Value Hypothesis

A verifiable hypothesis needs to be based on a logical structure, customer feedback data, and objective safeguards like creating a minimum viable product. Validating your value hypothesis significantly reduces risk. You can prevent wasting money, time, and resources by verifying your hypothesis in early-stage development.

A good value hypothesis utilizes a framework (like the template below), data, and checks/balances to avoid bias. 

1. Use a Template to Structure Your Value Hypothesis 

By using a template structure, you can create an educated guess that includes the most important elements of a hypothesis—the who, what, where, when, and why. If you don’t structure your hypothesis correctly, you may only end up with a flimsy or leap-of-faith assumption that you can’t verify. 

A true hypothesis uses a few guesses about your product and organizes them so that you can verify or falsify your assumptions. Using a template to structure your hypothesis can ensure that you’re not missing the specifics.

You can’t just throw a hypothesis together and think it will answer the question of whether your product is valuable or not. If you do, you could end up with faulty data informed by bias , a skewed significance level from polling the wrong people, or only a vague idea of what your customer would actually pay for your product. 

A template will help keep your hypothesis on track by standardizing the structure of the hypothesis so that each new hypothesis always includes the specifics of your client personas, the cost of your product, and client or customer pain points. 

A value hypothesis template might look like: 

[Client] will spend [cost] to purchase and use our [title of product/service] to solve their [specific problem] OR help them overcome [specific obstacle]. 

An example of your hypothesis might look like: 

B2B startups will spend $500/mo to purchase our resource planning software to solve resource over-allocation and employee burnout.

By organizing your ideas and the important elements (who, what, where, when, and why), you can come up with a hypothesis that actually answers the question of whether your product is useful and valuable to your ideal customer. 
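
If your team writes many hypotheses, a small helper can enforce the template so no element is skipped. A sketch in Python; the field names simply mirror the template above:

```python
TEMPLATE = ("{client} will spend {cost} to purchase and use our "
            "{product} to solve their {problem}.")

hypothesis = TEMPLATE.format(
    client="B2B startups",
    cost="$500/mo",
    product="resource planning software",
    problem="resource over-allocation and employee burnout",
)
print(hypothesis)
```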

2. Turn Customer Feedback into Data to Support Your Hypothesis  

Once you have your hypothesis, it’s time to figure out whether it’s true—or, more accurately, prove that it’s valid. Since a hypothesis is never considered “100% proven,” it’s referred to as either valid or invalid based on the information you discover in your experiments or tests. Additionally, your results could lead to an alternative hypothesis, which is helpful in refining your core idea.

To support value hypothesis testing, you need data. To get it, you’ll want to collect customer feedback. A customer feedback management tool can also make it easier for your team to access the feedback and create strategies to address customer concerns.

If you find that potential clients are not expressing pain points that could be solved with your product or you’re not seeing an interest in the features you hope to add, you can adjust your hypothesis and absorb a lower risk. Because you didn’t invest a lot of time and money into creating the product yet, you should have more resources to put toward the product once you work out the kinks. 

On the other hand, if you find that customers are requesting features your product offers or pain points your product could solve, then you can move forward with product development, confident that your future customers will value (and spend money on) the product you’re creating. 

A customer feedback management tool like UserVoice can empower you to challenge assumptions from your colleagues (often based on anecdotal information) that find their way into team decision-making. Having data to reevaluate an assumption helps with prioritization, and it confirms that you’re focusing on the right things as an organization.

3. Validate Your Product 

Since you have a clear idea of who your ideal customer is at this point and have verified their need for your product, it’s time to validate your product and decide if it’s better than your competitors’. 

At this point, simply asking your customers if they would buy your product (or spend more on your product) instead of a competitor’s isn’t enough confirmation that you should move forward, and customers may be biased or reluctant to provide critical feedback. 

Instead, create a minimum viable product (MVP). An MVP is a working, bare-bones version of the product that you can test out without risking your whole budget. Hypothesis testing with an MVP simulates the product experience for customers and, based on their actions and usage, validates that the full product will generate revenue and be successful.  
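
A minimal sketch of judging an MVP test against thresholds chosen before the test starts; the event names and cutoffs are assumptions for illustration, not standard values:

```python
# Hypothetical usage events collected from the MVP.
mvp_events = {"signed_up": 180, "completed_core_action": 95,
              "returned_within_7d": 61}

activation = mvp_events["completed_core_action"] / mvp_events["signed_up"]
early_retention = (mvp_events["returned_within_7d"]
                   / mvp_events["completed_core_action"])

# Pre-registered success criteria limit the room for bias.
print("hypothesis supported:", activation >= 0.4 and early_retention >= 0.5)
```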

If you take the steps to first verify and then validate your hypothesis using data, your product is more likely to do well. Your focus will be on the aspect that matters most—whether your customer actually wants and would invest money in purchasing the product.

4. Use Safeguards to Remain Objective 

One of the pitfalls of believing in your product and attempting to validate it is that you’re subject to confirmation bias . Because you want your product to succeed, you may pay more attention to the answers in the collected data that affirm the value of your product and gloss over the information that may lead you to conclude that your hypothesis is actually false. Confirmation bias could easily cloud your vision or skew your metrics without you even realizing it. 

Since it’s hard to know when you’re engaging in confirmation bias, it’s good to have safeguards in place to keep you in check and aligned with the purpose of objectively evaluating your value hypothesis. 

Safeguards include sharing your findings with third-party experts or simply putting yourself in the customer’s shoes.

Third-party experts are the business version of seeking a peer review. External parties don’t stand to benefit from the outcome of your verification and validation process, so your work is verified and validated objectively. You gain the benefit of knowing whether your hypothesis is valid in the eyes of the people who aren’t stakeholders without the risk of confirmation bias. 

In addition to seeking out objective minds, look into potential counter-arguments , such as customer objections (explicit or imagined). What might your customer think about investing the time to learn how to use your product? Will they think the value is commensurate with the monetary cost of the product? 

When running an experiment to validate your hypothesis, it’s important not to elevate your beliefs over the objective data you collect. While it can be exciting to push for the validity of your idea, doing so can lead to false assumptions and the acceptance of weak evidence.

Validation Is the Key to Product Success

With your new value hypothesis in hand, you can confidently move forward, knowing that there’s a true need, desire, and market for your product.

Because you’ve verified and validated your guesses, there’s less of a chance that you’re wrong about the value of your product, and there are fewer financial and resource risks for your company. With this strong foundation and the new information you’ve uncovered about your customers, you can add even more value to your product or use it to make more products that fit the market and user needs. 

If you think customer feedback management software would be useful in your hypothesis validation process, consider opting into our free trial to see how UserVoice can help.

Heather Tipton

Business LibreTexts

7.1: Introduction to Hypothesis Testing

Now we are down to the bread and butter work of the statistician: developing and testing hypotheses. It is important to put this material in a broader context so that the method by which a hypothesis is formed is understood completely. Using textbook examples often clouds the real source of statistical hypotheses.

Statistical testing is part of a much larger process known as the scientific method. This method was developed more than two centuries ago as the accepted way that new knowledge could be created. Until then, and unfortunately even today among some, "knowledge" could be created simply by some authority saying something was so, ipse dixit. Superstition and conspiracy theories were (are?) accepted uncritically.

Figure 7.1.1: You can use a hypothesis test to decide if a dog breeder’s claim that every Dalmatian has 35 spots is statistically sound. (Credit: Robert Neff)

The scientific method, briefly, states that only by following a careful and specific process can some assertion be included in the accepted body of knowledge. This process begins with a set of assumptions upon which a theory, sometimes called a model, is built. This theory, if it has any validity, will lead to predictions; what we call hypotheses.

As an example, in microeconomics the theory of consumer choice begins with certain assumptions concerning human behavior. From these assumptions followed a theory of how consumers make choices using indifference curves and the budget line. This theory gave rise to a very important prediction; namely, that there was an inverse relationship between price and quantity demanded. This relationship was known as the demand curve. The negative slope of the demand curve is really just a prediction, or a hypothesis, that can be tested with statistical tools.

If hundreds and hundreds of statistical tests had not confirmed this relationship, the so-called Law of Demand would have been discarded years ago. This is the role of statistics: to test the hypotheses of various theories to determine if they should be admitted into the accepted body of knowledge and shape how we understand our world. Once admitted, however, they may later be discarded if new theories come along that make better predictions.

Not long ago two scientists claimed that they could get more energy out of a process than was put in. This caused a tremendous stir for obvious reasons. They were on the cover of Time and were offered extravagant sums to bring their research work to private industry and any number of universities. It was not long until their work was subjected to the rigorous tests of the scientific method and found to be a failure. No other lab could replicate their findings. Consequently they have sunk into obscurity and their theory discarded. It may surface again when someone can pass the tests of the hypotheses required by the scientific method, but until then it is just a curiosity. Many pure frauds have been attempted over time, but most have been found out by applying the process of the scientific method.

This discussion is meant to show just where in this process statistics falls. Statistics and statisticians are not necessarily in the business of developing theories, but in the business of testing others' theories. Hypotheses come from these theories based upon an explicit set of assumptions and sound logic. The hypothesis comes first, before any data are gathered. Data do not create hypotheses; they are used to test them. If we bear this in mind as we study this section the process of forming and testing hypotheses will make more sense.

One job of a statistician is to make statistical inferences about populations based on samples taken from the population. Confidence intervals are one way to estimate a population parameter. Another way to make a statistical inference is to make a decision about the value of a specific parameter. For instance, a car dealer advertises that its new small truck gets 35 miles per gallon, on average. A tutoring service claims that its method of tutoring helps 90% of its students get an A or a B. A company says that women managers in their company earn an average of $60,000 per year.

A statistician will make a decision about these claims. This process is called " hypothesis testing ". A hypothesis test involves collecting data from a sample and evaluating the data. Then, the statistician makes a decision as to whether or not there is sufficient evidence, based upon analyses of the data, to reject the null hypothesis.

In this chapter, you will conduct hypothesis tests on single means and single proportions. You will also learn about the errors associated with these tests.

creating and testing a demand/value hypothesis

How to test your idea: start with the most critical hypotheses

To validate business ideas you need to perform many small experiments. At the centre of any one of these experiments should be a deep understanding of the most critical hypotheses and why you are testing them.

In the world of Lean Startup, the Build, Measure, Learn cycle is a means to an end to test the attractiveness of business ideas. Unfortunately, some innovators and entrepreneurs take the “Build” step too literally and immediately start building prototypes. However, at the centre of this cycle there is actually a step zero: shaping your idea and defining the most critical assumptions and hypotheses underlying it (Note: I’ll be interchanging between assumptions and hypotheses throughout the rest of the post).

Step 0 - think (& hypothesize)

Shape your idea (product, tech, market opportunity, etc.) into an attractive customer value proposition and prototype a potentially profitable and scalable business model. Use the Value Proposition & Business Model Canvas to do this. Then ask: What are the critical assumptions and hypotheses that need to be true for this to work? Define assumptions as to desirability (market risk: will customers want it?), feasibility (tech & implementation risk: can I build/execute it?), and viability (financial risk: can I earn more money from it than it will cost me to build?). To test these assumptions/hypotheses you will perform many, many experiments; one lightweight way to map them out is sketched below. With your hypotheses mapped out, you can start to move through the steps of the Build, Measure, Learn cycle.
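
A sketch of such a hypothesis map: the three risk categories come from the text, while the criticality scoring is an assumption for illustration:

```python
hypotheses = [
    {"risk": "desirability", "claim": "Teams want shared dashboards",       "criticality": 5},
    {"risk": "feasibility",  "claim": "We can sync data in near real time", "criticality": 3},
    {"risk": "viability",    "claim": "Customers will pay $50/seat/month",  "criticality": 4},
]

# Test the riskiest assumptions first.
for h in sorted(hypotheses, key=lambda h: -h["criticality"]):
    print(f'[{h["risk"]}] {h["claim"]} (criticality {h["criticality"]})')
```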

Step 1 - build

In this step you design and build the experiments that are best suited to test your assumptions. Ask: Which hypothesis will we test first, and how? Which tests will yield the most valuable data and evidence?

Step 2 - measure

In this step you actually perform the experiments. That might be through interviews with a series of customers and stakeholders, or by launching a landing page to see if people click on, sign up for, or even buy your (not yet implemented) value proposition.

Step 3 - learn

In this step you analyze the data and gain insights. You systematically connect the evidence and data from experiments back to the initial hypotheses, how you tested them, and what you learned. This is where you identify whether your initial hypotheses were right, wrong, or still unclear. You might learn that you have to reshape your idea, pivot, create new hypotheses, or continue testing; or you might prove with evidence that your idea has legs and you're on the right track.

At the centre of all testing should always be a deep understanding of the critical hypotheses underlying how you intend to create value for customers (Value Proposition Canvas) and how you hope to create value for your company (Business Model Canvas). I've seen too many innovators and entrepreneurs get lost in building experiments while losing sight of their initial hypotheses and the ultimate prize. At the end there's only one thing that counts: Are you making progress in turning your initial idea into a profitable and scalable business model that creates value for customers?

About the speakers

Dr. Alexander (Alex) Osterwalder is one of the world’s most influential innovation experts, a leading author, entrepreneur and in-demand speaker whose work has changed the way established companies do business and how new ventures get started.

Statistical Analysis: Developing and Testing Hypotheses

Statistical hypothesis testing is sometimes known as confirmatory data analysis. It is a way of drawing inferences from data. In the process, you develop a hypothesis or theory about what you might see in your research. You then test that hypothesis against the data that you collect.

Hypothesis testing is generally used when you want to compare two groups, or compare a group against an idealised position.

Before You Start: Developing A Research Hypothesis

Before you can do any kind of research in social science fields such as management, you need a research question or hypothesis. Research is generally designed to either answer a research question or consider a research hypothesis . These two are closely linked, and generally one or the other is used, rather than both.

A research question is the question that your research sets out to answer . For example:

Do men and women like ice cream equally?

Do men and women like the same flavours of ice cream?

What are the main problems in the market for ice cream?

How can the market for ice cream be segmented and targeted?

Research hypotheses are statements of what you believe you will find in your research.

These are then tested statistically during the research to see if your belief is correct. Examples include:

Men and women like ice cream to different extents.

Men and women like different flavours of ice cream.

Men are more likely than women to like mint ice cream.

Women are more likely than men to like chocolate ice cream.

Both men and women prefer strawberry to vanilla ice cream.

Relationships vs Differences

Research hypotheses can be expressed in terms of differences between groups, or relationships between variables. However, these are two sides of the same coin: almost any hypothesis could be set out in either way.

For example:

There is a relationship between gender and liking ice cream OR

Men are more likely to like ice cream than women.

Testing Research Hypotheses

The purpose of statistical hypothesis testing is to use a sample to draw inferences about a population.

Testing research hypotheses requires a number of steps:

Step 1. Define your research hypothesis

The first step in any hypothesis testing is to identify your hypothesis, which you will then go on to test. How you define your hypothesis may affect the type of statistical testing that you do, so it is important to be clear about it. In particular, consider whether you are going to hypothesise simply that there is a relationship or speculate about the direction of the relationship.

Using the examples above:

“There is a relationship between gender and liking ice cream” is a non-directional hypothesis: you have simply specified that there is a relationship, not whether men or women like ice cream more.

However, “men are more likely to like ice cream than women” is directional: you have specified which gender is more likely to like ice cream.

Generally, it is better not to specify direction unless you are moderately sure about it.

Step 2. Define the null hypothesis

The null hypothesis is basically a statement of what you are hoping to disprove: the opposite of your ‘guess’ about the relationship. For example, in the hypotheses above, the null hypothesis would be:

Men and women like ice cream equally, or

There is no relationship between gender and ice cream.

This also defines your ‘alternative hypothesis’, which is your ‘test hypothesis’ (men like ice cream more than women). Your null hypothesis is generally that there is no difference, because this is the simplest position.

The purpose of hypothesis testing is to disprove the null hypothesis. If you cannot disprove the null hypothesis, you have to assume it is correct.

Step 3. Develop a summary measure that describes your variable of interest for each group you wish to compare

Our page on Simple Statistical Analysis describes several summary measures, including two of the most common, mean and median.

The next step in your hypothesis testing is to develop a summary measure for each of your groups. For example, to test the gender differences in liking for ice cream, you might ask people how much they liked ice cream on a scale of 1 to 5. Alternatively, you might have data about the number of times that ice creams are consumed each week in the summer months.

You then need to produce a summary measure for each group, usually mean and standard deviation. These may be similar for each group, or quite different.
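
With tabular data this step is a one-liner in pandas; a sketch using invented survey responses (liking for ice cream on a 1 to 5 scale):

```python
import pandas as pd

df = pd.DataFrame({
    "gender": ["m", "m", "m", "f", "f", "f", "f", "m"],
    "liking": [4, 3, 5, 4, 5, 5, 4, 2],
})

# Mean, standard deviation, and count per group.
summary = df.groupby("gender")["liking"].agg(["mean", "std", "count"])
print(summary)
```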

Step 4. Choose a reference distribution and calculate a test statistic

To decide whether there is a genuine difference between the two groups, you have to use a reference distribution against which to measure the values from the two groups.

The most common source of reference distributions is a standard distribution such as the normal distribution or t-distribution. These two are the same, except that the standard deviation of the t-distribution is estimated from the sample, and that of the normal distribution is known. There is more about this in our page on Statistical Distributions.

You then compare the summary data from the two groups by using them to calculate a test statistic. There is a standard formula for every test statistic and reference distribution. The test and reference distribution depend on your data and the purpose of your testing (see below).

The test that you use to compare your groups will depend on how many groups you have, the type of data that you have collected, and how reliable your data are. In general, you would use different tests for comparing two groups than you would for comparing three or more.

Our page Surveys and Survey Design explains that there are two types of answer scale, continuous and categorical. Age, for example, is a continuous scale, although it can also be grouped into categories. You may also find it helpful to read our page on Types of Data .

Gender is a category scale.

For a continuous scale, you can use the mean values of the two groups that you are comparing.

For a category scale, you need to use the median values.

Source: Easterby-Smith, Thorpe and Jackson, Management Research 4th Edition

One- or Two-Tailed Test

The other thing that you have to decide is whether you use what is known as a ‘one-tailed’ or ‘two-tailed’ test.

This allows you to compare differences between groups in either one or both directions.

In practice, this boils down to whether your research hypothesis is expressed as ‘x is likely to be more than y’, or ‘x is likely to be different from y’. If you are confident of the direction of the difference (that is, you are sure that the only options are that ‘x is likely to be more than y’ or ‘x and y are the same’), then your test will be one-tailed. If not, it will be two-tailed.

If there is any doubt, it is better to use a two-tailed test.

You should only use a one-tailed test when you are certain about the direction of the difference, or when a difference in the opposite direction would not matter.

The graph under Step 5 shows a two-tailed test.

If you are not very confident about the quality of the data collected, for example because the inputting was done quickly and cheaply, or because the data have not been checked, then you may prefer to use the median, even if the data are continuous, to avoid any problems with outliers. This makes the tests more robust, and the results more reliable.

Our page on correlations suggests that you may also want to plot a scattergraph before undertaking any further analysis. This will also help you to identify any outliers or potential problems with the data.

Calculating the Test Statistic

For each type of test, there is a standard formula for the test statistic. For example, for the t -test, it is:

t = (M1 - M2) / SE(diff)

where:

M1 is the mean of the first group

M2 is the mean of the second group

SE(diff) is the standard error of the difference, which is calculated from the standard deviation and the sample size of each group.

The formula for calculating the standard error of the difference between means is:

SE(diff) = √( sd² / na + sd² / nb )

where:

  • sd² = the square of the standard deviation of the source population (i.e., the variance);
  • na = the size of sample A; and
  • nb = the size of sample B.
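
A sketch of computing the test statistic from group summaries. It uses each sample's own variance estimate in place of a single source-population variance, a common variant of the formula above; all numbers are invented:

```python
import math

# Hypothetical group summaries (liking for ice cream, 1-5 scale).
m1, sd1, na = 3.9, 1.1, 50   # group A (men)
m2, sd2, nb = 4.3, 0.9, 50   # group B (women)

se_diff = math.sqrt(sd1**2 / na + sd2**2 / nb)
t = (m1 - m2) / se_diff
print(f"t = {t:.2f}")  # compare against critical values from t-tables
```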

Step 5. Identify Acceptance and Rejection Regions

The final part of the test is to see if your test statistic is significant—in other words, whether you are going to accept or reject your null hypothesis. You need to consider first what level of significance is required. This tells you the probability that you have achieved your result by chance.

Significance (or p-value) is usually required to be either 5% or 1%, meaning that you are 95% or 99% confident that your result was not achieved by chance.

NOTE: the significance level is sometimes expressed as p < 0.05 or p < 0.01.

For more about significance, you may like to read our page on Significance and Confidence Intervals .

The graph below shows a reference distribution (this one could be either the normal or the t- distribution) with the acceptance and rejection regions marked. It also shows the critical values. µ is the mean. For more about this, you may like to read our page on Statistical Distributions .

Reference distribution showing acceptance and rejection regions, critical values and mean.

The critical values are identified from published statistical tables for your reference distribution, which are available for different levels of significance.

If your test statistic falls within either of the two rejection regions (that is, it is greater than the higher critical value, or less than the lower one), you will reject the null hypothesis. You can therefore accept your alternative hypothesis.
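
Critical values can also be computed from the reference distribution directly rather than read from printed tables. A sketch using the standard normal distribution in SciPy, with an illustrative test statistic:

```python
from scipy.stats import norm

alpha = 0.05
z = -1.99  # illustrative test statistic

# Two-tailed critical values for the standard normal distribution.
lower, upper = norm.ppf(alpha / 2), norm.ppf(1 - alpha / 2)  # about ±1.96

reject_null = z < lower or z > upper
print(f"critical values: ({lower:.2f}, {upper:.2f}); reject H0: {reject_null}")
```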

Step 6. Draw Conclusions and Inferences

The final step is to draw conclusions.

If your test statistic fell within the rejection region, and you have rejected the null hypothesis, you can therefore conclude that there is a gender difference in liking for ice cream, using the example above.

Types of Error

There are four possible outcomes from statistical testing:

  • The groups are different, and you conclude that they are different (correct result).
  • The groups are different, but you conclude that they are not (Type II error).
  • The groups are the same, but you conclude that they are different (Type I error).
  • The groups are the same, and you conclude that they are the same (correct result).

Type I errors are generally considered more important than Type II, because they have the potential to change the status quo.

For example, if you wrongly conclude that a new medical treatment is effective, doctors are likely to move to providing that treatment. Patients may receive the treatment instead of an alternative that could have fewer side effects, and pharmaceutical companies may stop looking for an alternative treatment.

There are statistical software packages available that will carry out all these tests for you. However, if you have never studied statistics, and you’re not very confident about what you’re doing, you are probably best off discussing it with a statistician or consulting a detailed statistical textbook.

Poorly executed statistical analysis can invalidate very good research.  It is much better to find someone to help you. However, this page will help you to understand your friendly statistician!

Statistics LibreTexts

8.1: Steps in Hypothesis Testing

CHAPTER OBJECTIVES

By the end of this chapter, the student should be able to:

  • Differentiate between Type I and Type II Errors
  • Describe hypothesis testing in general and in practice
  • Conduct and interpret hypothesis tests for a single population mean, population standard deviation known.
  • Conduct and interpret hypothesis tests for a single population mean, population standard deviation unknown.
  • Conduct and interpret hypothesis tests for a single population proportion

One job of a statistician is to make statistical inferences about populations based on samples taken from the population. Confidence intervals are one way to estimate a population parameter. Another way to make a statistical inference is to make a decision about a parameter. For instance, a car dealer advertises that its new small truck gets 35 miles per gallon, on average. A tutoring service claims that its method of tutoring helps 90% of its students get an A or a B. A company says that women managers in their company earn an average of $60,000 per year.


A statistician will make a decision about these claims. This process is called "hypothesis testing." A hypothesis test involves collecting data from a sample and evaluating the data. Then, the statistician makes a decision as to whether or not there is sufficient evidence, based upon analysis of the data, to reject the null hypothesis. In this chapter, you will conduct hypothesis tests on single means and single proportions. You will also learn about the errors associated with these tests.

Hypothesis testing consists of two contradictory hypotheses or statements, a decision based on the data, and a conclusion. To perform a hypothesis test, a statistician will:

  • Set up two contradictory hypotheses.
  • Collect sample data (in homework problems, the data or summary statistics will be given to you).
  • Determine the correct distribution to perform the hypothesis test.
  • Analyze sample data by performing the calculations that ultimately will allow you to reject or decline to reject the null hypothesis.
  • Make a decision and write a meaningful conclusion.

To do the hypothesis test homework problems for this chapter and later chapters, make copies of the appropriate special solution sheets. See Appendix E .

Which distribution you use for the test depends on:

  • The desired confidence level.
  • Information that is known about the distribution (for example, known standard deviation).
  • The sample and its size.
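As a minimal sketch of how those inputs change the mechanics, consider the truck advertised at 35 mpg above; the sample values and the "known" σ are invented:

```python
import numpy as np
from scipy import stats

# Invented fuel-economy readings for the truck advertised at 35 mpg.
# H0: mu = 35; Ha: mu < 35 (the dealer overstates the mileage).
sample = np.array([34.2, 33.8, 35.1, 32.9, 34.0, 33.5, 34.7, 33.1])

# Population standard deviation unknown -> one-sample t-test.
t_stat, p_t = stats.ttest_1samp(sample, popmean=35, alternative='less')
print(f"t = {t_stat:.2f}, p = {p_t:.3f}")

# Population standard deviation known (assume sigma = 1.2) -> z-test.
sigma = 1.2
z = (sample.mean() - 35) / (sigma / np.sqrt(len(sample)))
p_z = stats.norm.cdf(z)  # lower-tail p-value
print(f"z = {z:.2f}, p = {p_z:.3f}")
```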

Hypothesis Testing – A Deep Dive into Hypothesis Testing, The Backbone of Statistical Inference

  • September 21, 2023

Explore the intricacies of hypothesis testing, a cornerstone of statistical analysis. Dive into methods, interpretations, and applications for making data-driven decisions.


In this blog post we will learn:

  • What is Hypothesis Testing?
  • Steps in Hypothesis Testing
      • 2.1. Set up Hypotheses: Null and Alternative
      • 2.2. Choose a Significance Level (α)
      • 2.3. Calculate a test statistic and P-Value
      • 2.4. Make a Decision
  • Example: Testing a new drug
  • Example in Python

1. What is Hypothesis Testing?

In simple terms, hypothesis testing is a method used to make decisions or inferences about population parameters based on sample data. Imagine being handed a die and asked if it’s biased. By rolling it a few times and analyzing the outcomes, you’d be engaging in the essence of hypothesis testing.

Think of hypothesis testing as the scientific method of the statistics world. Suppose you hear claims like “This new drug works wonders!” or “Our new website design boosts sales.” How do you know if these statements hold water? Enter hypothesis testing.
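To make the die illustration concrete, here is a minimal sketch of exactly that check, using a chi-square goodness-of-fit test from scipy; the roll counts are invented:

```python
from scipy import stats

# Invented outcome counts from 60 rolls of a die we suspect is biased.
observed = [5, 8, 9, 8, 10, 20]   # face 6 turns up suspiciously often
expected = [10] * 6               # a fair die: 60 rolls / 6 faces

chi2, p_value = stats.chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {chi2:.1f}, p = {p_value:.3f}")  # chi2 = 13.4, p ~ 0.02
# A p-value this small is evidence against "the die is fair".
```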

2. Steps in Hypothesis Testing

  • Set up Hypotheses : Begin with a null hypothesis (H0) and an alternative hypothesis (H1).
  • Choose a Significance Level (α) : Typically 0.05, this is the probability of rejecting the null hypothesis when it’s actually true. Think of it as the chance of accusing an innocent person.
  • Calculate Test statistic and P-Value : Gather evidence (data) and calculate a test statistic.
  • p-value : This is the probability of observing the data, given that the null hypothesis is true. A small p-value (typically ≤ 0.05) suggests the data is inconsistent with the null hypothesis.
  • Decision Rule : If the p-value is less than or equal to α, you reject the null hypothesis in favor of the alternative.

2.1. Set up Hypotheses: Null and Alternative

Before diving into testing, we must formulate hypotheses. The null hypothesis (H0) represents the default assumption, while the alternative hypothesis (H1) challenges it.

For instance, in drug testing: H0: "The new drug is no better than the existing one." H1: "The new drug is superior."

2.2. Choose a Significance Level (α)

You collect and analyze data to test H0 against H1. Based on your analysis, you decide either to reject the null hypothesis in favor of the alternative, or to fail to reject it.

The significance level, often denoted by $α$, represents the probability of rejecting the null hypothesis when it is actually true.

In other words, it’s the risk you’re willing to take of making a Type I error (false positive).

Type I Error (False Positive) :

  • Symbolized by the Greek letter alpha (α).
  • Occurs when you incorrectly reject a true null hypothesis . In other words, you conclude that there is an effect or difference when, in reality, there isn’t.
  • The probability of making a Type I error equals the significance level of the test. Commonly, tests are conducted at the 0.05 significance level, which means there’s a 5% chance of making a Type I error.
  • Commonly used significance levels are 0.01, 0.05, and 0.10, but the choice depends on the context of the study and the level of risk one is willing to accept.

Example : If a drug is not effective (truth), but a clinical trial incorrectly concludes that it is effective (based on the sample data), then a Type I error has occurred.

Type II Error (False Negative) :

  • Symbolized by the Greek letter beta (β).
  • Occurs when you fail to reject a false null hypothesis. This means you conclude there is no effect or difference when, in reality, there is.
  • The probability of making a Type II error is denoted by β. The power of a test (1 – β) represents the probability of correctly rejecting a false null hypothesis.

Example : If a drug is effective (truth), but a clinical trial incorrectly concludes that it is not effective (based on the sample data), then a Type II error has occurred.

Balancing the Errors :


In practice, there’s a trade-off between Type I and Type II errors. Reducing the risk of one typically increases the risk of the other. For example, if you want to decrease the probability of a Type I error (by setting a lower significance level), you might increase the probability of a Type II error unless you compensate by collecting more data or making other adjustments.

It’s essential to understand the consequences of both types of errors in any given context. In some situations, a Type I error might be more severe, while in others, a Type II error might be of greater concern. This understanding guides researchers in designing their experiments and choosing appropriate significance levels.
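One way to see the trade-off numerically is a power calculation. A sketch using statsmodels, assuming an independent-samples t-test, a medium effect size (0.5), and 50 subjects per group:

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Holding effect size and sample size fixed, a stricter alpha
# (fewer Type I errors) lowers power, i.e. raises beta (Type II errors).
for alpha in (0.10, 0.05, 0.01):
    power = analysis.solve_power(effect_size=0.5, nobs1=50, alpha=alpha)
    print(f"alpha = {alpha:.2f} -> power = {power:.2f}, beta = {1 - power:.2f}")
```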

2.3. Calculate a test statistic and P-Value

Test statistic : A test statistic is a single number that helps us understand how far our sample data is from what we’d expect under a null hypothesis (a basic assumption we’re trying to test against). Generally, the larger the test statistic, the more evidence we have against our null hypothesis. It helps us decide whether the differences we observe in our data are due to random chance or if there’s an actual effect.

P-value : The P-value tells us how likely we would be to get our observed results (or something more extreme) if the null hypothesis were true. It’s a value between 0 and 1.

  • A smaller P-value (typically below 0.05) means that the observation is rare under the null hypothesis, so we might reject the null hypothesis.
  • A larger P-value suggests that what we observed could easily happen by random chance, so we might not reject the null hypothesis.
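The two numbers are linked through the test statistic’s sampling distribution. A one-line sketch, assuming for simplicity a two-sided z-test:

```python
from scipy import stats

z = 2.3                               # a hypothetical two-sided test statistic
p_value = 2 * stats.norm.sf(abs(z))   # area in both tails of the standard normal
print(f"p = {p_value:.4f}")           # ~0.021: rare under H0, so reject at alpha = 0.05
```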

2.4. Make a Decision

Relationship between $α$ and P-Value

When conducting a hypothesis test:

We first choose a significance level, $α$, before looking at the data.

We then calculate the p-value from our sample data and the test statistic.

Finally, we compare the p-value to our chosen $α$:

  • If p-value ≤ $α$ : We reject the null hypothesis in favor of the alternative hypothesis. The result is said to be statistically significant.
  • If p-value > $α$ : We fail to reject the null hypothesis. There isn’t enough statistical evidence to support the alternative hypothesis.

3. Example: Testing a new drug

Imagine we are investigating whether a new drug is effective at treating headaches faster than a placebo.

Setting Up the Experiment : You gather 100 people who suffer from headaches. Half of them (50 people) are given the new drug (the ‘Drug Group’), and the other half are given a sugar pill containing no medication (the ‘Placebo Group’).

  • Set up Hypotheses : Before starting, you make a prediction:
  • Null Hypothesis (H0): The new drug has no effect. Any difference in healing time between the two groups is just due to random chance.
  • Alternative Hypothesis (H1): The new drug does have an effect. The difference in healing time between the two groups is significant and not just by chance.

Calculate Test statistic and P-Value : After the experiment, you analyze the data. The “test statistic” is a number that helps you understand the difference between the two groups in terms of standard units.

For instance, let’s say:

  • The average healing time in the Drug Group is 2 hours.
  • The average healing time in the Placebo Group is 3 hours.

The test statistic helps you understand how significant this 1-hour difference is. If the groups are large and the spread of healing times in each group is small, then this difference might be significant. But if there’s a huge variation in healing times, the 1-hour difference might not be so special.

Imagine the P-value as answering this question: “If the new drug had NO real effect, what’s the probability that I’d see a difference as extreme (or more extreme) as the one I found, just by random chance?”

For instance:

  • P-value of 0.01 means there’s a 1% chance that the observed difference (or a more extreme difference) would occur if the drug had no effect. That’s pretty rare, so we might consider the drug effective.
  • P-value of 0.5 means there’s a 50% chance you’d see this difference just by chance. That’s pretty high, so we might not be convinced the drug is doing much.
  • If the P-value is less than $α$ (0.05): the results are “statistically significant,” and we might reject the null hypothesis, believing the new drug has an effect.
  • If the P-value is greater than $α$ (0.05): the results are not statistically significant, and we don’t reject the null hypothesis, remaining unsure if the drug has a genuine effect.

4. Example in Python

For simplicity, let’s say we’re using a t-test (common for comparing means). Let’s dive into Python:
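A minimal sketch of such a t-test, using scipy and synthetic healing times generated to match the example’s group means:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic healing times (hours): 50 people per group, with the Drug
# Group averaging ~2 hours and the Placebo Group averaging ~3 hours.
drug_group = rng.normal(loc=2.0, scale=0.8, size=50)
placebo_group = rng.normal(loc=3.0, scale=0.8, size=50)

# Independent two-sample t-test, comparing the two group means.
t_stat, p_value = stats.ttest_ind(drug_group, placebo_group)

alpha = 0.05
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < alpha:
    print("Statistically significant: the drug seems to have an effect!")
else:
    print("Not significant: looks like the drug isn't as miraculous as we thought.")
```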

Making a Decision : If the p-value < 0.05, we’d say, “The results are statistically significant! The drug seems to have an effect!” If not, we’d say, “Looks like the drug isn’t as miraculous as we thought.”

5. Conclusion

Hypothesis testing is an indispensable tool in data science, allowing us to make data-driven decisions with confidence. By understanding its principles, conducting tests properly, and considering real-world applications, you can harness the power of hypothesis testing to unlock valuable insights from your data.



Value Hypothesis & Growth Hypothesis: lean startup validation

Posted on September 16, 2021

You’ve come up with a fantastic idea for a startup, but you’re not sure whether it’s viable. What do you do next? It’s essential to get your ideas right before you start developing them: 95% of new products fail in their first year of launch. To put it another way, only one in twenty product ideas succeeds. In this article, we’ll look at why it’s so important to validate your startup idea before you spend a lot of time and money developing it. That’s where the Lean Startup Validation process comes in, alongside the value hypothesis and the growth hypothesis. We’ll also look at the questions you need to ask.

Table of contents

  • The lean startup validation methodology
  • The benefits of validating your startup idea
  • The value hypothesis
  • The growth hypothesis
  • Recommendations and questions for creating and running a good hypothesis
  • In conclusion – take the time to validate your product


What does it mean to validate a lean startup?

Validating your lean startup idea may sound like a complicated process, but it’s a lot simpler than you may think. It may be the case that you were already planning on carrying out some of the work.

Essentially, validating your startup means checking your idea to see if it solves a problem that your prospective customers have. You can do this by creating hypotheses and then carrying out research to see if these hypotheses are true or false.

The best startups have always been about finding a gap in the market and offering a product or service that solves the problem. For example, take Airbnb . Before Airbnb launched, people only had the option of staying in hotels. Airbnb opened up the hospitality industry, offering cheaper accommodation to people who could not afford to stay in expensive hotels.


“Don’t be in a rush to get big. Be in a rush to have a great product” – Eric Ries

Validation is a crucial part of the lean startup methodology, which was devised by entrepreneur Eric Ries. The lean startup methodology is all about optimizing the amount of time that is needed to ensure a product or service is viable. 

Lean Startup Validation is a critical part of the lean startup process as it helps make sure that an idea will be successful before time is spent developing the final product.

As an example of a failed idea where more validation could have helped, take Google Glass . It sounded like a good idea on paper, but the technology failed spectacularly. Customer research would have shown that $1,500 was too much money, that people were worried about health and safety, and most importantly… there was no apparent benefit to the product.


The key benefit of validating your lean startup idea is to make sure that the idea you have is a viable one before you start using resources to build and promote it. 

There are other less obvious benefits too:

  • It can help you fine-tune your idea. So, it may be the case that you wanted your idea to go in a particular direction, but user research shows that pivoting may be the best thing to do
  • It can help you get funding. Investors may be more likely to invest in your startup idea if you have evidence that your idea is a viable one

The value hypothesis and the growth hypothesis are two ways to validate your idea.

“To grow a successful business, validate your idea with customers” – Chad Boyda

In Eric Ries’ book ‘The Lean Startup’, he identifies two different types of hypotheses that entrepreneurs can use to validate their startup idea – the growth hypothesis and the value hypothesis.

Let’s look at the two different ideas, how they compare, and how you can use them to see if your startup idea could work.

The value hypothesis

The value hypothesis tests whether your product or service provides customers with enough value and most importantly, whether they are prepared to pay for this value.

For example, let’s say that you want to develop a mobile app to help dog owners find people to help walk their dogs while they are at work. Before you start spending serious time and money developing the app, you’ll want to see if it is something of interest to your target audience. 

Your value hypothesis could say, “we believe that 60% of dog owners aged between 30 and 40 would be willing to pay upwards of €10 a month for this service.”

You then find dog owners in this age range and ask them the question. You’re pleased to see that 75% say that they would be willing to pay this amount! Your hypothesis has worked! This means that you should focus your app and your advertising on this target audience. 

If the data comes back and says your prospective target audience isn’t willing to pay, then it means you have to rethink and reframe your app before running another hypothesis. For example, you may want to focus on another demographic, or look at reducing the price of the subscription.
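If you want more than eyeballed percentages, a one-sided binomial test can check whether the survey result genuinely clears the 60% threshold. A minimal sketch, assuming 100 owners were surveyed and 75 said yes:

```python
from scipy import stats

n_surveyed = 100   # assumed survey size
n_yes = 75         # owners willing to pay 10 euros/month

# H0: the true proportion is 60% (or less); Ha: it is greater than 60%.
result = stats.binomtest(n_yes, n_surveyed, p=0.60, alternative='greater')
print(f"p = {result.pvalue:.4f}")  # a small p-value supports the value hypothesis
```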

Shoe retailer Zappos used a value hypothesis when starting out. Founder Nick Swinmurn went to local shoe stores, taking photos of the shoes and posting them on the Zappos website. Then, if customers bought the shoes, he’d buy them from the store and send them out to them. This allowed him to see if there was interest in his website, without having to spend lots of money on stock.

The growth hypothesis

The growth hypothesis tests how your customers will find your product or service and shows how your potential product could grow over the years.

Let’s go back to the dog-walking app we talked about earlier. You think that 80% of app downloads will come from word-of-mouth recommendations.

You create a minimum viable product (MVP for short) – a basic version of your app that may not contain all of the features just yet. You then upload it to the app stores and wait for people to start downloading it. When you have a baseline of customers, you send them an email asking how they heard of your app.

When the feedback comes back, it shows that only 30% of downloads have come from word-of-mouth recommendations. This means that your growth hypothesis has not been successful in this scenario. 

Does this mean that your idea is a bad one? Not necessarily. It just means that you may have to look at other ways of promoting your app. If you are relying on word-of-mouth recommendations to advertise it, then it could potentially fail.

Dropbox used growth hypotheses to its advantage when creating its software. The file-storage company constantly tweaked its website, running A/B tests to see which features and changes were most popular with customers, using them in the final product.
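An A/B test of this kind usually reduces to comparing two conversion rates. A minimal sketch with invented numbers, using a two-proportion z-test from statsmodels:

```python
from statsmodels.stats.proportion import proportions_ztest

# Invented A/B test data: sign-ups for two landing-page variants.
conversions = [120, 150]   # variant A, variant B
visitors = [2000, 2000]

z_stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests the variants really do convert at different rates.
```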

Recommendations and questions for creating and running a good hypothesis

Like any good science experiment, there are things that you need to bear in mind when running your hypotheses. Here are our recommendations:

  • You may be wondering which type of hypothesis you should carry out first – a growth hypothesis or a value hypothesis. Eric Ries recommends carrying out a value hypothesis first, as it makes sense to see if there is interest before seeing how many people are interested. However, the precise order may depend on the type of product or service you want to sell;
  • You will probably need to run multiple hypotheses to validate your product or service. If you do this, be sure to only test one hypothesis at a time. If you end up testing multiple ones in one go, you may not be sure which hypothesis has had which result;
  • Test your most critical assumption first – this is one that you are most worried about, and could affect your idea the most. It may be that solving this issue makes your product or service a viable one;
  • Specific – is your hypothesis simple? If it’s jumbled or confusing, you’re not going to get the best results from it. If you’re struggling to put together a clear hypothesis, it’s probably a sign to go back to the drawing board.
  • Measurable – can your hypothesis be measured? You’ll want to get tangible results so you can check if the changes you have made have worked.
  • Achievable – is your hypothesis attainable? If not, you may want to break it down into smaller goals.
  • Relevant – will your hypothesis prove the validity of your product or service? 
  • Timely – can your hypothesis be measured in a set amount of time? You don’t want a goal that will take years to monitor and measure!
  • Be as critical as possible. If you have created an idea, it is only natural that you want it to succeed. However, being objective rather than subjective will help your startup most in the long term;
  • When you are carrying out customer research, use as vast a pool of people as time and money will allow. This will result in more accurate data. The great news is that you can use social media and other networking sites to reach out to potential customers and ask them their opinions;
  • When carrying out customer research, be sure to ask the questions that matter. Bear in mind that liking your product or service isn’t the same as buying it. If a customer is enthusiastic about your idea, be sure to ask follow-on questions about why they like it, or if they would be willing to spend money on it. Otherwise, your data may end up being useless;
  • While it is essential to have as many relevant hypotheses as possible, be careful not to have too many.  While it may sound like a good idea to try out lots of different ideas, it can actually be counter-productive. As Eric Ries said:

“Don’t bog new teams down with too much information about falsifiable hypotheses. Because if we load our teams up with too much theory, they can easily get stuck in analysis paralysis. I’ve worked with teams that have come up with hundreds of leap-of-faith assumptions. They listed so many assumptions that were so detailed and complicated that they couldn’t decide what to do next. They were paralyzed by the sheer quantity of the list.”

In conclusion – take the time to validate your product

“We must learn what customers really want, not what they say they want or what we think they should want.” – Eric Ries

According to CB Insights , the number one reason why startups fail is that there is no demand for the product. Many entrepreneurs have gone ahead and launched a product that they think people want, only to find that there is no market at all.

Lean Startup Validation is essential in helping your business idea succeed. While it may seem like extra work, the effort you put in at the beginning will be a critical advantage later down the line.

Still not 100% convinced? Take HubSpot . Before HubSpot launched its sales and marketing services, it started off as a blog. Co-founders Dharmesh Shah and Brian Halligan used this blog to validate their ideas and see what their visitors wanted. This helped them confirm that their concept was on the right lines and meant they could launch a product that people actually wanted to use.

Validating a startup idea before development is crucial because it ensures that the idea is viable and addresses a real problem that customers have. With a high failure rate of new products, validation helps avoid wasting time and resources on ideas that might not succeed.

The value hypothesis tests whether customers find enough value in a product or service to pay for it. The growth hypothesis examines how customers will discover and adopt the product over time. Both hypotheses are essential for validating the viability of a startup idea.

Eric Ries recommends starting with a value hypothesis before a growth hypothesis. Validating whether the idea provides value is crucial before considering how to promote and grow it.

When creating and running a hypothesis, consider the following:

1. Focus on testing one hypothesis at a time.
2. Test your most critical assumptions first.
3. Ensure your hypothesis follows SMART goals (Specific, Measurable, Achievable, Relevant, Timely).
4. Use a wide pool of potential customers for accurate data.
5. Ask relevant and probing questions during customer research.
6. Avoid overwhelming your team with excessive hypotheses.

Validating your product idea before development helps you avoid the top reason for startup failure—lack of demand for the product. By confirming that there is a market need and interest in your idea, you increase the chances of building a successful product.

Lean Startup Validation helps entrepreneurs avoid the mistake of launching a product that doesn’t address a genuine need. By gathering evidence and feedback early, you can make informed decisions about pivoting or refining your idea before investing significant time and resources.

Suppose you’re developing a mobile app for dog owners to find dog-walking services. Your value hypothesis could be: “We believe that 60% of dog owners aged between 30 and 40 would be willing to pay upwards of €10 a month for this service.” You then validate this hypothesis by surveying dog owners in that age range and analyzing their responses.

The growth hypothesis examines how customers will discover and adopt your product. If, for example, you expect 80% of app downloads to come from word-of-mouth recommendations, but feedback shows only 30% are from this source, you may need to reevaluate your promotion strategy.

Yes, Lean Startup Validation can be applied to startups across various industries. Whether you’re offering a product or service, the process of testing hypotheses and gathering evidence applies universally to ensure the viability of your idea.

To gather accurate data, focus on reaching a diverse pool of potential customers through various channels, including social media and networking sites. Ask relevant questions about their preferences, willingness to pay, and potential pain points related to your idea.

Being critical and objective during validation helps you avoid confirmation bias and wishful thinking. Objectivity allows you to assess whether your idea truly addresses a problem and resonates with customers, ensuring that your startup’s foundation is built on solid evidence.


Hypothesis-Driven Development (Practitioner’s Guide)

Table of Contents

  • What is hypothesis-driven development (HDD)?
  • How do you know if it’s working?
  • How do you apply HDD to ‘continuous design’?
  • How do you apply HDD to application development?
  • How do you apply HDD to continuous delivery?
  • How does HDD relate to agile, design thinking, Lean Startup, etc.?

Like agile, hypothesis-driven development (HDD) is more a point of view with various associated practices than it is a single, particular practice or process. That said, my goal here is for you to leave with a solid understanding of how to do HDD and a specific set of steps that work for you to get started.

After reading this guide and trying out the related practice you will be able to:

  • Diagnose when and where hypothesis-driven development (HDD) makes sense for your team
  • Apply techniques from HDD to your work in small, success-based batches across your product pipeline
  • Frame and enhance your existing practices (where applicable) with HDD

Does your product program feel like a Netflix show you’d binge watch? Is your team excited to see what happens when you release stuff? If so, congratulations- you’re already doing it and please hit me up on Twitter so we can talk about it! If not, don’t worry- that’s pretty normal, but HDD offers some awesome opportunities to work better.


Building on the scientific method, HDD is a take on how to integrate test-driven approaches across your product development activities- everything from creating a user persona to figuring out which integration tests to automate. Yeah- wow, right?! It is a great way to energize and focus your practice of agile and your work in general.

By product pipeline, I mean the set of processes you and your team undertake to go from a certain set of product priorities to released product. If you’re doing agile, then iteration (sprints) is a big part of making these work.


It wouldn’t be very hypothesis-driven if I didn’t have an answer to that! In the diagram above, you’ll find metrics for each area. For your application of HDD to what we’ll call continuous design, your metric to improve is the ratio of all your release content to the release content that meets or exceeds your target metrics on user behavior. For example, if you developed a new, additional way for users to search for products and set the success threshold at it being used in >10% of user sessions, did that feature succeed or fail by that measure? For application development, the metric you’re working to improve is basically velocity, meaning story points or, generally, release content per sprint. For continuous delivery, it’s how often you can release. Hypothesis testing is, of course, central to HDD and to generally doing agile with any kind of focus on valuable outcomes, and I think it shares the metric on successful release content with continuous design.


The first component is team cost, which you would sum up over whatever period you’re measuring. This includes ‘c$’, which is total compensation as well as loading (benefits, equipment, etc.), and ‘g’, which is the cost of the gear you use – application infrastructure like AWS, GCP, etc., along with any other infrastructure you buy or share with other teams. For example, using a backend-as-a-service like Heroku or Firebase might push up your value for ‘g’ while deferring the cost of building your own app infrastructure.

The next component is release content, ‘f_e’. If you’re already estimating story points somehow, you can use those. If you’re a NoEstimates crew, and, hey, I get it, then you’d need to do some kind of rough proportional sizing of your release content for the period in question. The next term, ‘r_f’, is optional, but this is an estimate of the time you’re having to invest in rework, bug fixes, manual testing, manual deployment, and anything else that doesn’t go as planned.

The last term, ‘s_d’, is one of the most critical: an estimate of the proportion of your release content that’s successful relative to the success metrics you set for it. For example, if you developed a new, additional way for users to search for products and set the success threshold at it being used in >10% of user sessions, did that feature succeed or fail by that measure? Naturally, if you’re not doing this it will require some work and changing your habits, but it’s hard to deliver value in agile if you don’t know what that means, defined against actual user behavior.
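Putting the terms together, one plausible reading of ‘F’ is total cost divided by successful release content, with rework reducing effective content. The sketch below is that reading, not a formula quoted from the guide, and the numbers are invented:

```python
def cost_per_successful_point(team_comp, gear_cost, release_points,
                              rework_points=0.0, success_rate=1.0):
    """One plausible reading of 'F': (c$ + g) divided by release content
    f_e that survives rework r_f and meets its success metrics s_d."""
    effective_points = (release_points - rework_points) * success_rate
    return (team_comp + gear_cost) / effective_points

# Invented monthly figures for illustration only.
f = cost_per_successful_point(team_comp=60000, gear_cost=3000,
                              release_points=40, rework_points=4,
                              success_rate=0.5)
print(f"F = ${f:,.0f} per successful story point")  # F = $3,500
```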

Here’s how some of the key terms lay out in the product pipeline, and the worked example (linked below as a Google Sheets calculator) shows how a team might tabulate this for a given month.

Is the punchline that you should be shooting for a cost of $1,742 per story point? No. First, this is for a single month and would only serve the purpose of the team setting a baseline for itself. Like any agile practice, the interesting part of this is seeing how your value for ‘F’ changes from period to period, using your team retrospectives to talk about how to improve it. Second, this is just a single team and the economic value (ex: revenue) related to a given story point will vary enormously from product to product. There’s a Google Sheets-based calculator that you can use here: Innovation Accounting with ‘F’ .

Like any metric, ‘F’ only matters if you find it workable to get in the habit of measuring it and paying attention to it. As a team, say, evaluates its progress on OKR (objectives and key results), ‘F’ offers a view on the health of the team’s collaboration together in the context of their product and organization. For example, if the team’s accruing technical debt, that will show up as a steady increase in ‘F’. If a team’s invested in test or deploy automation or started testing their release content with users more specifically, that should show up as a steady lowering of ‘F’.

In the next few sections, we’ll step through how to apply HDD to your product pipeline by area, starting with continuous design.


It’s a mistake to ask your designer to explain every little thing they’re doing, but it’s also a mistake to decouple their work from your product’s economics. On the one hand, no one likes someone looking over their shoulder, and you may not have the professional training to reasonably understand what they’re doing hour to hour, even day to day. On the other hand, it’s a mistake to charter a designer’s work without a testable definition of success and without collaborating around that.

Managing this is hard since most of us aren’t designers and because it takes a lot of work and attention to detail to work out what you really want to achieve with a given design.

Beginning with the End in Mind

The difference between art and design is intention- in design we always have one and, in practice, it should be testable. For this, I like the practice of customer experience (CX) mapping. CX mapping is a process for focusing the work of a team on outcomes–day to day, week to week, and quarter to quarter. It’s amenable to both qualitative and quantitative evidence but it is strictly focused on observed customer behaviors, as opposed to less direct, more lagging observations.

CX mapping works to define the CX in testable terms that are amenable to both qualitative and quantitative evidence. Specifically for each phase of a potential customer getting to behaviors that accrue to your product/market fit (customer funnel), it answers the following questions:

1. What do we mean by this phase of the customer funnel? 

What do we mean by, say, ‘Acquisition’ for this product or individual feature? How would we know it if we see it?

2. How do we observe this (in quantitative terms)? What’s the DV?

This comes next after we answer the question “What does this mean?”. The goal is to come up with a focal single metric (maybe two), a ‘dependent variable’ (DV) that tells you how a customer has behaved in a given phase of the CX (ex: Acquisition, Onboarding, etc.).

3. What is the cut off for a transition?

Not super exciting, but extremely important in actual practice, the idea here is to establish the cutoff for deciding whether a user has progressed from one phase to the next or abandoned/churned.

4. What is our ‘Line in the Sand’ threshold?

Popularized by the book ‘Lean Analytics’, the idea here is that good metrics are ones that change a team’s behavior (decisions) and for that you need to establish a threshold in advance for decision making.

5. How might we test this? What new IVs are worth testing?

The ‘independent variables’ (IV’s) you might test are basically just ideas for improving the DV (#2 above).

6. What’s tricky? What do we need to watch out for?

Getting this working will take some tuning, but it’s infinitely doable and there aren’t a lot of good substitutes for focusing on what’s a win and what’s a waste of time.

The image below shows a working CX map for a company (HVAC in a Hurry) that services commercial heating, ventilation, and air-conditioning systems. And this particular CX map is for the specific ‘job’/task/problem of how their field technicians get the replacement parts they need.


For more on CX mapping, you can also check out its page – Tutorial: Customer Experience (CX) Mapping .
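Because the CX map is meant to be reviewed alongside analytics, it can help to capture each phase as plain data. A minimal sketch for the HVAC example, with field names tracking questions 1–6 above; every value is invented:

```python
# One phase of a CX map, structured so a team can review it against analytics.
acquisition_phase = {
    "phase": "Acquisition",
    "definition": "A field technician reaches the parts-lookup page",    # Q1
    "dependent_variable": "unique technicians visiting per week",        # Q2
    "transition_cutoff": "creates an account within 7 days of visiting", # Q3
    "line_in_the_sand": 0.20,        # Q4: >=20% of visitors must transition
    "independent_variables": [       # Q5: ideas for improving the DV
        "QR code on paper invoices",
        "SMS link sent after a service call",
    ],
    "watch_out_for": "office managers browsing on technicians' behalf",  # Q6
}
```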

Unpacking Continuous Design for HDD

For unpacking the work of design/Continuous Design with HDD , I like to use the ‘double diamond’ framing of ‘right problem’ vs. ‘right solution’, which I first learned about in Donald Norman’s seminal book, ‘The Design of Everyday Things’.

I’ve organized the balance of this section around three big questions:

How do you test that you’ve found the ‘right problem’?

How do you test that you’ve found demand and have the ‘right solution’?

How do you test that you’ve designed the ‘right solution’?


Let’s say it’s an internal project- a ‘digital transformation’ for an HVAC (heating, ventilation, and air conditioning) service company. The digital team thinks it would be cool to organize the documentation for all the different HVAC equipment the company’s technicians service. But, would it be?

The only way to find out is to go out and talk to these technicians and find out! First, you need to test whether you’re talking to someone who is one of these technicians. For example, you might have a screening question like: ‘How many HVAC’s did you repair last week?’. If it’s <10,  you might instead be talking to a handyman or a manager (or someone who’s not an HVAC tech at all).

Second, you need to ask non-leading questions. The evidentiary value of a specific answer to a general question is much higher than that of a specific answer to a specific question. Also, some questions are just leading. For example, if you ask such a subject ‘Would you use a documentation system if we built it?’, they’re going to say yes, just to avoid the awkwardness and sales pitch they expect if they say no.

How do you draft personas? Much more renowned designers than myself (Donald Norman among them) disagree with me about this, but personally I like to draft my personas while I’m creating my interview guide and before I do my first set of interviews. Whether you draft or interview first is of secondary importance if you’re doing HDD – if you’re not iteratively interviewing and revising your material based on what you’ve found, it’s not going to be very functional anyway.

Really, the persona (and the jobs-to-be-done) is a means to an end- it should be answering some facet of the question ‘Who is our customer, and what’s important to them?’. It’s iterative, with a process that looks something like this:


How do you draft jobs-to-be-done? Personally- I like to work these in a similar fashion- draft, interview, revise, and then repeat, repeat, repeat.

You’ll use the same interview guide and subjects for these. The template is the same as the personas, but I maintain a separate (though related) tutorial for these–

  • A guide on creating Jobs-to-be-Done (JTBD)
  • A template for drafting jobs-to-be-done (JTBD)

How do you interview subjects? And, action! The #1 place I see teams struggle is at the beginning and it’s with the paradox that to get to a big market you need to nail a series of small markets. Sure, they might have heard something about segmentation in a marketing class, but here you need to apply that from the very beginning.

The fix is to create a screener for each persona. This is a factual question whose job is specifically and only to determine whether a given subject does or does not map to your target persona. In the HVAC in a Hurry technician persona (see above), you might have a screening question like: ‘How many HVAC’s did you repair last week?’. If it’s <10,  you might instead be talking to a handyman or a manager (or someone who’s not an HVAC tech at all).
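A screener is simple enough to express as a predicate. A toy sketch for this persona, with the threshold taken from the example above:

```python
def passes_screener(repairs_last_week: int) -> bool:
    """Factual screener for the HVAC-technician persona: fewer than 10
    repairs suggests a handyman or a manager, not a working technician."""
    return repairs_last_week >= 10

print(passes_screener(12))  # True  -> proceed with the interview
print(passes_screener(3))   # False -> screen the subject out
```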

And this is the point where (if I’ve made them comfortable enough to be candid with me) teams will tell me ‘But we want to go big – be the next Facebook.’ And then we talk about how just about all those success stories, where there’s a product that has for all intents and purposes a universal user base, started out by killing it in small, specific segments and learning and growing from there.

Sorry for all that, reader, but I find all this so frequently at this point and it’s so crucial to what I think is a healthy practice of HDD it seemed necessary.

The key with the interview guide is to start with general questions where you’re testing for a specific answer and then progressively get into more specific questions. Here are some resources–

  • An example interview guide related to the previous tutorials
  • A general take on these interviews in the context of a larger customer discovery/design research program
  • A template for drafting an interview guide

To recap, what’s a ‘Right Problem’ hypothesis? The Right Problem (persona and PS/JTBD) hypothesis is the most fundamental, but the hardest to pin down. You should know what kind of shoes your customer wears and when and why they use your product. You should be able to apply factual screeners to identify subjects that map to your persona or personas.

You should know what people who look like/behave like your customer who don’t use your product are doing instead, particularly if you’re in an industry undergoing change. You should be analyzing your quantitative data with strong, specific, emphatic hypotheses.

If you make software for HVAC (heating, ventilation and air conditioning) technicians, you should have a decent idea of what you’re likely to hear if you ask such a person a question like ‘What are the top 5 hardest things about finishing an HVAC repair?’

In summary, HDD here looks something like this:


01 IDEA : The working idea is that you know your customer and you’re solving a problem/doing a job (whatever term feels like it fits for you) that is important to them. If this isn’t the case, everything else you’re going to do isn’t going to matter.

Also, you know the top alternatives, which may or may not be what you see as your direct competitors. This is important as an input into focused testing demand to see if you have the Right Solution.

02 HYPOTHESIS : If you ask non-leading questions (like ‘What are the top 5 hardest things about finishing an HVAC repair?’), then you should generally hear relatively similar responses.

03 EXPERIMENTAL DESIGN : You’ll want an Interview Guide and, critically, a screener. This is a factual question you can use to make sure any given subject maps to your persona. With the HVAC repair example, this would be something like ‘How many HVAC repairs have you done in the last week?’ where you’re expecting an answer >5. This is important because if your screener isn’t tight enough, your interview responses may not converge.

04 EXPERIMENTATION : Get out and interview some subjects- but with a screener and an interview guide. The resources above has more on this, but one key thing to remember is that the interview guide is a guide, not a questionnaire. Your job is to make the interaction as normal as possible and it’s perfectly OK to skip questions or change them. It’s also 1000% OK to revise your interview guide during the process.

05: PIVOT OR PERSEVERE : What did you learn? Was it consistent? Good results are: a) We didn’t know what was on their A-list and what alternatives they are using, but now we do. b) We knew what was on their A-list and what alternatives they are using – we were pretty much right (doesn’t happen as much as you’d think). c) Our interviews just didn’t work/converge. Let’s try this again with some changes (happens all the time to smart teams and is very healthy).

By this, I mean: How do you test whether you have demand for your proposition? How do you know whether it’s better enough at solving a problem (doing a job, etc.) than the current alternatives your target persona has available to them now?

If an existing team was going to pick one of these areas to start with, I’d pick this one. While they’ll waste time if they haven’t found the right problem to solve and, yes, usability does matter, in practice this area of HDD is a good forcing function for really finding out what the team knows vs. doesn’t. This is why I show it as a kind of fulcrum between Right Problem and Right Solution:

Right-Solution-VP

This is not about usability and it does not involve showing someone a prototype, asking them if they like it, and checking the box.

Lean Startup offers a body of practice that’s an excellent fit for this. However, it’s widely misused because it’s so much more fun to build stuff than to test whether or not anyone cares about your idea. Yeah, seriously- that is the central challenge of Lean Startup.

Here’s the exciting part: You can massively improve your odds of success. While Lean Startup does not claim to be able to take any idea and make it successful, it does claim to minimize waste – and that matters a lot. Let’s just say that a new product or feature has a 1 in 5 chance of being successful. Using Lean Startup, you can iterate through 5 ideas in the space it would take you to build 1 out (and hope for the best) – this makes the improbable probable, which is pretty much the most you can ask for in the innovation game .

Build, measure, learn, right? Kind of. I’ll harp on this since it’s important and a common failure mode related to Lean Startup: an MVP is not a 1.0. As the Lean Startup folks (and Eric Ries’ book) will tell you, the right order is learn, build, measure. Specifically–

Learn: Who your customer is and what matters to them (see Solving the Right Problem, above). If you don’t do this, you’ll be throwing darts with your eyes closed. Those darts are a lot cheaper than the darts you’d throw if you were building out the solution all the way (to strain the metaphor some), but far from free.

In particular, I see lots of teams run an MVP experiment and get confusing, inconsistent results. Most of the time, this is because they don’t have a screener and they’re putting the MVP in front of an audience that’s too wide ranging. A grandmother is going to respond differently than a millennial to the same thing.

Build : An experiment, not a real product, if at all possible (and it almost always is). Then consider MVP archetypes (see below) that will deliver the best results and try them out. You’ll likely have to iterate on the experiment itself some, particularly if it’s your first go.

Measure : Have metrics and link them to a kill decision. The Lean Startup term is ‘pivot or persevere’, which is great and makes perfect sense, but in practice the pivot/kill decisions are hard, and as you design your experiment you should really think about what metrics and thresholds are really going to convince you.

How do you code an MVP? You don’t. This MVP is a means to running an experiment to test motivation- so formulate your experiment first and then figure out an MVP that will get you the best results with the least amount of time and money. Just since this is a practitioner’s guide, with regard to ‘time’, that’s both time you’ll have to invest as well as how long the experiment will take to conclude. I’ve seen them both matter.

The most important first step is just to start with a simple hypothesis about your idea, and I like the form of ‘If we [do something] for [a specific customer/persona], then they will [respond in a specific, observable way that we can measure]’. For example, if you’re building an app for parents to manage allowances for their children, it would be something like ‘If we offer parents an app to manage their kids’ allowances, they will download it, try it, make a habit of using it, and pay for a subscription.’
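Writing the hypothesis down in a fixed structure makes the fail threshold harder to dodge later. A minimal sketch of that form with the allowance-app example filled in; the field names and the threshold are invented:

```python
from dataclasses import dataclass

@dataclass
class DemandHypothesis:
    """'If we [offer] for [persona], they will [respond measurably].'"""
    persona: str
    offer: str
    expected_response: str
    metric: str
    fail_threshold: float  # decide this BEFORE running the experiment

allowance_app = DemandHypothesis(
    persona="parents of school-age children",
    offer="an app to manage their kids' allowances",
    expected_response="download it, try it, make a habit of it, subscribe",
    metric="trial-to-paid conversion rate",
    fail_threshold=0.02,   # below 2% conversion -> pivot
)
```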

All that said, for getting started here are:

  • A guide on testing with Lean Startup
  • A template for creating motivation/demand experiments

To recap, what’s a Right Solution hypothesis for testing demand? The core hypothesis is that you have a value proposition that’s better enough than the target persona’s current alternatives that you’re going to acquire customers.

As you may notice, this creates a tight linkage with your testing from Solving the Right Problem. This is important because while testing value propositions with Lean Startup is way cheaper than building product, it still takes work and you can only run a finite set of tests. So, before you do this kind of testing I highly recommend you’ve iterated to validated learning on what you see below: a persona, one or more PS/JTBD, the alternatives they’re using, and a testable view of why your VP is going to displace those alternatives. With that, your odds of doing quality work in this area dramatically increase!


What’s the testing, then? Well, it looks something like this:


01 IDEA : Most practicing scientists will tell you that the best way to get a good experimental result is to start with a strong hypothesis. Validating that you have the Right Problem and know what alternatives you’re competing against is critical to making investments in this kind of testing yield valuable results.

With that, you have a nice clear view of what alternative you’re trying to see if you’re better than.

02 HYPOTHESIS : I like a cause and effect stated here, like: ‘If we [offer something to said persona], they will [react in some observable way].’ This really helps focus your work on the MVP.

03 EXPERIMENTAL DESIGN : The MVP is a means to enable an experiment. It’s important to have a clear, explicit declaration of that hypothesis and for the MVP to deliver a metric for which you will (in advance) decide on a fail threshold. Most teams find it easier to kill an idea decisively with a kill metric vs. a success metric, even though they’re literally different sides of the same threshold.

04 EXPERIMENTATION : It is OK to tweak the parameters some as you run the experiment. For example, if you’re running a Google AdWords test, feel free to try new and different keyword phrases.

05: PIVOT OR PERSEVERE : Did you end up above or below your fail threshold? If below, pivot and focus on something else. If above, great- what is the next step to scaling up this proposition?

How does this relate to usability? What’s usability vs. motivation? You might reasonably wonder: If my MVP has something that’s hard to understand, won’t that affect the results? Yes, sure. Testing for usability, and the related tasks of building stuff, are much more fun and (short-term) gratifying. I can’t emphasize enough how much harder it is for most founders to push themselves to focus on motivation.

There’s certainly a relationship, and as we transition to the next section on usability, it seems like a good time to introduce the relationship between motivation and usability. My favorite tool for this is BJ Fogg’s Fogg Curve, which appears below. On the y-axis is motivation and on the x-axis is ‘ability’, the inverse of usability. A point in the upper left would be, say, a cure for cancer: no matter how hard it is to deal with, you really want it. On the bottom right would be something like checking Facebook: you may not be super motivated, but it’s so easy.

The punchline is that there’s certainly a relationship but beware that for most of us our natural bias is to neglect testing our hypotheses about motivation in favor of testing usability.


First and foremost, delivering great usability is a team sport. Without a strong, co-created narrative, your performance is going to be sub-par. This means your developers, testers, analysts should be asking lots of hard, inconvenient (but relevant) questions about the user stories. For more on how these fit into an overall design program, let’s zoom out and we’ll again stand on the shoulders of Donald Norman.

Usability and User Cognition

To unpack usability in a coherent, testable fashion, I like to use Donald Norman’s 7-step model of user cognition:


The process starts with a Goal, and that goal interacts with an object in an environment, the ‘World’. With the concepts we’ve been using here, the Goal is equivalent to a job-to-be-done. The World is your application in whatever circumstances your customer will use it (in a cubicle, on a plane, etc.).

The Reflective layer is where the customer is making a decision about alternatives for their JTBD/PS. In his seminal book, The Design of Everyday Things, Donald Norman’s example is deciding whether to continue reading a book as the sun goes down. In the framings we’ve been using, we looked at understanding your customer’s Goals/JTBD in ‘How do you test that you’ve found the ‘right problem’?’, and we looked at evaluating their alternatives relative to your own (proposition) in ‘How do you test that you’ve found the ‘right solution’?’.

The Behavioral layer is where the user interacts with your application to get what they want- hopefully engaging with interface patterns they know so well they barely have to think about it. This is what we’ll focus on in this section. Critical here is leading with strong narrative (user stories), pairing those with well-understood (by your persona) interface patterns, and then iterating through qualitative and quantitative testing.

The Visceral layer is the lower level visual cues that a user gets- in the design world this is a lot about good visual design and even more about visual consistency. We’re not going to look at that in depth here, but if you haven’t already I’d make sure you have a working style guide to ensure consistency (see  Creating a Style Guide ).

How do you unpack the UX Stack for Testability? Back to our example company, HVAC in a Hurry, which services commercial heating, ventilation, and A/C systems: let’s say we’ve arrived at a set of tested learnings for Trent the Technician.

As we look at how we’ll iterate to the right solution in terms of usability, let’s say we arrive at the following user story we want to unpack (this would be one of many, even just for the PS/JTBD above):

As Trent the Technician, I know the part number and I want to find it on the system, so that I can find out its price and availability.

Let’s step through the 7 steps above in the context of HDD, with a particular focus on achieving strong usability.

1. Goal This is the PS/JTBD: Getting replacement parts to a job site. An HDD-enabled team would have found this out by doing customer discovery interviews with subjects they’ve screened and validated to be relevant to the target persona. They would have asked non-leading questions like ‘What are the top five hardest things about finishing an HVAC repair?’ and consistently heard that one such thing is sorting out replacement parts. This validates the hypothesis that said PS/JTBD matters.

2. Plan

For the PS/JTBD/Goal, which alternative are they likely to select? Is our proposition better enough than the alternatives? This is where Lean Startup and demand/motivation testing are critical. This is where we focused in 'How do you test that you've found the right solution?', and the HVAC in a Hurry team might have run a series of MVPs both to understand how their subject might interact with a solution (concierge MVP) and to see whether they're likely to engage (smoke test MVP).

3. Specify

Our first step here is just to think through what the user expects to do and how we can make that as natural as possible. This is where drafting testable user stories, looking at comps, and then pairing clickable prototypes with iterative usability testing is critical. Following that, make sure your analytics answer the same questions, but at scale and with the observations available to you.

4. Perform

If you did a good job in Specify and there are no overt visual problems (like 'Can I click this part of the interface?'), you'll be fine here.

5. Perceive We’re at the bottom of the stack and looping back up from World: Is the feedback from your application readily apparent to the user? For example, if you turn a switch for a lightbulb, you know if it worked or not. Is your user testing delivering similar clarity on user reactions?

6. Interpret

Do they understand what they're seeing? Does it make sense relative to what they expected to happen? For example, if the user just clicked 'Save', do they know that whatever they wanted to save is saved and OK? Or not?

7. Compare

Have you delivered your target VP? Did they get what they wanted relative to the Goal/PS/JTBD?

How do you draft relevant, focused, testable user stories?

Without these, everything else is on a shaky foundation. Sometimes things will work out; other times they won't, and it won't be clear why or why not. Also, getting in the habit of pushing yourself on the relevance and testability of each little detail will make you a much better designer and a much better steward of where and why your team invests in building software.

For getting started:

  • A guide on creating user stories
  • A template for drafting user stories

How do you find the relevant patterns and apply them?

Once you've got a great narrative, it's time to put the best-understood, most expected, most relevant interface patterns in front of your user. Getting there is a process.

For getting started, here is a guide on interface patterns and prototyping.

How do you run qualitative user testing early and often?

Once you've got something great to test, it's time to get that design in front of a user, give them a prompt, and see what happens- then rinse and repeat with your design.

For getting started:

  • A guide on qualitative usability testing
  • A template for testing your user stories

How do you focus your outcomes and instrument actionable observation?

Once you release product (features, etc.) into the wild, it's important to make sure you're always closing the loop with analytics that are a regular part of your agile cadences. For example, in a high-functioning practice of HDD, the team should be interested in, and regularly reviewing, focused analytics to see how they pair with the results of their qualitative usability testing.

For getting started, here is a guide on quantitative usability testing with Google Analytics.

To recap, what’s a Right Solution hypothesis for usability? Essentially, the usability hypothesis is that you’ve arrived at a high-performing UI pattern that minimizes the cognitive load, maximizes the user’s ability to act on their motivation to connect with your proposition.

[Figure: the Right Solution (usability) hypothesis]

01 IDEA: If you're writing good user stories, you already have your ideas implemented in the form of testable hypotheses. Stay focused and use these to anchor your testing. You're not trying to test which color of drop-down works best- you're testing which affordances best deliver on a given user story.

02 HYPOTHESIS: Basically, the hypothesis is that 'for [x] user story, this interface pattern will perform well, assuming we supply the relevant motivation and have the right assessments in place.'

03 EXPERIMENTAL DESIGN: Really, this means having tests set up that, beyond working, link user stories to prompts and narrative that supply motivation, and that have discernible assessments to help you make sure the subject didn't click in the wrong place by mistake.

04 EXPERIMENTATION: It is OK to iterate on your prototypes and even your test plan in between sessions, particularly at the exploratory stages.

05 PIVOT OR PERSEVERE: Did the patterns perform well, or is it worth reviewing patterns and comparables and giving it another go?

There’s a lot of great material and successful practice on the engineering management part of application development. But should you pair program? Do estimates or go NoEstimates? None of these are the right choice for every team all of the time. In this sense, HDD is the only way to reliably drive up your velocity, or f e . What I love about agile is that fundamental to its design is the coupling and integration of working out how to make your release content successful while you’re figuring out how to make your team more successful.

What does HDD have to offer application development, then? First, I think it’s useful to consider how well HDD integrates with agile in this sense and what existing habits you can borrow from it to improve your practice of HDD. For example, let’s say your team is used to doing weekly retrospectives about its practice of agile. That’s the obvious place to start introducing a retrospective on how your hypothesis testing went and deciding what that should mean for the next sprint’s backlog.

Second, let’s look at the linkage from continuous design. Primarily, what we’re looking to do is move fewer designs into development through more disciplined experimentation before we invest in development. This leaves the developers the do things better and keep the pipeline healthier (faster and able to produce more content or story points per sprint). We’d do this by making sure we’re dealing with a user that exists, a job/problem that exists for them, and only propositions that we’ve successfully tested with non-product MVP’s.

But wait- what exactly does that mean: 'only propositions that we've successfully tested with non-product MVP's'? In practice, there's no such thing as fully validating a proposition. You're constantly looking at user behavior and deciding where you'd be best off improving. To create balance and consistency from sprint to sprint, I like to use a 'UX map'. You can read more about it at that link, but the basic idea is that for a given JTBD:VP pairing, you map out the customer experience (CX) arc broken into progressive stages, each with a description, a dependent variable you'll observe to assess success, and ideas on things (independent variables, or 'IV's') to test. For example, here's what such a UX map might look like for HVAC in a Hurry's work on the JTBD of 'getting replacement parts to a job site'.

[Figure: example UX map for the JTBD 'getting replacement parts to a job site']

From there, how can we use HDD to bring better, more testable design into the development process?

One thing I like to do with user stories and HDD is make a habit of pairing every single story with a simple analytical question that tells me whether the story is 'done' from the standpoint of creating the target user behavior. From there, I consider focal metrics. Here's what that might look like at HinH.

[Figure: user stories paired with analytical questions and focal metrics at HVAC in a Hurry]
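To make that concrete, here's a minimal sketch of what pairing a story with its 'done' question and focal metric could look like in code. The event names, session field, and metric definition are hypothetical illustrations, not taken from the course material.

```python
# Sketch: a user story paired with the analytical question that tells
# us whether it's "done", plus the focal metric computed from events.
story = {
    "narrative": ("As Trent the Technician, I know the part number and "
                  "I want to find it on the system, so that I can find "
                  "out its price and availability."),
    "done_question": ("Do technicians who search by part number reach "
                      "the price/availability view?"),
    "focal_metric": "part_search -> part_detail conversion",
}

def story_conversion(events: list) -> float:
    """Share of part-number searches whose session reached the detail view."""
    searches = [e for e in events if e["name"] == "part_search"]
    detail_sessions = {e["session"] for e in events if e["name"] == "part_detail"}
    hits = [e for e in searches if e["session"] in detail_sessions]
    return len(hits) / len(searches) if searches else 0.0

events = [
    {"name": "part_search", "session": "s1"},
    {"name": "part_detail", "session": "s1"},
    {"name": "part_search", "session": "s2"},
]
print(story_conversion(events))  # 0.5 -> half the searches reached the goal
```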

For the last couple of decades, test and deploy/ops was often treated like a kind of stepchild to development- something that had to happen at the end of development and was the sole responsibility of an outside group of specialists. It didn't make sense then, and now an integral test capability is table stakes for getting to a continuous product pipeline, which is at the core of HDD itself.

A continuous pipeline means that you release a lot. Getting good at releasing relieves a lot of energy-draining stress on the product team, as well as creating the opportunity for the rapid learning that HDD requires. Interestingly, research by outfits like DORA (now part of Google) and CircleCI shows that teams able to do this both release faster and encounter fewer bugs in production.

Amazon famously releases code every 11.6 seconds. What this means is that a developer can push a button to commit code and everything from there to that code showing up in front of a customer is automated. How does that happen? For starters, there are two big (related) areas: Test & Deploy.

While there is some important plumbing that I'll cover in the next couple of sections, in practice most teams struggle with test coverage. What does that mean? In principle, it means that even though you can't test everything, you iterate toward test automation coverage that catches most bugs before they end up in front of a user. For most teams, that means a 'pyramid' of tests like you see here, where the x-axis is the number of tests and the y-axis is the level of abstraction of the tests.

[Figure: the test pyramid]

The reason for the pyramid shape is that the tests become progressively more work to create and maintain as you move up, and each one provides less and less isolation of where a bug actually resides. In terms of iteration and retrospectives, what this means is that you're always asking, 'What's the lowest-level test that could have caught this bug?'

Unit tests isolate the operation of a single function and make sure it works as expected. Integration tests span two or more functions, and system tests, as you'd guess, more or less emulate the way a user or endpoint would interact with the system.
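As a minimal sketch of the pyramid's lower layers, here's what a unit test and an integration test might look like with pytest. The part-lookup functions are hypothetical stand-ins, not code from the course; a system test would sit above both, driving the real UI or API end to end.

```python
# Functions under test (hypothetical).
def parse_part_number(raw: str) -> str:
    """Normalize a part number as a technician might type it."""
    return raw.strip().upper().replace("-", "")

def lookup_price(raw: str, catalog: dict) -> float:
    """Combines parsing with a catalog lookup."""
    return catalog[parse_part_number(raw)]

# Unit test: isolates a single function.
def test_parse_part_number():
    assert parse_part_number(" ab-123 ") == "AB123"

# Integration test: exercises two functions working together.
def test_lookup_price():
    catalog = {"AB123": 19.99}
    assert lookup_price("ab-123", catalog) == 19.99
```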

Feature Flags: These are a separate but somewhat complementary facility. The basic idea is that as you add new features, each one has a flag that can enable or disable it. Flags start out disabled, and you make sure they don't break anything. Then, on small sets of users, you can enable them and test whether a) the metrics look normal and nothing's broken and b) closer to the core of HDD, whether users are actually interacting with the new feature.
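Here's a minimal sketch of such a flag check, assuming a simple in-memory registry (real systems typically back this with a config service or a product like LaunchDarkly). The flag name, rollout percentage, and user id are made up for illustration.

```python
import hashlib

# Hypothetical flag registry: the new feature is on for 5% of users.
FLAGS = {"parts_search_v2": {"enabled": True, "rollout_percent": 5}}

def is_enabled(flag_name: str, user_id: str) -> bool:
    """Deterministically bucket users so each one always sees the
    same experience while the flag ramps up."""
    flag = FLAGS.get(flag_name)
    if not flag or not flag["enabled"]:
        return False
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < flag["rollout_percent"]

if is_enabled("parts_search_v2", user_id="trent-42"):
    print("render the new search flow")  # and log usage events here
else:
    print("render the existing flow")
```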

In the olden days (which is when I last did this kind of thing for work), if you wanted to update a web application, you had to log in to a server, upload the software, and then configure it, maybe with the help of some scripts. Very often, things didn't go according to plan, for the predictable reason that there was a lot of opportunity for variation between how the update was tested and the machine you were updating, not to mention how you were updating it.

Now computers do all that- but you still have to program them. As such, deployment has increasingly become a job where you're coding solutions on top of platforms like Kubernetes, Chef, and Terraform. The engineers doing this work are (hopefully) collaborating closely with developers. For example, rather than spending time and money on writing documentation for an upgrade, the team would collaborate on code/config that runs on the kind of platform I mentioned earlier.

Pipeline Automation

Most teams with a continuous pipeline orchestrate something like what you see below with an application made for this, like Jenkins or CircleCI. The Manual Validation step you see is, of course, optional and not part of truly continuous delivery. In fact, if you automate up to the point of a staging server or similar before you release, that's what's generally called continuous integration.

Finally, the two yellow items you see are where the team centralizes their code (version control) and the build that they’re taking from commit to deploy (artifact repository).

[Figure: a continuous delivery pipeline, from commit to deploy]
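As a toy illustration of what such orchestration does, here's a sketch of a pipeline runner that executes stages in order and stops on the first failure. The stage names follow the discussion above; the commands are placeholders, not a real project's build scripts.

```python
import subprocess

# Placeholder stages in commit-to-deploy order. A manual validation
# gate could sit before the production deploy in continuous delivery;
# continuous deployment automates straight through.
STAGES = [
    ("unit tests", ["pytest", "tests/unit"]),
    ("integration tests", ["pytest", "tests/integration"]),
    ("build artifact", ["docker", "build", "-t", "app:latest", "."]),
    ("deploy to staging", ["./deploy.sh", "staging"]),
    ("deploy to production", ["./deploy.sh", "production"]),
]

for name, cmd in STAGES:
    print(f"--- {name} ---")
    if subprocess.run(cmd).returncode != 0:
        print(f"Pipeline stopped: '{name}' failed.")
        break
```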

To recap, what’s the hypothesis?

Well, you can’t test everything but you can make sure that you’re testing what tends to affect your users and likewise in the deployment process. I’d summarize this area of HDD as follows:

[Figure: the continuous delivery hypothesis]

01 IDEA: You can't test everything and you can't foresee everything that might go wrong. This is important for the team to internalize. But you can iteratively, purposefully focus your test investments.

02 HYPOTHESIS: Relative to the test pyramid, you're looking to get to a place where you're finding issues with the least expensive, least complex test possible- not an integration test when a unit test could have caught the issue, and so forth.

03 EXPERIMENTAL DESIGN: As you run integrations and deployments, you see what happens! Most teams move from continuous integration (a deploy-ready system that's not actually in front of customers) to continuous deployment.

04 EXPERIMENTATION: In retrospectives, it's important to look at the test suite and ask what would have made the most sense, and how the current processes were or weren't facilitating that.

05 PIVOT OR PERSEVERE: It takes work, but teams get there all the time- and research shows they end up both releasing more often and encountering fewer production bugs, believe it or not!

Topline, I would say it’s a way to unify and focus your work across those disciplines. I’ve found that’s a pretty big deal. While none of those practices are hard to understand, practice on the ground is patchy. Usually, the problem is having the confidence that doing things well is going to be worthwhile, and knowing who should be participating when.

My hope is that with this guide and the supporting material (and of course the wider body of practice), teams will get in the habit of always having a set of hypotheses, and that this will improve their work and their confidence as a team.

Naturally, these various disciplines have a lot to do with each other, and I’ve summarized some of that here:

[Figure: how the disciplines of Hypothesis-Driven Development relate]

Mostly, I find practitioners learn about this through their work, but I’ll point out a few big points of intersection that I think are particularly notable:

  • Learn by Observing Humans: We all tend to jump on solutions and overinvest in them when we should be observing our users, seeing how they behave, and then iterating. HDD helps reinforce problem-first diagnosis through its connections to relevant practice.
  • Focus on What Users Actually Do: A lot of things might happen- more than we can deal with properly. The good news is that by just observing what actually happens, you can make things a lot easier on yourself.
  • Move Fast, but Minimize Blast Radius: Working across so many types of org's at present (startups, corporations, a university), I can't overstate how important this is, and yet how big a shift it is for more traditional organizations. The idea of 'moving fast and breaking things' is terrifying to these places, and the reality is that with practice you can move fast and rarely break things, or only break them a tiny bit. Without this, you end up stuck waiting for someone else to create the perfect plan, or for that next super-important hire to fix everything (spoiler: it won't, and they don't).
  • Minimize Waste: Succeeding at innovation is improbable, and yet it happens all the time. Practices like Lean Startup do not guarantee that by following them you'll always succeed; however, they do promise that by minimizing waste you can test five ideas in the time/money/energy it would otherwise take you to test one, making the improbable probable.

What I love about Hypothesis-Driven Development is that it solves a really hard problem of practice: all these behaviors are important, and yet you can't learn to practice them all immediately. What HDD does is give you a foundation where you can see what's similar across them and how your practice of one reinforces another. It's also a good tool for deciding where you need to focus on any given project or team.

Copyright © 2022 Alex Cowan · All rights reserved.

Shipping Your Product in Iterations: A Guide to Hypothesis Testing


By Kumara Raghavendra

Kumara has successfully delivered high-impact products in industries ranging from eCommerce and healthcare to travel and ride-hailing.


A look at the App Store on any phone will reveal that most installed apps have had updates released within the last week. A website visit after a few weeks might show some changes in the layout, user experience, or copy.

Today, software is shipped in iterations to validate assumptions and the product hypothesis about what makes a better user experience. At any given time, companies like booking.com (where I worked before) run hundreds of A/B tests on their sites for this very purpose.

For applications delivered over the internet, there is no need to decide on the look of a product 12-18 months in advance, and then build and eventually ship it. Instead, it is perfectly practical to release small changes that deliver value to users as they are being implemented, removing the need to make assumptions about user preferences and ideal solutions—for every assumption and hypothesis can be validated by designing a test to isolate the effect of each change.

In addition to delivering continuous value through improvements, this approach allows a product team to gather continuous feedback from users and then course-correct as needed. Creating and testing hypotheses every couple of weeks is a cheaper and easier way to build a course-correcting and iterative approach to creating product value.

What Is Hypothesis Testing in Product Management?

While shipping a feature to users, it is imperative to validate assumptions about design and features in order to understand their impact in the real world.

This validation is traditionally done through product hypothesis testing, during which the experimenter outlines a hypothesis for a change and then defines success. For instance, if a data product manager at Amazon has a hypothesis that showing bigger product images will raise conversion rates, then success is defined by higher conversion rates.

One of the key aspects of hypothesis testing is the isolation of different variables in the product experience in order to be able to attribute success (or failure) to the changes made. So, if our Amazon product manager had a further hypothesis that showing customer reviews right next to product images would improve conversion, it would not be possible to test both hypotheses at the same time. Doing so would result in failure to properly attribute causes and effects; therefore, the two changes must be isolated and tested individually.

Thus, product decisions on features should be backed by hypothesis testing to validate the performance of features.

Different Types of Hypothesis Testing

A/B Testing


One of the most common ways to validate a hypothesis is randomized A/B testing, in which a change or feature is released at random to one-half of users (A) and withheld from the other half (B). Returning to the hypothesis of bigger product images improving conversion on Amazon, one-half of users will be shown the change, while the other half will see the website as it was before. The conversion will then be measured for each group (A and B) and compared. In case of a significant uplift in conversion for the group shown bigger product images, the conclusion would be that the original hypothesis was correct, and the change can be rolled out to all users.
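Analyzing such a test usually comes down to comparing the two conversion rates and checking that the difference isn't noise. Here's a minimal sketch using a standard two-proportion z-test; the conversion counts are invented for illustration.

```python
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided z-test for a difference in conversion rates.
    conv_*: conversions; n_*: users in each group. Returns the p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Group A saw bigger images; group B saw the old page (made-up numbers).
p = two_proportion_z_test(conv_a=560, n_a=10_000, conv_b=480, n_b=10_000)
print(f"p-value: {p:.4f}")  # a small p suggests the uplift isn't random noise
```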

Multivariate Testing


Ideally, each variable should be isolated and tested separately so as to conclusively attribute changes. However, such a sequential approach to testing can be very slow, especially when there are several versions to test. To continue with the example, in the hypothesis that bigger product images lead to higher conversion rates on Amazon, “bigger” is subjective, and several versions of “bigger” (e.g., 1.1x, 1.3x, and 1.5x) might need to be tested.

Instead of testing such cases sequentially, a multivariate test can be adopted, in which users are not split in half but into multiple variants. For instance, four groups (A, B, C, D) are made up of 25% of users each, where A-group users will not see any change, whereas those in variants B, C, and D will see images bigger by 1.1x, 1.3x, and 1.5x, respectively. In this test, multiple variants are simultaneously tested against the current version of the product in order to identify the best variant.
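Mechanically, a multivariate split is often just deterministic bucketing of users into N groups instead of two. A sketch, with the variant names invented for the image-size example:

```python
import hashlib

VARIANTS = ["control", "1.1x", "1.3x", "1.5x"]  # 25% of users each

def assign_variant(user_id: str) -> str:
    """Hash the user id so assignment is random-like but stable:
    the same user always lands in the same variant."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
    return VARIANTS[bucket % len(VARIANTS)]

print(assign_variant("user-1001"))
```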

Before/After Testing

Sometimes, it is not possible to split the users in half (or into multiple variants) as there might be network effects in place. For example, if the test involves determining whether one logic for formulating surge prices on Uber is better than another, the drivers cannot be divided into different variants, as the logic takes into account the demand and supply mismatch of the entire city. In such cases, a test will have to compare the effects before the change and after the change in order to arrive at a conclusion.


However, the constraint here is the inability to isolate the effects of seasonality and externality that can differently affect the test and control periods. Suppose a change to the logic that determines surge pricing on Uber is made at time t , such that logic A is used before and logic B is used after. While the effects before and after time t can be compared, there is no guarantee that the effects are solely due to the change in logic. There could have been a difference in demand or other factors between the two time periods that resulted in a difference between the two.

Time-based On/Off Testing


The downsides of before/after testing can be overcome to a large extent by deploying time-based on/off testing, in which the change is introduced to all users for a certain period of time, turned off for an equal period of time, and then repeated for a longer duration.

For example, in the Uber use case, the change can be shown to drivers on Monday, withdrawn on Tuesday, shown again on Wednesday, and so on.

While this method doesn’t fully remove the effects of seasonality and externality, it does reduce them significantly, making such tests more robust.
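A sketch of how the assignment and comparison could work; the day-parity schedule and trip counts are invented for illustration:

```python
from datetime import date, timedelta

def logic_for(day: date) -> str:
    """New surge logic 'B' on even days, old logic 'A' on odd days."""
    return "B" if day.toordinal() % 2 == 0 else "A"

# Illustrative daily totals of completed trips over two weeks.
start = date(2024, 6, 1)
daily_trips = {start + timedelta(days=i): 1000 + 25 * i for i in range(14)}

totals = {"A": 0, "B": 0}
for day, trips in daily_trips.items():
    totals[logic_for(day)] += trips
print(totals)  # alternating days spreads trends across both conditions
```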

Test Design

Choosing the right test for the use case at hand is an essential step in validating a hypothesis in the quickest and most robust way. Once the choice is made, the details of the test design can be outlined.

The test design is simply a coherent outline of:

  • The hypothesis to be tested: Showing users bigger product images will lead them to purchase more products.
  • Success metrics for the test: Customer conversion
  • Decision-making criteria for the test: The test validates the hypothesis if users in the variant group show a higher conversion rate than those in the control group.
  • Metrics that need to be instrumented to learn from the test: Customer conversion, clicks on product images

In the case of the product hypothesis example that bigger product images will lead to improved conversion on Amazon, the success metric is conversion and the decision criterion is an improvement in conversion.

After the right test is chosen and designed, and the success criteria and metrics are identified, the results must be analyzed. To do that, some statistical concepts are necessary.

When running tests, it is important to ensure that the two variants picked for the test (A and B) do not have a bias with respect to the success metric. For instance, if the variant that sees the bigger images already has a higher conversion than the variant that doesn’t see the change, then the test is biased and can lead to wrong conclusions.

In order to ensure no bias in sampling, one can observe the mean and variance for the success metric before the change is introduced.
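In practice, this is often done as an 'A/A' check before the experiment starts: compute the mean and variance of the success metric for both groups and confirm they look alike. A minimal sketch with invented daily conversion rates:

```python
from statistics import mean, variance

# Daily conversion rates for each group *before* the change ships.
group_a = [0.051, 0.049, 0.052, 0.048, 0.050]
group_b = [0.050, 0.052, 0.049, 0.051, 0.048]

print(mean(group_a), variance(group_a))
print(mean(group_b), variance(group_b))
# A noticeable gap in mean or variance here signals a biased split;
# fix the randomization before starting the real test.
```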

Significance and Power

Once a difference between the two variants is observed, it is important to conclude that the change observed is an actual effect and not a random one. This can be done by computing the significance of the change in the success metric.

In layman’s terms, significance measures the frequency with which the test shows that bigger images lead to higher conversion when they actually don’t. Power measures the frequency with which the test tells us that bigger images lead to higher conversion when they actually do.

So, tests need to have a high value of power and a low value of significance for more accurate results.
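These two quantities also drive how long a test must run. Here's a sketch of the standard normal-approximation formula for the sample size needed per variant to detect a given conversion uplift; the baseline and uplift values are illustrative.

```python
from statistics import NormalDist

def sample_size_per_variant(p_base: float, uplift: float,
                            alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate users needed per variant for a two-proportion test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p1, p2 = p_base, p_base + uplift
    p_bar = (p1 + p2) / 2
    num = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(num / uplift ** 2) + 1

# Detecting a lift from 5% to 6% conversion at alpha=0.05, power=0.8:
print(sample_size_per_variant(0.05, 0.01))  # roughly 8,000+ users per variant
```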

While an in-depth exploration of the statistical concepts involved in product management hypothesis testing is out of scope here, the following actions are recommended to enhance knowledge on this front:

  • Data analysts and data engineers are usually adept at identifying the right test designs and can guide product managers, so make sure to utilize their expertise early in the process.
  • There are numerous online courses on hypothesis testing, A/B testing, and related statistical concepts on platforms such as Udemy, Udacity, and Coursera.
  • Using tools such as Google's Firebase and Optimizely can make the process easier, thanks to their large amount of out-of-the-box capabilities for running the right tests.

Using Hypothesis Testing for Successful Product Management

In order to continuously deliver value to users, it is imperative to test various hypotheses, for the purpose of which several types of product hypothesis testing can be employed. Each hypothesis needs to have an accompanying test design, as described above, in order to conclusively validate or invalidate it.

This approach helps to quantify the value delivered by new changes and features, bring focus to the most valuable features, and deliver incremental iterations.


Understanding the basics

What is a product hypothesis?

A product hypothesis is an assumption that some improvement in the product will bring an increase in important metrics like revenue or product usage statistics.

What are the three required parts of a hypothesis?

The three required parts of a hypothesis are the assumption, the condition, and the prediction.

Why do we do A/B testing?

We do A/B testing to make sure that any improvement in the product increases our tracked metrics.

What is A/B testing used for?

A/B testing is used to check if our product improvements create the desired change in metrics.

What is A/B testing and multivariate testing?

A/B testing and multivariate testing are types of hypothesis testing. A/B testing checks how important metrics change with and without a single change in the product. Multivariate testing can track multiple variations of the same product improvement.


University of Virginia

Hypothesis-Driven Development

Instructor: Alex Cowan

What you'll learn

How to drive valuable outcomes for your user and reduce waste for your team by diagnosing and prioritizing what you need to know about them

How to focus your practice of agile by pairing qualitative and quantitative analytics

How to do just enough research when you need it by running design sprints

How to accelerate value delivery by investing in your product pipeline

Skills you'll gain

  • Design and Product
  • Communication
  • Leadership and Management
  • Project Management
  • User Experience
  • Software Engineering
  • Software Testing


There are 4 modules in this course

To deliver agile outcomes, you have to do more than implement agile processes- you have to create focus around what matters to your user and constantly test your ideas. This is easier said than done, but most of today’s high-functioning innovators have a strong culture of experimentation.

In this course, you’ll learn how to identify the right questions at the right time, and pair them with the right methods to do just enough testing to make sure you minimize waste and maximize the outcomes you create with your user. This course is supported by the Batten Institute at UVA’s Darden School of Business. The Batten Institute’s mission is to improve the world through entrepreneurship and innovation: www.batteninstitute.org.

How Do We Know if We're Building for a User that Doesn't Exist?

How do you go from backlog grooming to blockbuster results with agile? Hypothesis-driven decisions. Specifically, you need to shift your teammates' focus from their natural tendency to concentrate on their own output to focusing on user outcomes. Easier said than done, but getting everyone excited about the results of an experiment is one of the most reliable ways to get there. This week, we'll focus on how you get started in a practical way.

What's included

22 videos 1 reading 1 quiz

22 videos • Total 88 minutes

  • Course Introduction • 4 minutes • Preview module
  • Hypotheses-Driven Development & Your Product Pipeline • 7 minutes
  • Introducing Example Company: HVAC in a Hurry • 1 minute
  • Driving Outcomes With Your Product Pipeline • 7 minutes
  • The Persona Hypothesis • 3 minutes
  • The JTBD Hypothesis • 3 minutes
  • The Demand Hypothesis • 2 minutes
  • The Usability Hypothesis • 2 minutes
  • The Collaboration Hypothesis • 2 minutes
  • The Functional Hypothesis • 2 minutes
  • Driving to Value with Your Persona & JTBD Hypothesis • 2 minutes
  • Example Personas and Jobs-to-be-Done • 4 minutes
  • Setting Up Interviews • 3 minutes
  • Prepping for Subject Interviews • 3 minutes
  • Conducting the Interview • 6 minutes
  • How Not to Interview • 6 minutes
  • Day in the Life • 4 minutes
  • You and Your Next Design Sprint • 4 minutes
  • The Practice of Time Boxing • 4 minutes
  • Overview of the Persona and JTBD Sprint • 2 minutes
  • How Do I Sell the Idea of a Design Sprint • 4 minutes
  • Your Persona & JTBD Hypotheses: What's Next For You? • 3 minutes

1 reading • Total 15 minutes

  • Course Overview & Requirements • 15 minutes

1 quiz • Total 20 minutes

  • Week 1 Quiz • 20 minutes

How Do We Reduce Waste & Increase Wins by Testing Our Propositions Before We Build Them?

Nothing will help a team deliver better outcomes like making sure they’re building something the user values. This might sound simple or obvious, but I think after this week it’s likely you’ll find opportunities to help improve your team’s focus by testing ideas more definitively before you invest in developing software. In this module, you’ll learn how to make concept testing an integral part of your product pipeline. We’ll continue to apply methods from Lean Startup, looking at how they pair with agile. We’ll look at how high-functioning teams design and run situation-appropriate experiments to test ideas, and how that works before the fact (when you’re testing an idea) and after the fact (when you’re testing the value of software you’ve released).

20 videos 1 quiz 1 discussion prompt

20 videos • Total 120 minutes

  • Creating More Wins • 5 minutes • Preview module
  • Describing the Customer Experience (CX) for Testability • 8 minutes
  • CX Mapping for Prioritization and Testing • 6 minutes
  • Testing Demand Hypotheses with MVP's • 4 minutes
  • Learning What's Valuable • 7 minutes
  • Introducing Enable Quiz • 1 minute
  • Business to Consumer Case Studies • 9 minutes
  • Business to Business Case Studies • 6 minutes
  • Using a Design Sprint to Test Your Demand Hypothesis • 3 minutes
  • Lean Startup and Learning from Practice • 0 minutes
  • Interview: Tristan Kromer on the Practice of Lean Startup • 6 minutes
  • Interview: David Bland on the Practice of Lean Startup • 5 minutes
  • Interview: Tristan Kromer on Creating a Culture of Experimentation Part 1 • 7 minutes
  • Interview: Tristan Kromer on Creating a Culture of Experimentation Part 2 • 6 minutes
  • Interview: David Bland on Creating a Culture of Experimentation: Part 1 • 4 minutes
  • Interview: David Bland on Creating a Culture of Experimentation: Part 2 • 9 minutes
  • Interview: David Bland on Marrying Agile to Lean Startup • 7 minutes
  • Interview: David Bland on Using Hypothesis with Agile • 5 minutes
  • Interview: Laura Klein on the Right Kind of Research • 10 minutes
  • Your Demand Hypotheses: What's next for you? • 3 minutes
  • Week 2 Quiz • 20 minutes

1 discussion prompt • Total 15 minutes

  • Learnings from David, Tristan, and Laura • 15 minutes

How Do We Consistently Deliver Great Usability?

The best products are tested for usability early and often, avoiding the destructive stress and uncertainty of a "big unveil." In this module, you’ll learn how to diagnose, design and execute phase-appropriate user testing. The tools you’ll learn to use here (a test plan template, prototyping tool, and test session infrastructure) are accessible/teachable to anyone on your team. And that’s a very good thing -- often products are released with poor usability because there "wasn’t enough time" to test it. With these techniques, you’ll be able to test early and often, reinforcing your culture of experimentation.

19 videos 1 quiz 1 discussion prompt

19 videos • Total 90 minutes

  • The Always Test • 4 minutes • Preview module
  • A Test-Driven Approach to Usability • 5 minutes
  • The Inexact Science of Interface Design • 6 minutes
  • Diagnosing Usability with Donald Norman's 7 Steps Model • 8 minutes
  • Fixing Usability with Donald Norman's 7 Steps Model • 3 minutes
  • Applying the 7 Steps Model to Hypothesis-Driven Development • 3 minutes
  • Fixing the Visceral Layer • 4 minutes
  • Fixing the Behavioral Layer: The Importance of Comparables & Prototyping • 9 minutes
  • Prototyping With Balsamiq • 4 minutes
  • Usability Testing: Fun & Affordable • 2 minutes
  • The Right Testing at the Right Time • 2 minutes
  • A Test Plan Anyone Can Use • 6 minutes
  • Creating Good Test Items • 3 minutes
  • Running a Usability Design Sprint • 3 minutes
  • Running a Usability Design Sprint Skit • 5 minutes
  • Interview: Laura Klein on Qualitative vs. Quantitative Research • 4 minutes
  • Interview: Laura Klein on Lean UX in Enterprise IT • 5 minutes
  • Prioritizing User Outcomes with Story Mapping • 4 minutes
  • Your Usability Hypotheses: What's Next For You? • 3 minutes
  • Week 3 Quiz • 20 minutes
  • How will these techniques help you? • 15 minutes

How Do We Invest to Move Fast?

You’ve learned how to test ideas and usability to reduce the amount of software your team needs to build and to focus its execution. Now you’re going to learn how high-functioning teams approach testing of the software itself. The practice of continuous delivery and the closely related Devops movement are changing the way we build and release software. It wasn’t that long ago where 2-3 releases a year was considered standard. Now, Amazon, for example, releases code every 11.6 seconds. This week, we’ll look at the delivery pipeline and step through what successful practitioners do at each stage and how you can diagnose and apply the practices that will improve your implementation of agile.

24 videos 1 quiz 1 peer review

24 videos • Total 128 minutes

  • Functional Hypotheses and Continuous Delivery • 6 minutes • Preview module
  • The Team that Releases Together • 4 minutes
  • Getting Started with Continuous Delivery • 3 minutes
  • Anders Wallgren on Getting Started • 4 minutes
  • The Test Pyramid • 6 minutes
  • The Commit & Small Tests Stage • 2 minutes
  • The Job of Version Control • 3 minutes
  • Medium Tests • 1 minute
  • Large Tests • 6 minutes
  • Creating Large/Behavioral Tests • 9 minutes
  • Anders Wallgren on Functional Testing • 9 minutes
  • Release Stage • 4 minutes
  • The Job of Deploying • 6 minutes
  • Anders Wallgren on Deployment • 2 minutes
  • Chris Kent on Developing with Continuous Delivery • 10 minutes
  • Chris Kent on Continuous Deployment • 11 minutes
  • Test-Driven General Management • 5 minutes
  • Narrative and the 'Happy Path' • 3 minutes
  • The Emergence of DevOps and the Ascent of Continuous Delivery • 4 minutes
  • Design for Deployability • 2 minutes
  • Anders Wallgren on Continuous Deployment • 3 minutes
  • Anders Wallgren on Creating a Friendly Environment for Continuous Deployment • 6 minutes
  • Your Functional Hypotheses: What's Next For You? • 2 minutes
  • Course Conclusion • 8 minutes
  • Week 4 Quiz • 20 minutes

1 peer review • Total 90 minutes

  • Creating and Testing a Demand/Value Hypothesis • 90 minutes



How to Generate and Validate Product Hypotheses

What is a product hypothesis?

A hypothesis is a testable statement that predicts the relationship between two or more variables. In product development, we generate hypotheses to validate assumptions about customer behavior, market needs, or the potential impact of product changes. These experimental efforts help us refine the user experience and get closer to finding a product-market fit.

Product hypotheses are a key element of data-driven product development and decision-making. Testing them enables us to solve problems more efficiently and remove our own biases from the solutions we put forward.

Here’s an example: ‘If we improve the page load speed on our website (variable 1), then we will increase the number of signups by 15% (variable 2).’ So if we improve the page load speed, and the number of signups increases, then our hypothesis has been proven. If the number did not increase significantly (or not at all), then our hypothesis has been disproven.

In general, product managers are constantly creating and testing hypotheses. But in the context of new product development, hypothesis generation/testing occurs during the validation stage, right after idea screening.

Now before we go any further, let’s get one thing straight: What’s the difference between an idea and a hypothesis?

Idea vs hypothesis

Innovation expert Michael Schrage makes this distinction between hypotheses and ideas – unlike an idea, a hypothesis comes with built-in accountability. “But what’s the accountability for a good idea?” Schrage asks. “The fact that a lot of people think it’s a good idea? That’s a popularity contest.” So, not only should a hypothesis be tested, but by its very nature, it can be tested.

At Railsware, we’ve built our product development services on the careful selection, prioritization, and validation of ideas. Here’s how we distinguish between ideas and hypotheses:

Idea: A creative suggestion about how we might exploit a gap in the market, add value to an existing product, or bring attention to our product. Crucially, an idea is just a thought. It can form the basis of a hypothesis but it is not necessarily expected to be proven or disproven.

  • We should get an interview with the CEO of our company published on TechCrunch.
  • Why don’t we redesign our website?
  • The Coupler.io team should create video tutorials on how to export data from different apps, and publish them on YouTube.
  • Why not add a new ‘email templates’ feature to our Mailtrap product?

Hypothesis: A way of framing an idea or assumption so that it is testable, specific, and aligns with our wider product/team/organizational goals.

Examples: 

  • If we add a new ‘email templates’ feature to Mailtrap, we’ll see an increase in active usage of our email-sending API.
  • Creating relevant video tutorials and uploading them to YouTube will lead to an increase in Coupler.io signups.
  • If we publish an interview with our CEO on TechCrunch, 500 people will visit our website and 10 of them will install our product.

Now, it’s worth mentioning that not all hypotheses require testing . Sometimes, the process of creating hypotheses is just an exercise in critical thinking. And the simple act of analyzing your statement tells whether you should run an experiment or not. Remember: testing isn’t mandatory, but your hypotheses should always be inherently testable.

Let’s consider the TechCrunch article example again. In that hypothesis, we expect 500 readers to visit our product website, and a 2% conversion rate of those unique visitors to product users i.e. 10 people. But is that marginal increase worth all the effort? Conducting an interview with our CEO, creating the content, and collaborating with the TechCrunch content team – all of these tasks take time (and money) to execute. And by formulating that hypothesis, we can clearly see that in this case, the drawbacks (efforts) outweigh the benefits. So, no need to test it.

In a similar vein, a hypothesis statement can be a tool to prioritize your activities based on impact. We typically use the following criteria:

  • The quality of impact
  • The size of the impact
  • The probability of impact

This lets us organize our efforts according to their potential outcomes – not the coolness of the idea, its popularity among the team, etc.

Now that we’ve established what a product hypothesis is, let’s discuss how to create one.

Start with a problem statement

Before you jump into product hypothesis generation, we highly recommend formulating a problem statement. This is a short, concise description of the issue you are trying to solve. It helps teams stay on track as they formalize the hypothesis and design the product experiments. It can also be shared with stakeholders to ensure that everyone is on the same page.

The statement can be worded however you like, as long as it’s actionable, specific, and based on data-driven insights or research. It should clearly outline the problem or opportunity you want to address.

Here’s an example: Our bounce rate is high (more than 90%) and we are struggling to convert website visitors into actual users. How might we improve site performance to boost our conversion rate?

How to generate product hypotheses

Now let’s explore some common, everyday scenarios that lead to product hypothesis generation. For our teams here at Railsware, it’s when:

  • There’s a problem with an unclear root cause e.g. a sudden drop in one part of the onboarding funnel. We identify these issues by checking our product metrics or reviewing customer complaints.
  • We are running ideation sessions on how to reach our goals (increase MRR, increase the number of users invited to an account, etc.)
  • We are exploring growth opportunities e.g. changing a pricing plan, making product improvements, breaking into a new market.
  • We receive customer feedback. For example, some users have complained about difficulties setting up a workspace within the product. So, we build a hypothesis on how to help them with the setup.

BRIDGES framework for ideation

When we are tackling a complex problem or looking for ways to grow the product, our teams use BRIDGeS – a robust decision-making and ideation framework. BRIDGeS makes our product discovery sessions more efficient. It lets us dive deep into the context of our problem so that we can develop targeted solutions worthy of testing.

Between 2-8 stakeholders take part in a BRIDGeS session. The ideation sessions are usually led by a product manager and can include other subject matter experts such as developers, designers, data analysts, or marketing specialists. You can use a virtual whiteboard such as FigJam or Miro (see our Figma template) to record each colored note.

In the first half of a BRIDGeS session, participants examine the Benefits, Risks, Issues, and Goals of their subject in the ‘Problem Space.’ A subject is anything that is being described or dealt with; for instance, Coupler.io’s growth opportunities. Benefits are the value that a future solution can bring, Risks are potential issues they might face, Issues are their existing problems, and Goals are what the subject hopes to gain from the future solution. Each descriptor should have a designated color.

After we have broken down the problem using each of these descriptors, we move into the Solution Space. This is where we develop solution variations based on all of the benefits/risks/issues identified in the Problem Space (see the Uber case study for an in-depth example).

In the Solution Space, we start prioritizing those solutions and deciding which ones are worthy of further exploration outside of the framework – via product hypothesis formulation and testing, for example. At the very least, after the session, we will have a list of epics and nested tasks ready to add to our product roadmap.

How to write a product hypothesis statement

Across organizations, product hypothesis statements might vary in their subject, tone, and precise wording. But some elements never change. As we mentioned earlier, a hypothesis statement must always have two or more variables and a connecting factor.

1. Identify variables

Since these components form the bulk of a hypothesis statement, let’s start with a brief definition.

First of all, variables in a hypothesis statement can be split into two camps: dependent and independent. Without getting too theoretical, we can describe the independent variable as the cause, and the dependent variable as the effect. So in the Mailtrap example we mentioned earlier, the 'add email templates feature' is the cause, i.e., the element we want to manipulate. Meanwhile, 'increased usage of email sending API' is the effect, i.e., the element we will observe.

Independent variables can be any change you plan to make to your product. For example, tweaking some landing page copy, adding a chatbot to the homepage, or enhancing the search bar filter functionality.

Dependent variables are usually metrics. Here are a few that we often test in product development:

  • Number of sign-ups
  • Number of purchases
  • Activation rate (activation signals differ from product to product)
  • Number of specific plans purchased
  • Feature usage (API activation, for example)
  • Number of active users

Bear in mind that your concept or desired change can be measured with different metrics. Make sure that your variables are well-defined, and be deliberate in how you measure your concepts so that there’s no room for misinterpretation or ambiguity.

For example, in the hypothesis 'Users drop off because they find it hard to set up a project', the variables are poorly defined. Phrases like 'drop off' and 'hard to set up' are too vague. A much better way of saying it would be: If project automation rules are pre-defined (an email sequence to the responsible person, scheduled ticket creation), we'll see a decrease in churn. In this example, it's clear which dependent variable has been chosen and why.

And remember, when product managers focus on delighting users and building something of value, it’s easier to market and monetize it. That’s why at Railsware, our product hypotheses often focus on how to increase the usage of a feature or product. If users love our product(s) and know how to leverage its benefits, we can spend less time worrying about how to improve conversion rates or actively grow our revenue, and more time enhancing the user experience and nurturing our audience.

2. Make the connection

The relationship between variables should be clear and logical. If it’s not, then it doesn’t matter how well-chosen your variables are – your test results won’t be reliable.

To demonstrate this point, let’s explore a previous example again: page load speed and signups.

Through prior research, you might already know that conversion rates are 3x higher for sites that load in 1 second compared to sites that take 5 seconds to load. Since there appears to be a strong connection between load speed and signups in general, you might want to see if this is also true for your product.

Here are some common pitfalls to avoid when defining the relationship between two or more variables:

Relationship is weak. Let’s say you hypothesize that an increase in website traffic will lead to an increase in sign-ups. This is a weak connection since website visitors aren’t necessarily motivated to use your product; there are more steps involved. A better example is ‘If we change the CTA on the pricing page, then the number of signups will increase.’ This connection is much stronger and more direct.

Relationship is far-fetched. This often happens when one of the variables is founded on a vanity metric. For example, increasing the number of social media subscribers will lead to an increase in sign-ups. However, there's no particular reason why a social media follower would be interested in using your product. Oftentimes, it's simply your social media content that appeals to them (and your audience isn't necessarily interested in the product itself).

Variables are co-dependent. Variables should always be isolated from one another. Let's say we removed the option 'Register with Google' from our app. In this case, we can expect fewer users with Google Workspace accounts to register. Obviously, that's because there's a direct dependency between the variables (no registration with Google → no users with Google Workspace accounts).

3. Set validation criteria

First, build some confirmation criteria into your statement. Think in terms of percentages (e.g. increase/decrease by 5%) and choose a relevant product metric to track, e.g. activation rate if your hypothesis relates to onboarding. Consider that you don't always have to hit the bullseye for your hypothesis to be considered valid. Perhaps a 3% increase is just as acceptable as a 5% one. And it still proves that a connection between your variables exists.

Secondly, you should also make sure that your hypothesis statement is realistic. Let's say you have a hypothesis that 'If we show users a banner with our new feature, then feature usage will increase by 10%.' A few questions to ask yourself are: Is 10% a reasonable increase, based on your current feature usage data? Do you have the resources to create the tests (experimenting with multiple variations, distributing on different channels: in-app, emails, blog posts)?
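Put together, the confirmation criteria can be expressed as a simple decision rule. A minimal sketch, where the target and fallback thresholds and the usage numbers are assumptions for illustration:

```python
TARGET_UPLIFT = 0.10      # the stated goal: +10% feature usage
ACCEPTABLE_UPLIFT = 0.03  # still evidence the connection exists

def verdict(usage_before: float, usage_after: float) -> str:
    uplift = (usage_after - usage_before) / usage_before
    if uplift >= TARGET_UPLIFT:
        return "validated: hit the target"
    if uplift >= ACCEPTABLE_UPLIFT:
        return "validated: connection exists, but below target"
    return "not validated"

# e.g., weekly feature usage rate went from 20% to 21.5% (+7.5%):
print(verdict(usage_before=0.20, usage_after=0.215))
```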

Null hypothesis and alternative hypothesis

In statistical research, there are two ways of stating a hypothesis: null or alternative. But this scientific method has its place in hypothesis-driven development too…

Alternative hypothesis: A statement that you intend to prove as being true by running an experiment and analyzing the results. Hint: it’s the same as the other hypothesis examples we’ve described so far.

Example: If we change the landing page copy, then the number of signups will increase.

Null hypothesis: A statement you want to disprove by running an experiment and analyzing the results. It predicts that your new feature or change to the user experience will not have the desired effect.

Example: The number of signups will not increase if we make a change to the landing page copy.

What’s the point? Well, let’s consider the phrase ‘innocent until proven guilty’ as a version of a null hypothesis. We don’t assume that there is any relationship between the ‘defendant’ and the ‘crime’ until we have proof. So, we run a test, gather data, and analyze our findings — which gives us enough proof to reject the null hypothesis and validate the alternative. All of this helps us to have more confidence in our results.

Now that you have generated your hypotheses, and created statements, it’s time to prepare your list for testing.

Prioritizing hypotheses for testing

Not all hypotheses are created equal. Some will be essential to your immediate goal of growing the product e.g. adding a new data destination for Coupler.io. Others will be based on nice-to-haves or small fixes e.g. updating graphics on the website homepage.

Prioritization helps us focus on the most impactful solutions as we are building a product roadmap or narrowing down the backlog. To determine which hypotheses are the most critical, we use the MoSCoW framework. It allows us to assign a level of urgency and importance to each product hypothesis so we can filter the best 3-5 for testing.

MoSCoW is an acronym for Must-have, Should-have, Could-have, and Won’t-have. Here’s a breakdown, followed by a small sketch of how such labels can help filter a backlog:

  • Must-have – hypotheses that must be tested, because they are strongly linked to our immediate project goals.
  • Should-have – hypotheses that are closely related to our immediate project goals, but aren’t the top priority.
  • Could-have – nice-to-have hypotheses that can wait until later for testing.
  • Won’t-have – low-priority hypotheses that we may or may not test later on when we have more time.
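Here is a minimal sketch of that filtering step in Python; the backlog entries and their MoSCoW labels are hypothetical examples.

```python
# A minimal sketch: ranking a hypothesis backlog by MoSCoW priority
# and picking the top candidates for testing.
backlog = [
    {"hypothesis": "New data destination will lift activation", "moscow": "must"},
    {"hypothesis": "Homepage graphics refresh will lift signups", "moscow": "could"},
    {"hypothesis": "Onboarding checklist will lift retention", "moscow": "should"},
    {"hypothesis": "Dark mode will lift session length", "moscow": "wont"},
]

priority = {"must": 0, "should": 1, "could": 2, "wont": 3}
ranked = sorted(backlog, key=lambda h: priority[h["moscow"]])

# Filter the best 3-5 for testing, as described above
to_test = [h["hypothesis"] for h in ranked if h["moscow"] in ("must", "should")][:5]
print(to_test)
```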

How to test product hypotheses

Once you have selected a hypothesis, it’s time to test it. This will involve running one or more product experiments in order to check the validity of your claim.

The tricky part is deciding what type of experiment to run, and how many. Ultimately, this depends on the subject of your hypothesis, whether it’s a simple copy change or a whole new feature. For instance, it’s not necessary to create a clickable prototype for a landing page redesign; in that case, a user-wide update would do.

On that note, here are some of the approaches we take to hypothesis testing at Railsware:

A/B testing

A/B or split testing involves creating two or more different versions of a webpage/feature/functionality and collecting information about how users respond to them.

Let’s say you wanted to validate a hypothesis about the placement of a search bar on your application homepage. You could design an A/B test that shows two different versions of that search bar’s placement to your users (who have been split equally into two camps: a control group and a variant group). Then, you would choose the best option based on user data. A/B tests are suitable for testing responses to user experience changes, especially if you have more than one solution to test.
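As an illustration, here is a minimal sketch of how users might be split deterministically into the two camps; the experiment name and user ID are hypothetical, and real A/B testing tools handle this assignment for you.

```python
# A minimal sketch: deterministic 50/50 assignment of users to an A/B test,
# so each user always sees the same version on every visit.
import hashlib

def assign_group(user_id: str, experiment: str) -> str:
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable number in 0-99 per user/experiment
    return "variant" if bucket < 50 else "control"

print(assign_group("user-42", "search-bar-placement"))  # e.g. "variant"
```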

Prototyping

When it comes to testing a new product design, prototyping is the method of choice for many Lean startups and organizations. It’s a cost-effective way of collecting feedback from users, fast, and it’s possible to create prototypes of individual features too. You may take this approach to hypothesis testing if you are working on rolling out a significant new change, e.g., adding a brand-new feature, redesigning some aspect of the user flow, etc. To control costs at this point in the new product development process, choose the right tools — think Figma for clickable walkthroughs or no-code platforms like Bubble.

Deliveroo feature prototype example

Let’s look at how feature prototyping worked for the food delivery app Deliveroo, when its product team wanted to ‘explore personalized recommendations, better filtering and improved search’ in 2018. To begin, they created a prototype of the customer discovery feature using the web design application Framer.

One of the most important aspects of this feature prototype was that it contained live data — real restaurants, real locations. For test users, this made the hypothetical feature feel more authentic. They were seeing listings and recommendations for real restaurants in their area, which helped immerse them in the user experience, and generate more honest and specific feedback. Deliveroo was then able to implement this feedback in subsequent iterations.

Asking your users

Interviewing customers is an excellent way to validate product hypotheses. It’s a form of qualitative testing that, in our experience, produces better insights than user surveys or general user research. Sessions are typically run by product managers and involve asking in-depth interview questions to one customer at a time. They can be conducted in person or online (through a virtual call center, for instance) and last anywhere from 30 minutes to an hour.

Although CustDev interviews may require more effort to execute than other tests (finding participants, devising questions, organizing interviews, and honing interview skills can all be time-consuming), it’s still a highly rewarding approach. You can quickly validate assumptions by asking customers about their pain points, concerns, habits, and the processes they follow, and then analyzing how your solution fits into all of that.

Wizard of Oz

The Wizard of Oz approach is suitable for gauging user interest in new features or functionalities. It’s done by creating a prototype of a fake or future feature and monitoring how your customers or test users interact with it.

For example, you might have a hypothesis that your number of active users will increase by 15% if you introduce a new feature. So, you design a new bare-bones page or simple button that invites users to access it. But when they click on the button, a pop-up appears with a message such as ‘coming soon.’

By measuring the frequency of those clicks, you could learn a lot about the demand for this new feature/functionality. However, while these tests can deliver fast results, they carry the risk of backfiring. Some customers may find fake features misleading, making them less likely to engage with your product in the future.
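As a rough illustration, here is a minimal sketch of how that demand signal could be computed from logged events; the event names and data are hypothetical stand-ins for whatever your analytics pipeline records.

```python
# A minimal sketch: estimating fake-door demand as the share of users
# who clicked the teaser button among those who saw it.
events = [
    ("u1", "feature_button_seen"), ("u1", "feature_button_clicked"),
    ("u2", "feature_button_seen"),
    ("u3", "feature_button_seen"), ("u3", "feature_button_clicked"),
]

seen = {user for user, event in events if event == "feature_button_seen"}
clicked = {user for user, event in events if event == "feature_button_clicked"}

print(f"{len(clicked)}/{len(seen)} users clicked ({len(clicked) / len(seen):.0%})")
```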

User-wide updates

One of the speediest ways to test your hypothesis is by rolling out an update for all users. It can take less time and effort to set up than other tests (depending on how big of an update it is). But due to the risk involved, you should stick to only performing these kinds of tests on small-scale hypotheses. Our teams only take this approach when we are almost certain that our hypothesis is valid.

For example, we once had an assumption that the name of one of Mailtrap’s entities was the root cause of a low activation rate. Being an active Mailtrap customer meant that you were regularly sending test emails to a place called ‘Demo Inbox.’ We hypothesized that the name was confusing (the word ‘demo’ implied it was not the main inbox) and this was preventing new users from engaging with their accounts. So, we updated the page, changed the name to ‘My Inbox’ and added some ‘to-do’ steps for new users. We saw an increase in our activation rate almost immediately, validating our hypothesis.

Feature flags

Creating feature flags involves releasing a new feature to only a particular subset or small percentage of users. These features come with a built-in kill switch: a piece of code that can be executed or skipped, depending on who’s interacting with your product.

Since you are only showing this new feature to a selected group, feature flags are an especially low-risk method of testing your product hypothesis (compared to Wizard of Oz, for example, where you have much less control). However, they are also a little more complex to execute than the others: for starters, you will need an actual coded product, as well as some technical knowledge, in order to add the modifiers (only when…) to your new coded feature.
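To make that concrete, here is a minimal sketch of such a modifier; the flag name, rollout percentage, and user ID are hypothetical, and production systems would normally use a dedicated feature-flag service rather than a hard-coded dictionary.

```python
# A minimal sketch: a percentage rollout with a built-in kill switch.
import hashlib

FLAGS = {"new-search-bar": {"enabled": True, "rollout_percent": 10}}

def is_enabled(flag_name: str, user_id: str) -> bool:
    flag = FLAGS.get(flag_name)
    if not flag or not flag["enabled"]:  # kill switch: set "enabled" to False
        return False
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < flag["rollout_percent"]  # the "only when..."

if is_enabled("new-search-bar", "user-42"):
    print("render the new search bar")  # the selected test group
else:
    print("render the current search bar")  # everyone else
```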

Let’s revisit the landing page copy example again, this time in the context of testing.

So, for the hypothesis ‘If we change the landing page copy, then the number of signups will increase,’ there are several options for experimentation. We could share the copy with a small sample of our users, or even release a user-wide update. But A/B testing is probably the best fit for this task. Depending on our budget and goal, we could test several different pieces of copy, such as:

  • The current landing page copy
  • Copy that we paid a marketing agency 10 grand for
  • Generic copy we wrote ourselves, or removing most of the original copy – just to see how making even a small change might affect our numbers.

Remember, every hypothesis test must have a reasonable endpoint. The exact length of the test will depend on the type of feature/functionality you are testing, the size of your user base, and how much data you need to gather. Just make sure that the experiment running time matches the hypothesis scope. For instance, there is no need to spend 8 weeks experimenting with a piece of landing page copy. That timeline is more appropriate for say, a Wizard of Oz feature.

Recording hypotheses statements and test results

Finally, it’s time to talk about where you will write down and keep track of your hypotheses. Creating a single source of truth will enable you to track all aspects of hypothesis generation and testing with ease.

At Railsware, our product managers create a document for each individual hypothesis, using tools such as Coda or Google Sheets. In that document, we record the hypothesis statement, as well as our plans, process, results, screenshots, product metrics, and assumptions.

We share this document with our team and stakeholders, to ensure transparency and invite feedback. It’s also a resource we can refer back to when we are discussing a new hypothesis — a place where we can quickly access information relating to a previous test.

Understanding test results and taking action

The other half of validating product hypotheses involves evaluating data and drawing reasonable conclusions based on what you find. We do so by analyzing our chosen product metric(s) and deciding whether there is enough data available to make a solid decision. If not, we may extend the test’s duration or run another one. Otherwise, we move forward. An experimental feature becomes a real feature, a chatbot gets implemented on the customer support page, and so on.

Something to keep in mind: the integrity of your data is tied to how well the test was executed, so here are a few points to consider when you are testing and analyzing results:

Gather and analyze data carefully. Ensure that your data is clean and up-to-date when running quantitative tests and tracking responses via analytics dashboards. If you are doing customer interviews, make sure to record the meetings (with consent) so that your notes will be as accurate as possible.

Conduct the right amount of product experiments. It can take more than one test to determine whether your hypothesis is valid or invalid. However, don’t waste too much time experimenting in the hopes of getting the result you want. Know when to accept the evidence and move on.

Choose the right audience segment. Don’t cast your net too wide. Be specific about who you want to collect data from prior to running the test. Otherwise, your test results will be misleading and you won’t learn anything new.

Watch out for bias. Avoid confirmation bias at all costs. Don’t make the mistake of including irrelevant data just because it bolsters your results. For example, if you are gathering data about how users are interacting with your product Monday-Friday, don’t include weekend data just because doing so would alter the data and ‘validate’ your hypothesis.

  • Not all failed hypotheses should be treated as losses. Even if you didn’t get the outcome you were hoping for, you may still have improved your product. Let’s say you implemented SSO authentication for premium users, but unfortunately, your free users didn’t end up switching to premium plans. In this case, you still added value to the product by streamlining the login process for paying users.
  • Yes, taking a hypothesis-driven approach to product development is important. But remember, you don’t have to test everything . Use common sense first. For example, if your website copy is confusing and doesn’t portray the value of the product, then you should still strive to replace it with better copy – regardless of how this affects your numbers in the short term.

Wrapping Up

The process of generating and validating product hypotheses is actually pretty straightforward once you’ve got the hang of it. All you need is a valid question or problem, a testable statement, and a method of validation. Sure, hypothesis-driven development requires more of a time commitment than just ‘giving it a go.’ But ultimately, it will help you tune the product to the wants and needs of your customers.

If you share our data-driven approach to product development and engineering, check out our services page to learn more about how we work with our clients!


How to Build a List of Hypotheses for Mobile App (Guide for Hypothesis-Driven Development)


Ihor Polych

CEO at Devlight


“Most businesses die because they offer a product that consumers don’t need” is a famous saying of Eric Ries, the author of the Lean Startup methodology. So how can hypothesis-driven development help your project avoid this trap?

The answer is simple: you research the demand for your future product before building the mobile app. Start by compiling a set of hypotheses about consumer needs; these answer the question of which problems and difficulties your future product will help solve.

Forming hypotheses is a creative process, and it is difficult to follow a fixed procedure, but some rules still apply. In this article, we will describe an algorithm for creating a set of product hypotheses and then verifying them through user surveys.

What Is Hypothesis-Driven Development?

A hypothesis-based approach allows product developers to design, test, and refactor a product until it is acceptable to consumers. This methodology involves testing and refining the product based on consumer feedback to verify the assumptions made during the ideation process. The utilization of this approach helps to eliminate any uncertainties in the design phase and leads to a final product that is well-received by users.

Here are some examples of hypotheses for mobile app development from various segments:

  • The behavioral hypothesis describes how users behave under various conditions and what drives people to act in a certain manner;
  • The problem hypothesis covers the difficulties users encounter and why they regard those challenges as obstacles to their objectives;
  • The motivation hypothesis focuses on what users want and why they are currently ineffective in accomplishing their objectives;
  • The blocker hypothesis reveals the cause of the present ineffective behavior or difficulty.

Why Do We Use Hypothesis-Driven Development?

When developing a product, you define your hypotheses, find the fastest ways to test them, and use the results to change your strategy.


You have a lot of assumptions, to begin with. You predict what users want, what they are looking for, what the design should be, what marketing strategy to use, what architecture will be most effective, and how to monetize the product.

Some of these hypotheses will need to be corrected; you just don’t know which ones yet. CB Insights determined that a lack of market demand was one of the main causes of startup failure, and almost half of these projects spent months, or even years, building a product.

The only way to test a list of hypotheses for the mobile app is to give the product to a potential customer as soon as possible. If you follow this methodology consistently, you will realize that most hypotheses fail. You assume, fail, and have to go back to the beginning each time to test new hypotheses.


This approach is not an innovation in product development. When you write a book or essay, you spend a lot of time editing and revising. When you write code, you also redo it. Every creative endeavor requires a huge amount of trial and error.


In this world, the one who detects their own mistakes and corrects them faster becomes the winner. The most important thing is to determine which of your hypotheses is wrong with the help of feedback from real users. Thus, when you’re building a product, writing code, or developing a marketing plan, always ask yourself a few questions:

  • Which hypothesis in the project is the most doubtful?
  • What is the fastest way to check it?


What Does Hypothesis-Driven Development Look Like in Real Life?

Let’s look at a simple example. Say we take a project approach (one that sets a task rather than putting forward hypotheses) to a service for selling goods. We decide to add a delivery option to it. We hire delivery people and buy them branded clothes, bags, and possibly transport. The development team creates a page where you can enter the delivery address and the desired date. Then we write a service that transfers the order from the store to the delivery person, plus an application for those delivery people. What happens in the negative scenario? That’s right. We lose hundreds of thousands of dollars.

What if we had taken a hypothesis-driven approach? First of all, we would write hypotheses and confirm that customers actually need the delivery option. Then we would work out the optimal cost and delivery time to calculate the unit economics. Next, user surveys or interviews would give us an understanding of user needs. Then we would add a fake “delivery” button on the website or in the app to see how many clients would try to use it.

Of course, this action cannot be used to calculate the exact demand, because there are still dozens of ways to kill the conversion after the user clicks the button: complicated fill-out forms, inconvenient delivery slots, high costs, etc. But at least we would understand how many of the 10,000 people who saw the button tried to use it — three thousand or eight thousand. Then, to test the hypothesis in real-life conditions, we would use a ready-made B2B solution rather than develop our own delivery feature.

Moreover, to save integration time, we would collect orders, put them in a database, and then pass them over to our manager, who would manually issue each delivery through the third-party service’s web form. What would happen in the worst-case scenario? Nothing too serious. We wouldn’t have wasted hundreds of thousands of dollars and many weeks on development.

To sum up, hypothesis-driven development aims to understand which product feature will bring the greatest value at the moment and to test this feature in the simplest possible way. To put it bluntly, try to refute each of your hypotheses as soon as possible. Proving to yourself that an idea is worthless without spending time on its development is morally difficult but very effective from the company’s point of view.

A hypothesis-driven approach provides a structured way of consolidating ideas and building hypotheses based on objective criteria. This methodology provides a deep understanding of prioritizing features related to business goals and desired user outcomes.

How to Test the Hypothesis of Product Demand and Value Without Development

Starting development without testing the key hypotheses behind the new product is a widespread mistake. In this case, you are completely sure of your idea and see no point in testing it, so you begin the development process immediately.

The second most common hypothesis-driven development mistake is to look for confirmation of a hypothesis instead of testing it. Often, demand or value testing becomes a formal step: the decision is based not on the received data but on initial assumptions and the startup owners’ prejudices. This cognitive distortion happens for several reasons:

  • Commitment to an idea blocks critical thinking (typical of startups);
  • The bureaucratic apparatus perceives testing hypotheses as a part of the project development process that is inevitably followed by implementation, regardless of the test results (typical of corporations). Even if all the early tests show that the product in its current form does not stand a chance, it still goes into development.

The third mistake is testing unimportant things. Instead of testing the key risks (demand and value), teams test secondary elements related to subjective perception (appearance, non-core functions, etc.). As a result, time is wasted, and the hypothesis-testing process itself is devalued.

Testing the Demand Hypothesis for a New Product

The demand hypothesis is one of the riskiest assumptions behind a new product. This hypothesis assumes that the potential audience is interested in solving a certain problem. The demand hypothesis is also called the need hypothesis or the problem hypothesis. 

To check demand, you need to study the target audience and its tasks, and sometimes even sell a product that has not been created yet:

  • The most common way to test demand is to create a landing page with a detailed description and illustrations of the product and show it to potential buyers;
  • In some cases, you don’t need to create your own site — just place an ad on a platform that attracts the audience of potential customers for the product;
  • The demand for some products is difficult to check with a landing page or an announcement on social networks, especially if the sales process includes a long conversation, a call, and sometimes a meeting with the buyer. In such situations, you can use targeted advertising and personal communication, again without yet creating an actual product;
  • If deciding to buy your product requires minimal experience interacting with it, you can offer customers a shell without the filling;
  • One of the easiest and most effective ways to test demand without development is to show users videos simulating how the product works. This way, you can demonstrate its capabilities, interface, design, and the situations where the product will be useful.

Testing the New Product Value Hypothesis

Once the demand hypotheses for the mobile app are validated and you know that the product solves a real problem for potential buyers, the next key risk is value. The value hypothesis assumes that the product’s intended implementation will bring customers real value. It usually means that the product will solve users’ problems more effectively than the alternatives available on the market; otherwise, users will have no motivation to switch from one solution to another:

  • Allowing users to try something as close as possible to the future product is the most reliable way to test a value hypothesis. This can be done with the help of third-party services that reproduce complex functions and automate the work without you writing your own code;
  • Alternatively, to check value hypotheses without development, you can reproduce the system’s processes in manual mode;
  • The third method is value validation through a prototype. Usability testing of prototypes lets you observe the process of using the product, and follow-up interviews give a fairly accurate understanding of whether the solution being studied has value.


How to Build and Test a List of Hypotheses for a Mobile App

The HADI (Hypothesis – Action – Data – Insights) methodology is the simplest algorithm for cyclical testing of ideas – from hypothesis through action to data and conclusions.

The hypothesis-driven development management cycle begins with formulating a hypothesis according to the “if… then…” principle. In the second stage, you carry out the work needed to launch the experiment (Action). Then you collect data for a given period (Data). Finally, you draw an unambiguous conclusion about whether the hypothesis was successful and what can be improved by launching the next cycle of hypothesis testing (Insights).


Step 1. Forming a Hypothesis

Here you formulate what you want to know. What problem are you trying to solve? Determine which product level you should test:

  • The value level tests the problem your product is supposed to solve, to understand whether it is worth solving;
  • The feature level covers the functionality through which the user quickly realizes the value of your product;
  • The design level means design and visualization. How does your functionality work in terms of user experience? Simply put, will people intuitively figure out how to manage it, where to click, and what to do with your product?
  • The feasibility level is about the technical implementation of everything you have created.

The hypothesis is based on the principle “If…, then…”.

You can also prioritize the hypotheses to be tested. Think about what might have the biggest impact on your users’ needs and prioritize accordingly. You can also use the ICE Score framework, which includes these three elements:

  • Impact;
  • Confidence;
  • Ease of implementation.

ICE is typically calculated by multiplying the three scores: ICE = Impact × Confidence × Ease.

Of course, this is only one option — there are several existing formulas you can choose from. However, remember that the formula should stay the same for all hypotheses you compare, and your ICE scores should use the same rating range — either 1 to 10, 1 to 100, or another scale (determine it at the beginning).

Impact estimates how much an idea will positively affect the metric you’re trying to improve. To determine the impact, we ask the following questions: How effective will it be? How much will this affect the metrics (conversion, retention, LTV, MAU, DAU)?

Confidence shows how much you trust the impact estimates and ease of implementation. To determine this hypothesis-driven development metric, you need to answer the question: How confident are you that this feature will lead to the improvement described in Impact and be as easy to implement as described in Ease?

Ease of implementation is an estimate of how much effort and resources are required to implement this hypothesis. The easier the task, the higher the number. To determine the ease of implementation, you need to answer the question: How long will testing these hypotheses for mobile app development or developing this feature take? How many people will be involved? Consider the work of the development, design, and marketing departments.
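Putting the three elements together, here is a minimal sketch of how a backlog could be ranked by ICE; the hypotheses and their scores (on a 1 to 10 scale) are hypothetical, and the multiplication formula is simply the common variant described above.

```python
# A minimal sketch: scoring and ranking hypotheses by ICE = Impact * Confidence * Ease.
hypotheses = [
    {"name": "Onboarding checklist", "impact": 8, "confidence": 6, "ease": 7},
    {"name": "Referral program",     "impact": 9, "confidence": 4, "ease": 3},
    {"name": "Push notifications",   "impact": 5, "confidence": 7, "ease": 8},
]

for h in hypotheses:
    h["ice"] = h["impact"] * h["confidence"] * h["ease"]

# Test the highest-scoring hypotheses first
for h in sorted(hypotheses, key=lambda h: h["ice"], reverse=True):
    print(f"{h['name']}: ICE = {h['ice']}")
```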

Step 2. Performing the Action

At the beginning of each cycle, we take several hypotheses and start testing them using the following methods:

  • A/B Testing or Split Testing

In such testing, the main thing is clearly defining the sample and its size. This is important so that the results are as realistic and statistically significant as possible. We recommend conducting split testing with at least ten thousand active monthly users; if your audience is smaller than that, it is better to use other tools.
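For a rough sense of whether your audience is large enough, here is a minimal sketch of a standard sample-size estimate; the baseline and target conversion rates are hypothetical, and the statsmodels library is assumed to be installed.

```python
# A minimal sketch: estimating how many users each A/B variant needs
# to detect a lift from 10% to 12% conversion at standard settings.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

effect = proportion_effectsize(0.12, 0.10)  # target vs. baseline rate

n = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.8, alternative="larger"
)
print(f"~{n:.0f} users needed in each group")
```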

  • Quantitative User Survey

Use dedicated services that facilitate survey creation and distribution, such as SurveyMonkey. They allow you to select the desired audience and ask them questions. With the free plan, you can create questionnaires with up to 10 questions. The link to the questionnaire can be placed on the website or on social networks.

  • Qualitative Research or Customer Development

This research type is a direct conversation with consumers or with a certain group of potential product consumers. Such hypothesis-driven development interviews can be divided into two groups:

  • Usability – helps you understand whether users can use your product at all to solve their tasks and achieve their desired goals;
  • Discovery – delves in detail into the state, problems, and perceptions of users of a certain group. In such interviews, we usually ask questions like “Who? How? Why? Where?”

How many such interviews do you need to test the hypothesis? We usually start with five. Then we continue until people stop giving new answers. You can stop as soon as the information starts to repeat itself. For hypotheses testing small product changes, 5-7 interviews may be enough. For the launch of a completely new product — 50-70 interviews.

Step 3. Data Analytics

At this stage, we collect data from our research. You should have a backlog of your mobile app hypotheses, prioritized according to criteria that help you at various stages of development. Approach all feature development from the perspective of hypotheses. A good indicator is when you have two states: one in which an experiment to test the hypothesis is being planned, and another in which functional validation is ongoing and data is being measured.

Then, when your experiment is over, you can mark the hypothesis as supported, refuted, or even abandoned if you decide to call it quits due to the results. By always tackling the highest-risk hypotheses first, you reach any necessary pivots as early as possible and avoid investing in needless work.

Step 4. Insights

This stage can also be called interpretation. First, analyze whether the hypotheses on your list were confirmed (i.e., worked). Whether a hypothesis is confirmed or refuted, the process itself offers a chance to learn. Even if you cannot support the hypothesis, the result may offer insightful information that you can use for a different hypothesis.

Now that some of your hypotheses have been supported, you can proceed to development. But even once the product is released, testing must continue. Stay alert, since certain aspects may need to change due to client needs, market trends, regional economics, and other factors.


List of Hypotheses for Mobile App: Example

ABC (name changed) is the largest provider of microcredits in Ukraine. They have no physical branches — their services are fully digital and offered online. However, ABC has thousands of contented customers and devoted staff members. The figures speak for themselves:

  • During the first half of 2021, ABC’s net income was estimated at 44 million dollars (the biggest figure among competitors);
  • The company has a net profit of $1.4 million;
  • ABC’s team consists of more than 700 workers;
  • 1.8 million Ukrainians use the service regularly;
  • The service has issued 6,000,000 loans;
  • The total amount of money issued is $1 billion.

It is a sizable, contemporary, and well-run business that turned to us to help them with diversification and new ways of development. The customer’s goals were growing the company, diversifying the line of goods, and breaking into a new market. Devlight used this data when forming the list of hypotheses for the mobile app.

Internal Discussions and Hypotheses Forming

First, we gained a profound knowledge of the ABC team’s technology, product, capabilities, vision, and passion through our meetings. We saw that we could accomplish our ultimate objective thanks to our significant experience working with neo-banks and our business knowledge. 

The Ukrainian market was solely focused on loans as of 2021. Users voluntarily took out loans for various purposes, including small household and personal expenses, buying vehicles, and starting businesses. Loans were a common, well-understood practice that clients were fully aware of.

However, the market’s offerings fell short of users’ needs. They were one-dimensional and impersonal. We concluded that this vulnerability was exactly where we could compete. Depending on their credit score, we could provide different consumers with flexible credit limits or high limits with a longer grace period. For instance, the market had nothing like a big credit limit of UAH 100,000 for 100 days. Thanks to ABC’s significant experience in the credit industry, we could accomplish this relatively effortlessly.

One of the advantages of ABC’s business model was its capacity to deal properly with credit scores and potential risks. These advantages enabled us to formulate the premise of a flexible product credit engine, which we could then use to develop the product’s key competitive advantage. This idea had to be tested on the primary target audience, and it would serve as our foundation for the mobile app development hypotheses.


Do you keep failing to form hypotheses for mobile app development? Devlight will be happy to point you in the right direction. Be sure to contact us!

Hypothesis-Driven Development: Summary

Do not worry that your hypotheses will be incorrect. Your objective is not to convince everyone that you are correct. Your objective is to establish a prosperous business. The hypotheses are merely a tool to get you there, so the more of them you debunk — the better. Finally, keep in mind that a hypothesis-driven development:

  • is about a sequence of tests to support or refute a theory. Determine value!
  • provides a quantifiable result and promotes ongoing learning;
  • enables the user — a critical stakeholder — to provide continuous feedback to comprehend the unknowns better;
  • enables us to comprehend the changing environment and gradually expose the value.

Apps that were developed based on tested hypotheses have a big and advantageous impact on a company’s business objectives. Utilizing data that is closely related to the company’s goal guarantees that customers’ needs are prioritized. 

Hypothesis-Driven Development: FAQ

How do you correctly formulate hypotheses for a mobile application?

A correct hypothesis:

  • predicts the connection and result;
  • is brief and simple;
  • is formed without any ambiguity or presumptions;
  • contains measurable outcomes that can be tested;
  • is specific and pertinent to the research subject or issue.

“If these modifications are made to a particular independent variable, then we will notice a change in a specific dependent variable” can be the fundamental format. Here is an example of a basic hypothesis: “Food apps with vibrant designs are used more frequently than those made in a dull color palette.”

How to Build a List of Hypotheses for a Mobile App?

First, brainstorm different assumptions based on your product specifics, requests, or expected results. Then, you may group the hypotheses according to a fixed criterion: their common problem, the complexity of the further experiment needed, or their overall time span.

Alternatively, you may group your findings after conducting the experiments and present the hypotheses according to how well they address the examined issue.

What Are the Benefits of Hypothesis-Driven Development?

Hypothesis-driven development is a methodology that involves creating a hypothesis, devising experiments to validate it, and utilizing data to steer product development decisions. The advantages of this approach are numerous:

Accelerated time-to-market: By gathering data and examining hypotheses, development teams can make informed decisions and improve the speed with which products are brought to market.

Enhanced product quality: Hypothesis-driven development helps teams identify and rectify potential issues early in the development process, resulting in higher-quality products.

Increased user satisfaction: By focusing on user needs and verifying hypotheses with real users, development teams can create products that better align with user preferences, leading to heightened user satisfaction.

Optimal resource utilization: Hypothesis-driven development enables teams to concentrate on the most promising ideas, resulting in better utilization of their time and resources.

Decreased risk: By evaluating hypotheses and gathering data, development teams can identify and address potential issues early, reducing the likelihood of launching a product that fails to meet user requirements or fails to achieve its goals. The list of hypotheses for the mobile app is a priceless repository for organizational data.


How to Generate and Validate Product Hypotheses


Every product owner knows that it takes effort to build something that'll cater to user needs. You'll have to make many tough calls if you wish to grow the company and evolve the product so it delivers more value. But how do you decide what to change in the product, your marketing strategy, or the overall direction to succeed? And how do you make a product that truly resonates with your target audience?

There are many unknowns in business, so many fundamental decisions start from a simple "what if?". But they can't be based on guesses, as you need some proof to fill in the blanks reasonably.

Because there's no universal recipe for successfully building a product, teams collect data, do research, study the dynamics, and generate hypotheses according to the given facts. They then take corresponding actions to find out whether they were right or wrong, make conclusions, and most likely restart the process again.

On this page, we thoroughly inspect product hypotheses. We'll go over what they are, how to create hypothesis statements and validate them, and what goes after this step.

What Is a Hypothesis in Product Management?

A hypothesis in product development and product management is a statement or assumption about the product, planned feature, market, or customer (e.g., their needs, behavior, or expectations) that you can put to the test, evaluate, and base your further decisions on. This may, for instance, regard the upcoming product changes as well as the impact they can result in.

A hypothesis implies that there is limited knowledge. Hence, the teams need to undergo testing activities to validate their ideas and confirm whether they are true or false.

What Is a Product Hypothesis?

Hypotheses guide the product development process and may point at important findings to help build a better product that'll serve user needs. In essence, teams create hypothesis statements in an attempt to improve the offering, boost engagement, increase revenue, find product-market fit quicker, or for other business-related reasons.

It's sort of like an experiment with trial and error, yet it is data-driven and should be unbiased. This means that teams don't make assumptions out of the blue. Instead, they turn to the collected data, conducted market research, and factual information, which helps avoid completely missing the mark. The obtained results are then carefully analyzed and may influence decision-making.

Such experiments backed by data and analysis are an integral aspect of successful product development and allow startups or businesses to dodge costly startup mistakes .

When do teams create hypothesis statements and validate them? To some extent, hypothesis testing is an ongoing process to work on constantly. It may occur during various product development life cycle stages, from early phases like initiation to late ones like scaling.

In any event, the key here is learning how to generate hypothesis statements and validate them effectively. We'll go over this in more detail later on.

Idea vs. Hypothesis Compared

You might be wondering whether ideas and hypotheses are the same thing. Well, there are a few distinctions.

What's the difference between an idea and a hypothesis?

An idea is simply a suggested proposal. Say, a teammate comes up with something you can bring to life during a brainstorming session or pitches in a suggestion like "How about we shorten the checkout process?". You can jot down such ideas and then consider working on them if they'll truly make a difference and improve the product, strategy, or result in other business benefits. Ideas may thus be used as the hypothesis foundation when you decide to prove a concept.

A hypothesis is the next step, when an idea gets wrapped with specifics to become an assumption that may be tested. As such, you can refine the idea by adding details to it. The previously mentioned idea can be worded into a product hypothesis statement like: "The cart abandonment rate is high, and many users flee at checkout. But if we shorten the checkout process by cutting down the number of steps to only two and get rid of four excessive fields, we'll simplify the user journey, boost satisfaction, and may get up to 15% more completed orders".

A hypothesis is something you can test in an attempt to reach a certain goal. Testing isn't obligatory in this scenario, of course, but the idea may be tested if you weigh the pros and cons and decide that the required effort is worth a try. We'll explain how to create hypothesis statements next.


How to Generate a Hypothesis for a Product

The last thing those developing a product want is to invest time and effort into something that won't bring any visible results, fall short of customer expectations, or won't live up to their needs. Therefore, to increase the chances of achieving a successful outcome and product-led growth , teams may need to revisit their product development approach by optimizing one of the starting points of the process: learning to make reasonable product hypotheses.

If the entire procedure is structured, this may assist you during such stages as the discovery phase and raise the odds of reaching your product goals and setting your business up for success. Yet, what's the entire process like?

How hypothesis generation and validation works

  • It all starts with identifying an existing problem . Is there a product area that's experiencing a downfall, a visible trend, or a market gap? Are users often complaining about something in their feedback? Or is there something you're willing to change (say, if you aim to get more profit, increase engagement, optimize a process, expand to a new market, or reach your OKRs and KPIs faster)?
  • Teams then need to work on formulating a hypothesis . They put the statement into concise, short wording that describes what they expect to achieve. Importantly, it has to be relevant, actionable, backed by data, and free of generalizations.
  • Next, they have to test the hypothesis by running experiments to validate it (for instance, via A/B or multivariate testing, prototyping, feedback collection, or other ways).
  • Then, the obtained results of the test must be analyzed . Did one element or page version outperform the other? Depending on what you're testing, you can look into various merits or product performance metrics (such as the click rate, bounce rate, or the number of sign-ups) to assess whether your prediction was correct.
  • Finally, the teams can make conclusions that could lead to data-driven decisions. For example, they can make corresponding changes or roll back a step.

How Else Can You Generate Product Hypotheses?

Such processes imply sharing ideas when a problem is spotted, digging deep into facts, and studying the possible risks, goals, benefits, and outcomes. You may apply various MVP tools (like FigJam, Notion, or Miro) that were designed to simplify brainstorming sessions, systemize pitched suggestions, and keep everyone organized without losing any ideas.

Predictive product analysis can also be integrated into this process, leveraging data and insights to anticipate market trends and consumer preferences, thus enhancing decision-making and product development strategies. This fosters a more proactive and informed approach to innovation, ensuring products are not only relevant but also resonate with the target audience, ultimately increasing their chances of success in the market.

Besides, you can settle on one of the many frameworks that facilitate decision-making processes , ideation phases, or feature prioritization . Such frameworks are best applicable if you need to test your assumptions and structure the validation process. These are a few common ones if you're looking toward a systematic approach:

  • Business Model Canvas (used to establish the foundation of the business model and helps find answers to vitals like your value proposition, finding the right customer segment, or the ways to make revenue);
  • Lean Startup framework (the lean startup framework uses a diagram-like format for capturing major processes and can be handy for testing various hypotheses like how much value a product brings or assumptions on personas, the problem, growth, etc.);
  • Design Thinking Process (is all about interactive learning and involves getting an in-depth understanding of the customer needs and pain points, which can be formulated into hypotheses followed by simple prototypes and tests).


How to Make a Hypothesis Statement for a Product

Once you've indicated the addressable problem or opportunity and broken down the issue in focus, you need to work on formulating the hypotheses and associated tasks. By the way, it works the same way if you want to prove that something will be false (a.k.a. a null hypothesis).

If you're unsure how to write a hypothesis statement, let's explore the essential steps that'll set you on the right track.

Making a Product Hypothesis Statement

Step 1: Allocate the Variable Components

Product hypotheses are generally different for each case, so begin by pinpointing the major variables, i.e., the cause and effect . You'll need to outline what you think is supposed to happen if a change or action gets implemented.

Put simply, the "cause" is what you're planning to change, and the "effect" is what will indicate whether the change is bringing in the expected results. Falling back on the example we brought up earlier, the ineffective checkout process can be the cause, while the increased percentage of completed orders is the metric that'll show the effect.

Make sure to also note such vital points as (captured in the sketch after this list):

  • what the problem and the solution are;
  • what the benefits or the expected impact/successful outcome are;
  • which user group is affected;
  • what the risks are;
  • what kind of experiments can help test the hypothesis;
  • what can measure whether you were right or wrong.
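One lightweight way to keep these points together is a small structured record. Here is a minimal sketch; the type and all field values are hypothetical, based on the checkout example above.

```python
# A minimal sketch: capturing a hypothesis statement's vital points in one record.
from dataclasses import dataclass

@dataclass
class ProductHypothesis:
    problem: str
    solution: str
    expected_impact: str
    affected_users: str
    risks: list[str]
    experiment: str
    success_metric: str

checkout = ProductHypothesis(
    problem="High cart abandonment at checkout",
    solution="Cut checkout to two steps and remove four excessive fields",
    expected_impact="Up to 15% more completed orders",
    affected_users="Shoppers who reach the cart",
    risks=["Shorter form may drop data needed for fulfillment"],
    experiment="A/B test: old vs. new checkout flow",
    success_metric="Completed-orders rate",
)
print(checkout.solution)
```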

Step 2: Ensure the Connection Is Specific and Logical

Mind that generic connections that lack specifics will get you nowhere. So if you're thinking about how to word a hypothesis statement, make sure that the cause and effect include clear reasons and a logical dependency .

Think about the precise, logical link that shows why A affects B. In our checkout example, it could be: fewer steps in the checkout and the removal of excessive fields will speed up the process, help avoid confusion, irritate users less, and lead to more completed orders. That's much more explicit than simply stating that the checkout needs to be changed to get more completed orders.

Step 3: Decide on the Data You'll Collect

Certainly, multiple things can be used to measure the effect. Therefore, you need to choose the optimal metrics and validation criteria that'll best envision if you're moving in the right direction.

If you need a tip on how to create hypothesis statements that won't result in a waste of time, try to avoid vagueness and be as specific as you can when selecting what can best measure and assess the results of your hypothesis test. The criteria must be measurable and tied to the hypotheses . This can be a realistic percentage or number (say, you expect a 15% increase in completed orders or 2x fewer cart abandonment cases during the checkout phase).

Once again, if you're not realistic, then you might end up misinterpreting the results. Remember that sometimes an increase that's even as little as 2% can make a huge difference, so why make 50% the merit if it's not achievable in the first place?

Step 4: Settle on the Sequence

It's quite common that you'll end up with multiple product hypotheses. Some are more important than others, of course, and some will require more effort and input.

Therefore, just as with the features on your product development roadmap , prioritize your hypotheses according to their impact and importance. Then, group and order them, especially if the results of some hypotheses influence others on your list.

Product Hypothesis Examples

To demonstrate how to formulate your assumptions clearly, here are several more apart from the example of a hypothesis statement given above:

  • Adding a wishlist feature to the cart with the possibility to send a gift hint to friends via email will increase the likelihood of making a sale and bring in additional sign-ups.
  • Placing a limited-time promo code banner stripe on the home page will increase the number of sales in March.
  • Moving up the call to action element on the landing page and changing the button text will increase the click-through rate twice.
  • By highlighting a new way to use the product, we'll target a niche customer segment (i.e., single parents under 30) and acquire 5% more leads. 


How to Validate Hypothesis Statements: The Process Explained

There are multiple options when it comes to validating hypothesis statements. To get appropriate results, you have to come up with the right experiment that'll help you test the hypothesis. You'll need a control group or people who represent your target audience segments or groups to participate (otherwise, your results might not be accurate).

What can serve as the experiment you may run? Experiments may take tons of different forms, and you'll need to choose the one that clicks best with your hypothesis goals (and your available resources, of course). The same goes for how long you'll have to carry out the test (say, a period of two months or as little as two weeks). Here are several to get you started.

Experiments for product hypothesis validation

Feedback and User Testing

Talking to users, potential customers, or members of your own online startup community can be another way to test your hypotheses. You may use surveys, questionnaires, or opt for more extensive interviews to validate hypothesis statements and find out what people think. This assumption validation approach involves your existing or potential users and might require some additional time, but can bring you many insights.

Conduct A/B or Multivariate Tests

One of the experiments you may develop involves making more than one version of an element or page to see which option resonates with the users more. As such, you can have a call to action block with different wording or play around with the colors, imagery, visuals, and other things.

To run such split experiments, you can apply tools like VWO, which allow you to easily construct alternative designs and split what your users see (e.g., one half of the users will see version one, while the other half will see version two). You can track various metrics and apply heatmaps, click maps, and screen recordings to learn more about user response and behavior. Mind, though, that the key to such tests is to get as many users as you can and to give the tests time. Don't jump to conclusions too soon, especially if very few people participated in your experiment.

Build Prototypes and Fake Doors

Demos and clickable prototypes can be a great way to save time and money on costly feature or product development. A prototype also allows you to refine the design. However, they can also serve as experiments for validating hypotheses, collecting data, and getting feedback.

For instance, if you have a new feature in mind and want to ensure there is interest, you can utilize such MVP types as fake doors. Make a short demo recording of the feature and place it on your landing page to track interest or test how many people sign up.

Usability Testing

Similarly, you can run experiments to observe how users interact with the feature, page, product, etc. Usually, such experiments are held on prototype testing platforms with a focus group representing your target visitors. By showing a prototype or early version of the design to users, you can view how people use the solution, where they face problems, or what they don't understand. This may be very helpful if you have hypotheses regarding redesigns and user experience improvements before you move on from prototype to MVP development.

You can even take it a few steps further and build a barebone feature version that people can really interact with, yet you'll be the one behind the curtain to make it happen. There were many MVP examples when companies applied Wizard of Oz or concierge MVPs to validate their hypotheses.

Or you can actually develop some functionality but release it for only a limited number of people to see. This is referred to as a feature flag, which can show really specific results but is effort-intensive.


What Comes After Hypothesis Validation?

Analysis is what you move on to once you've run the experiment. This is the time to review the collected data, metrics, and feedback to validate (or invalidate) the hypothesis.

You have to evaluate the experiment's results to determine whether your product hypotheses were valid or not. For example, if you were testing two versions of an element design, color scheme, or copy, look into which one performed best.

It is crucial to be certain that you have enough data to draw conclusions, though, and that it's accurate and unbiased. Because if you don't, this may be a sign that your experiment needs to be run for some additional time, be altered, or held once again. You won't want to make a solid decision based on uncertain or misleading results, right?

What happens after hypothesis validation

  • If the hypothesis was supported , proceed to making corresponding changes (such as implementing a new feature, changing the design, rephrasing your copy, etc.). Remember that your aim was to learn and iterate to improve.
  • If your hypothesis was proven false , think of it as a valuable learning experience. The main goal is to learn from the results and be able to adjust your processes accordingly. Dig deep to find out what went wrong, look for patterns and things that may have skewed the results. But if all signs show that you were wrong with your hypothesis, accept this outcome as a fact, and move on. This can help you make conclusions on how to better formulate your product hypotheses next time. Don't be too judgemental, though, as a failed experiment might only mean that you need to improve the current hypothesis, revise it, or create a new one based on the results of this experiment, and run the process once more.

On another note, make sure to record your hypotheses and experiment results . Some companies use CRMs to jot down the key findings, while others use something as simple as Google Docs. Either way, this can be your single source of truth that can help you avoid running the same experiments or allow you to compare results over time.


Final Thoughts on Product Hypotheses

The hypothesis-driven approach in product development is a great way to avoid uncalled-for risks and pricey mistakes. You can back up your assumptions with facts, observe your target audience's reactions, and be more certain that this move will deliver value.

However, this only makes sense if the validation of hypothesis statements is backed by relevant data that allows you to determine whether a hypothesis is valid. That way, you can be certain that you're developing and testing hypotheses to accelerate your product management rather than making decisions based on guesswork.

Certainly, a failed experiment may bring you just as many insights as a successful one. Teams have to learn from their mistakes, sharpen their hypothesis generation and testing skills, and make improvements based on the results of their experiments. This is an ongoing process, of course, as no product can grow if it isn't iterated on and improved.

If you're only planning to build a product or are already building one, Upsilon can lend you a helping hand. Our team has years of experience providing product development services for growth-stage startups and building MVPs for early-stage businesses , so you can use our expertise to dodge many common mistakes. Don't hesitate to contact us to discuss your needs!
