Pricing: Using the Right Method
In this workshop from the Sawtooth Software Conference, David Lyon, Principal at Aurora Market Modeling, LLC, presented an overview of commonly used pricing research methods, organized into direct questioning techniques and trade-off methods.
Direct Questioning Techniques: Some Decent Approaches, Some Terrible, Many Over-Used
Willingness to Pay: “How much are you willing to pay for this?” At best, for a radically new product, this gets you near the ballpark. Don’t pre-list answer choices – that wipes out the upside possibility. Plot the % willing to pay at each price point. Overall, a pretty weak technique.
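The plot described above is just a cumulative curve over the open-ended answers. A minimal sketch (function name and the answer data are illustrative, not from the talk):

```python
def pct_willing(wtp_answers, price):
    """Share of respondents whose stated willingness to pay is at least `price`."""
    return sum(1 for w in wtp_answers if w >= price) / len(wtp_answers)

# Open-ended WTP answers (illustrative data)
answers = [5, 8, 10, 10, 12, 15, 20, 25]
curve = {p: pct_willing(answers, p) for p in (5, 10, 15, 20)}
```

Plotting `curve` (price on the x-axis, % willing on the y-axis) gives the downward-sloping demand-like curve the talk refers to.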
Monadic Designs: split the sample into groups and present a different price to each. Best to add a buy-response question (“Would you buy it?” – or better yet, a less-variable measure of purchase intent such as an intent scale or an allocation/likelihood measure). Use large samples and match cells carefully.
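Since each respondent sees exactly one price cell, the analysis reduces to a buy rate per cell. A minimal sketch of that aggregation (data and names are illustrative):

```python
from collections import defaultdict

def buy_rates(observations):
    """Buy rate per monadic price cell.
    observations: (price_shown, bought) pairs, one per respondent,
    where each respondent saw exactly one randomly assigned price."""
    totals = defaultdict(lambda: [0, 0])  # price -> [buys, n]
    for price, bought in observations:
        totals[price][0] += int(bought)
        totals[price][1] += 1
    return {p: buys / n for p, (buys, n) in sorted(totals.items())}

obs = [(10, True), (10, False), (10, True),
       (15, True), (15, False), (15, False)]
rates = buy_rates(obs)  # {10: 2/3, 15: 1/3}
```

With real data the cells would be far larger, which is exactly why the talk stresses large, carefully matched samples: each point on the curve comes from a different group of people.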
Sequential Monadic: ask an initial, fully disguised monadic question, then follow up with “What about this price?”, “What about that price?” – sometimes moving from low price to high, sometimes high to low. The problem: there is no way to disguise the focus on price in the follow-ups, which produces consistent over-estimation of price sensitivity and unrealistic results. Huge biases here.
Gabor-Granger: a version of a monadic design that adapts the price levels shown to each respondent based on their responses, starting from a randomly assigned price. Provides a nice option for a randomized, experimental design that needs less total sample size.
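Implementations of the adaptive sequencing vary; one common sketch steps the respondent up the price ladder after each “yes” and down after a “no”, recording the highest acceptable price. The `respond` callback and ladder values below are hypothetical, not from the talk:

```python
def highest_acceptable(respond, ladder, start_idx):
    """Walk a price ladder from a randomly assigned starting rung.
    respond(price) -> True if the respondent says they would buy (assumption).
    Returns the highest acceptable price, or None if all rungs are rejected."""
    i = start_idx
    best = None
    if respond(ladder[i]):
        best = ladder[i]
        # keep stepping up while the respondent keeps accepting
        while i + 1 < len(ladder) and respond(ladder[i + 1]):
            i += 1
            best = ladder[i]
    else:
        # step down until the first acceptable rung (if any)
        while i - 1 >= 0:
            i -= 1
            if respond(ladder[i]):
                best = ladder[i]
                break
    return best

# A deterministic stand-in respondent who accepts any price up to 7:
price = highest_acceptable(lambda p: p <= 7, [2, 4, 6, 8, 10], 2)  # -> 6
```

Randomizing `start_idx` across respondents is what gives the design its experimental character while still using each respondent efficiently.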
Van Westendorp Price Sensitivity Meter: four questions: At what price would you find the product… “too cheap”, “cheap”, “expensive”, “too expensive”? Curve-crossing analysis has been shown to be unrealistic; instead, try plotting the % of respondents who fall in the “normal” range (between their “cheap” and “expensive” answers) against price. OK for early exploration; view with skepticism.
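The suggested alternative to curve crossing is straightforward to compute: for each candidate price, count the respondents whose personal “cheap”-to-“expensive” range contains it. A minimal sketch (data are illustrative):

```python
def pct_in_normal_range(ranges, price):
    """Share of respondents whose 'normal' range (cheap..expensive) contains `price`."""
    return sum(1 for cheap, expensive in ranges if cheap <= price <= expensive) / len(ranges)

# One (cheap, expensive) pair per respondent (illustrative data)
ranges = [(8, 15), (10, 20), (5, 12), (12, 25)]
curve = {p: pct_in_normal_range(ranges, p) for p in (8, 12, 18)}
```

Plotted against price, the curve peaks where the most respondents consider the price “normal”, which is a more defensible summary than the traditional intersection points.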
Newton-Miller-Smith variation of Van Westendorp: add two questions: at [the respondent’s “cheap” price], how likely are you to buy? At [the “expensive” price], ditto. Translate the likelihood scale into purchase probability, then average the probability curves over all respondents. The problem: most of us can’t do the translation to probabilities because we lack the industry data.
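The mechanics of the averaging step can be sketched, with the important caveat the talk itself makes: the scale-to-probability calibration weights below are entirely made up, since real weights require industry data most of us don’t have.

```python
# Hypothetical calibration of a 5-point likelihood scale to purchase probability.
# These weights are illustrative placeholders, NOT validated industry figures.
SCALE_TO_PROB = {5: 0.70, 4: 0.30, 3: 0.10, 2: 0.03, 1: 0.00}

def respondent_prob(price, cheap, p_cheap, expensive, p_expensive):
    """Linearly interpolate one respondent's purchase probability between
    the probabilities implied at their 'cheap' and 'expensive' prices."""
    if price <= cheap:
        return p_cheap
    if price >= expensive:
        return p_expensive
    t = (price - cheap) / (expensive - cheap)
    return p_cheap + t * (p_expensive - p_cheap)

def demand_curve(respondents, prices):
    """Average the per-respondent probability curves at each test price.
    respondents: (cheap, likelihood_at_cheap, expensive, likelihood_at_expensive)."""
    curve = []
    for price in prices:
        probs = [respondent_prob(price, c, SCALE_TO_PROB[lc], e, SCALE_TO_PROB[le])
                 for c, lc, e, le in respondents]
        curve.append(sum(probs) / len(probs))
    return curve
```

Linear interpolation between the two anchor prices is itself an assumption; the shape between a respondent’s “cheap” and “expensive” points is not observed.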
Trade-Off Techniques:
Ratings-Based Full Profile Conjoint: present profiles and have respondents rate or rank them. The problem: it systematically underestimates price sensitivity. The impact of more concrete attributes like price is under-estimated, whereas more emotionally laden attributes are over-estimated.
Price-Only Choice Models: allow showing different prices for different brands, allow different price utilities for different brands, and avoid the systematic underestimation of price effects. Think of each product’s price as a separate “attribute” in the conjoint sense. The design can be fractionalized so that no single respondent has to complete the whole design. Usually modeled at the aggregate level, though using HB would be better. One issue: you cannot simulate scenarios with products deleted from or added to the basic set the design was built around. It is pretty obvious to respondents that we are testing price, but experience shows the resulting price sensitivity is nonetheless realistic.
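The “each product’s price as a separate attribute” idea amounts to giving every brand its own constant and its own price slope in the utility function. A minimal aggregate-logit share simulator under that structure (brand names and coefficient values are illustrative):

```python
import math

# Illustrative brand-specific parameters: each brand gets its own alternative-
# specific constant and its own price coefficient, so price effects can differ
# by brand -- the "separate attribute per product" idea from the talk.
BRANDS = {
    "A": {"const": 1.0, "price_coef": -0.08},
    "B": {"const": 0.5, "price_coef": -0.05},
}

def shares(prices):
    """Aggregate logit shares for a scenario assigning each brand a price."""
    utils = {b: p["const"] + p["price_coef"] * prices[b] for b, p in BRANDS.items()}
    denom = sum(math.exp(u) for u in utils.values())
    return {b: math.exp(u) / denom for b, u in utils.items()}

s = shares({"A": 10, "B": 10})  # shares sum to 1; A leads at equal prices here
```

Note the limitation the talk flags: because the constants and slopes are estimated only for the brands in the design, you cannot add or delete products from this set and still trust the simulation.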
Discrete-Choice Modeling (Choice-Based Conjoint): a whole talk in and of itself, but here are a few highlights. Adding other attributes to a price-only design provides more realism in the task and decreases the bias toward over-sensitivity. Using aggregate Multinomial Logit introduces the “red bus/blue bus” problem (independence of irrelevant alternatives, or IIA). Suppose four products have shares of 40%, 30%, 20%, and 10%, and we cut the price of the first product so that its share increases to 50%. IIA takes that 10% gain proportionally from the other three products (so we now have 50%, 25%, 16.7%, and 8.3%). Logit does this mechanically, no matter what the data might say. Use HB instead of aggregate logit to solve this problem.
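The proportional redistribution that IIA forces can be checked directly with the talk’s own numbers:

```python
def iia_redistribute(shares, winner, new_share):
    """Shrink all other shares proportionally to absorb the winner's gain --
    exactly what aggregate logit (IIA) imposes, regardless of which products
    are actually the closest substitutes."""
    others_total = sum(s for i, s in enumerate(shares) if i != winner)
    scale = (1.0 - new_share) / others_total
    return [new_share if i == winner else s * scale for i, s in enumerate(shares)]

# The talk's example: 40/30/20/10 shares; product 1 cuts price and moves to 50%.
result = iia_redistribute([0.40, 0.30, 0.20, 0.10], 0, 0.50)
# -> [0.5, 0.25, 0.1667, 0.0833] (rounded)
```

In reality a price cut might draw share mostly from the closest substitute; HB sidesteps the problem by estimating individual-level utilities, so substitution patterns emerge from the respondent mix rather than from the IIA restriction.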
For more information about how we can help you with your pricing research, contact us at contactus@elucidatenow.com.