Players in a game are "in equilibrium" if they are rational and accurately predict other players' strategies. In many experiments, however, players are not in equilibrium. An alternative is "cognitive hierarchy" (CH) theory, in which each player assumes that his strategy is the most sophisticated. The CH model has inductively defined strategic categories: step 0 players randomize, and step k thinkers best-respond, assuming that other players are distributed over steps 0 through k − 1. This model fits empirical data and explains why equilibrium theory predicts behavior well in some games and poorly in others. An average of 1.5 steps fits data from many games.
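The step-by-step logic above can be sketched in code. The following is an illustrative implementation of a Poisson cognitive-hierarchy prediction for a symmetric two-player game, assuming a truncated Poisson distribution of thinking steps with mean tau (the abstract's 1.5 is used as the default); the function name and the truncation at ten steps are assumptions, not details from the paper.

```python
import math

def poisson_ch(payoffs, tau=1.5, max_step=10):
    """Poisson cognitive-hierarchy prediction for a symmetric two-player game.

    payoffs[i][j] = row player's payoff for own action i against action j.
    Step-0 players randomize uniformly; a step-k player best-responds to the
    truncated, renormalized Poisson(tau) mixture of steps 0..k-1.
    """
    n = len(payoffs)
    freq = [math.exp(-tau) * tau**k / math.factorial(k)
            for k in range(max_step + 1)]
    strats = [[1.0 / n] * n]                      # step 0: uniform mix
    for k in range(1, max_step + 1):
        z = sum(freq[:k])
        opp = [sum(freq[s] / z * strats[s][j] for s in range(k))
               for j in range(n)]                 # perceived opponent mix
        expected = [sum(payoffs[i][j] * opp[j] for j in range(n))
                    for i in range(n)]
        best = max(range(n), key=lambda i: expected[i])
        strats.append([1.0 if i == best else 0.0 for i in range(n)])
    z = sum(freq)
    return [sum(freq[k] / z * strats[k][i] for k in range(max_step + 1))
            for i in range(n)]
```

For a game with a dominant first action, all step-1-and-above thinkers pick it, so the predicted population mix puts most mass there; the residual on the other action comes entirely from step-0 randomizers.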
Another social science looks at itself

Experimental economists have joined the reproducibility discussion by replicating selected published experiments from two top-tier journals in economics. Camerer et al. found that two-thirds of the 18 studies examined yielded replicable estimates of effect size and direction. This proportion is somewhat lower than unaffiliated experts were willing to bet in an associated prediction market, but roughly in line with expectations from sample sizes and P values. Science, this issue p. 1433
Here we provide further details on the replications; the estimation of standardized effect sizes and complementary replicability indicators; the implementation of the prediction markets and surveys; the comparison of prediction market beliefs, survey beliefs, and replication outcomes; the comparison of reproducibility indicators to experimental economics and the psychological sciences; and additional results and data for the individual studies and markets. The code used for the estimation of replication power, standardized effect sizes, all complementary replication indicators, and all results is posted at OSF (https://osf.io/pfdyw/).

Replications

Inclusion criteria

We replicated 21 experimental studies in the social sciences published between 2010 and 2015 in Nature and Science. We included all studies that fulfilled our inclusion criteria for: (i) the journal and time period, (ii) the type of experiment, (iii) the subjects included in the experiment, (iv) the equipment and materials needed to implement the experiment, and (v) the results reported in the experiment. We did not exclude studies that had already been subject to a replication, as this could affect the representativeness of the included studies. We define and discuss the five inclusion criteria below.

Journal and time period: We included experimental studies published in Nature and Science between 2010 and 2015. The reason for focusing on these two journals is that they are typically considered the two most prestigious general science journals. Articles published in these journals are considered exciting, innovative, and important, which is also reflected in their high impact factors.

[Table notes: * Number of observations (number of individuals in parentheses). † Replicated: significant effect (p < 0.05) in the same direction as in the original study. ‡ Statistical power to detect 50% of the original effect size r. § Relative standardized effect size.]
[Table notes: * Belief about the probability of replicating in stage 1 (90% power to detect 75% of the original effect size). † Predicted added probability of replicating in stage 2 (90% power to detect 50% of the original effect size) compared to stage 1. * Mean number of tokens (points) invested per transaction. † Mean number of shares bought or sold per transaction.]
Reduction of new product development cycle time and improvements in product performance have become strategic objectives for many technology-driven firms. These goals may conflict, however, and firms must explicitly consider the tradeoff between them. In this paper we introduce a multistage model of the new product development process which captures this tradeoff explicitly. We show that if product improvements are additive (over stages), it is optimal to allocate maximal time to the most productive development stage. We then indicate how optimal time-to-market and its implied product performance targets vary with exogenous factors such as the size of the potential market, the presence of existing and new products, profit margins, the length of the window of opportunity, the firm's speed of product improvement, and competitor product performance. We show that some new product development metrics employed in practice, such as minimizing break-even time, can be sub-optimal if firms are striving to maximize profits. We also determine the minimal speed of product improvement required for profitably undertaking new product development, and discuss the implications of product replacement which can occur whenever firms introduce successive generations of new products. Finally, we show that an improvement in the speed of product development does not necessarily lead to an earlier time-to-market, but always leads to enhanced products.

Keywords: new product development, time-to-market, new product performance
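The additive-improvement result can be illustrated with a minimal sketch: if total performance is a linear sum of per-stage improvement rates times time allocations, any discretionary time is best given to the single most productive stage. The function name and the per-stage minimum-time floor are illustrative assumptions, not the paper's model.

```python
def allocate_stage_time(rates, total_time, min_time=0.0):
    """Sketch of the additive special case: performance = sum(r_i * t_i),
    so all time beyond each stage's floor goes to the stage with the
    highest improvement rate (the floor `min_time` is an assumption)."""
    n = len(rates)
    slack = total_time - n * min_time
    assert slack >= 0, "total_time must cover the per-stage minimum"
    times = [min_time] * n
    best = max(range(n), key=lambda i: rates[i])
    times[best] += slack
    return times
```

With rates (1.0, 3.0, 2.0) and a 10-unit budget, the whole budget lands on the second stage; a nonzero floor simply shifts the leftover slack to that stage.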
Self-tuning experience weighted attraction (EWA) is a one-parameter theory of learning in games. It addresses a criticism that an earlier model (EWA) has too many parameters, by fixing some parameters at plausible values and replacing others with functions of experience so that they no longer need to be estimated. Consequently, it is econometrically simpler than the popular weighted fictitious play and reinforcement learning models. The functions of experience which replace free parameters "self-tune" over time, adjusting in a way that selects a sensible learning rule to capture subjects' choice dynamics. For instance, the self-tuning EWA model can turn from weighted fictitious play into averaging reinforcement learning as subjects equilibrate and learn to ignore inferior foregone payoffs. The theory was tested on seven different games, and compared to the earlier parametric EWA model and a one-parameter stochastic equilibrium theory (QRE). Self-tuning EWA does as well as EWA in predicting behavior in new games, even though it has fewer parameters, and fits reliably better than the QRE equilibrium benchmark.
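For concreteness, here is a sketch of one round of the generic parametric EWA attraction update that self-tuning EWA builds on. Note this uses free parameters phi (decay) and delta (weight on foregone payoffs) rather than the paper's self-tuning functions of experience; the function signature and the rho = phi simplification are assumptions for illustration.

```python
import math

def ewa_update(attractions, N, chosen, payoffs, phi, delta, lam=1.0):
    """One round of parametric EWA updating (generic sketch, not the
    self-tuning functional forms).
    attractions[j]: prior attraction of strategy j
    N: experience weight carried from previous rounds
    payoffs[j]: payoff strategy j would have earned this round
    phi: decay of past attractions; delta: weight on foregone payoffs."""
    rho = phi                           # common simplification: rho = phi
    N_new = rho * N + 1.0
    new = []
    for j, A in enumerate(attractions):
        w = 1.0 if j == chosen else delta   # chosen payoff fully weighted
        new.append((phi * N * A + w * payoffs[j]) / N_new)
    # logit response: choice probabilities from the updated attractions
    exps = [math.exp(lam * a) for a in new]
    total = sum(exps)
    return new, N_new, [e / total for e in exps]
```

Setting delta = 0 recovers averaging reinforcement learning (only the chosen strategy is reinforced), while delta = 1 weights foregone payoffs fully, as in weighted fictitious play; the self-tuning model moves delta between these poles as a function of experience.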
The authors develop and test a new model of store choice behavior whose basic premise is that each shopper is more likely to visit the store with the lowest total shopping cost. The total shopping cost is composed of fixed and variable costs. The fixed cost is independent of, whereas the variable cost depends on, the shopping list (i.e., the products and their respective quantities to be purchased). Besides travel distance, the fixed cost includes a shopper's inherent preference for the store and historic store loyalty. The variable cost is a weighted sum of the quantities of items on the shopping list multiplied by their expected prices at the store. The article has three objectives: (1) to model and estimate the relative importance of fixed and variable shopping costs, (2) to investigate customer segmentation in response to shopping costs, and (3) to introduce a new measure (the basket size threshold) that defines competition between stores from a shopping cost perspective. The model controls for two important phenomena: Consumer shopping lists might differ from the collection of goods ultimately bought, and shoppers might develop category-specific store loyalty.
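The cost decomposition above can be sketched as follows. This is an illustrative simplification, not the authors' estimated model: the dictionary field names, the signs on preference and loyalty (treated as cost reductions), and the simple logit mapping from costs to visit probabilities are all assumptions.

```python
import math

def total_cost(store, shopping_list):
    """Fixed cost (distance, preference, loyalty -- independent of the list)
    plus variable cost (expected basket price -- depends on the list).
    Field names and signs are illustrative assumptions."""
    fixed = store["distance"] - store["preference"] - store["loyalty"]
    variable = sum(qty * store["price"][item]
                   for item, qty in shopping_list.items())
    return fixed + variable

def visit_probabilities(stores, shopping_list, sensitivity=1.0):
    """Logit choice over total shopping costs: lower-cost stores are
    visited more often, matching the probabilistic 'more likely' premise."""
    weights = [math.exp(-sensitivity * total_cost(s, shopping_list))
               for s in stores]
    z = sum(weights)
    return [w / z for w in weights]
```

Because the variable cost scales with basket size, a small basket can favor a nearby high-price store while a large basket tips the choice toward a cheaper but more distant one, which is the intuition behind the basket size threshold.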
The Bass diffusion model is a well-known parametric approach to estimating the new product demand trajectory over time. This paper generalizes the Bass model by allowing for a supply constraint. In the presence of a supply constraint, potential customers who are not able to obtain the new product join the waiting queue, generating backorders and potentially reversing their adoption decision, resulting in lost sales. Consequently, they do not generate the positive "word-of-mouth" that is typically assumed in the Bass model, leading to significant changes in the new product diffusion dynamics. We study how a firm should manage its supply processes in a new product diffusion environment with backorders and lost sales. We consider a make-to-stock production environment and use optimal control theory to establish that it is never optimal to delay demand fulfillment. This result is interesting because immediate fulfillment may accelerate the diffusion process and thereby result in a greater loss of customers in the future. Using this result, we derive closed-form expressions for the resulting demand and sales dynamics over the product life cycle. We then use these expressions to investigate how the firm should determine the size of its capacity and the time to market its new product. We show that delaying a product launch to build up an initial inventory may be optimal and can be used as a substitute for capacity. Also, the optimal time to market and capacity increase with the coefficients of innovation and imitation in the adoption population. We compare our optimal capacity and time to market policies with those resulting from exogenous demand forecasts in order to quantify the value of endogenizing demand.

Keywords: Marketing-Operation Interface, Bass Diffusion Model, New Product Forecasting, Capacity Planning
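The key dynamic described above, where only customers who actually obtain the product spread word-of-mouth, can be illustrated with a discrete-time sketch. This is a simplification for intuition, not the paper's continuous-time optimal-control model; the per-period capacity cap, the backlog abandonment fraction, and the function name are all assumptions.

```python
def bass_with_capacity(p, q, m, cap, periods, loss_rate=0.0):
    """Discrete-time Bass diffusion under a per-period capacity cap.
    p: coefficient of innovation, q: coefficient of imitation,
    m: market potential. Only fulfilled adopters (`installed`) generate
    word-of-mouth; unmet demand is backordered, and a fraction
    `loss_rate` of the backlog abandons each period (lost sales)."""
    installed, backlog = 0.0, 0.0
    sales = []
    for _ in range(periods):
        remaining = max(m - installed - backlog, 0.0)
        new = (p + q * installed / m) * remaining  # Bass hazard on remaining pool
        demand = backlog + new
        shipped = min(demand, cap)                 # supply constraint binds here
        backlog = (demand - shipped) * (1.0 - loss_rate)
        installed += shipped
        sales.append(shipped)
    return sales
```

Setting `cap` very large recovers the standard unconstrained Bass sales path; with a binding cap, sales flatten at capacity while the backlog grows, delaying the imitation-driven word-of-mouth that depends on the installed base.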