Decision Analysis
Module One Notes
1.1: Introduction to
Quantitative Analysis
This is a course about the use of
quantitative methods to assist in decision making. The subject matter
makes up the discipline known as decision sciences, or
you might hear it called management science or
operations research.
We will be covering a number of descriptive and prescriptive
mathematical models that have proven useful to managers, generally
since World War II, although some of the models date back to the
early 1900s. Mathematical models are simply representations of
reality that provide a framework for a scientific approach to the
study of managerial problems.
Models also help us gain insight into relationships such as the
relationship that exists between an objective and a constraint. For
example, we will be using linear programming prescriptive models to
represent the relationship between a profit maximization objective
and one or more limited resource constraints.
Suppose a company manufactures two products, X and Y. Product X has a
profit contribution of $100 and Y $300. The mathematical expression
for maximizing profit is simply:
Maximize Profit = 100 X + 300 Y
What should this company do to maximize profit? That's right, make an infinite number of Y's! But the real world presents constraints to every operation. Suppose that it takes 4 hours of labor to assemble an X, and 20 hours of labor to assemble each Y. If there are 140 hours of labor available in the next production period, we can expand the problem as follows:
Maximize Profit = 100 X + 300 Y
Subject to Labor Constraint: 4 X + 20 Y <= 140
The labor constraint equation uses numbers to represent the technological rate of resource consumption of the two products (the coefficients of 4 and 20), as well as a less than or equal sign to put an upper limit on the 140 hour labor constraint. If we wanted to continue our strategy of making all Y's and no X's, we see that we could make 7 Y's by using all of the 140 hours of labor.
Let X = 0, then
4 (0) + 20 Y = 140 gives
20 Y = 140 or
Y = 140/20 = 7
Solving for Profit, we get:
Profit = 100 (0) + 300 (7) = $2,100
But there is another alternative worth
examining. We could make all X's and no Y's. Since X's require much
less labor, we can make 35 of them, at a profit of $3,500 - the
better alternative. To be sure we have the best or optimal
solution that maximizes our profit criterion, we would need
the mathematical model to check the profit contribution of all other
product mix combinations.
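That check can be sketched as a brute-force search in a few lines of Python. This is a hypothetical helper, not part of the course software; it simply tries every whole-number product mix that fits the labor constraint:

```python
# Brute-force check of every whole-number product mix:
# maximize 100 X + 300 Y subject to 4 X + 20 Y <= 140.
def best_mix(labor_hours=140):
    best = (0, 0, 0)  # (profit, x, y)
    for x in range(labor_hours // 4 + 1):      # at most 35 X's
        for y in range(labor_hours // 20 + 1):  # at most 7 Y's
            if 4 * x + 20 * y <= labor_hours:
                profit = 100 * x + 300 * y
                best = max(best, (profit, x, y))
    return best

print(best_mix())  # (3500, 35, 0): make all X's
```

Enumeration works here, but it scales terribly; that is exactly why the formal linear programming methods later in the course matter.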
Obviously, this is a small problem that can be solved by observation.
Imagine what the linear program problem formulation looks like for an
auto manufacturer which must select among perhaps 10 models of one
make of car subject to a multitude of labor, equipment, and market
constraints. For these large problems we need a more formal problem
solving process.
The scientific approach to the study of managerial problems
incorporates a problem solving process. The text presents a
seven-step process:
1. Identify and define the problem
2. Determine the set of alternative solutions
3. Determine the criterion or criteria that will be used to evaluate the alternatives
4. Evaluate the alternatives (using an appropriate quantitative method or model)
5. Choose an alternative
6. Implement the selected alternative (the decision)
7. Evaluate the results and determine if a satisfactory solution has been obtained
Decision-making is concerned with the first five steps
and will be our focus through this course. The first three steps are
concerned with structuring the problem, and the next two with
analysis. When the manager adds qualitative considerations to the
selection process, and follows through with implementation and
evaluation, the problem solving process is complete.
We will start off the course with descriptive models
that describe outcomes for selected alternatives, and in that manner
assist the decision-maker in selecting between alternatives. These
models include decision analysis, forecasting, project management,
and queuing or waiting line models.
For example, I hope you will enjoy the module on
queuing or waiting lines simply because
most of us probably have "war stories" about waiting in lines some
where at some time. Did you know that there are descriptive models
that predict how many customers will be standing in a given line, and
for how long, given that we know the customer arrival rate and system
service rate inputs? In this module, we will introduce models that
will prove to you why banks and Disney utilize a single line/multiple
server configuration to minimize waiting time in line for their
customers.
I hope you will also enjoy the forecasting module where
simple mathematical techniques are used to discover patterns such as
trend and seasonality in our data. In the forecasting module, we
drive home how important it is to measure and report the reliability
of a forecast - almost as important as the forecast itself.
The project management material helps us learn about
the importance of focusing on critical activities, and on the concept
of slack resources. The decision analysis material we
start with helps us appreciate the value that can be placed on
information as we study a model designed to help managers select
between decisions when faced with several state of nature
scenarios.
The first prescriptive or optimization
model that we will examine will be the classic economic order
quantity (EOQ) inventory model, which managers have used for the past
80 years to determine how much and when to order. The
model is prescriptive in the sense that we provide inputs such as
demand, the cost to carry and the cost to order inventory, and the
model selects the order quantity alternative that minimizes total
incremental inventory cost.
We wrap up the course with linear and integer
programming optimization models that select alternatives
which provide the best value for the problem criteria subject to
structural and environmental constraints. An example of an
optimization problem was presented at the beginning of this section.
Optimization models are very powerful and are used to assist
decision-makers in capital budgeting, project selection, resource
allocation, and scheduling and transportation problems.
Before we get started with our first module, Decision Analysis, you
may want to load the CD-ROM that comes with "The Management
Scientist." I took a break to load my CD-ROM on my laptop at this
point.
1. First, I broke open the seal and took out the CD-ROM (meaning I just bought the book!) and placed it in the CD drive of my personal computer.
Warning: As I proceeded with the installation steps below, I was warned to close any open applications before proceeding with the installation. Since I had Word open, I stopped the installation, closed Word, then continued with Step 2 below.
2. Next, following the installation instructions on page 2 of The Management Scientist, I clicked the Start button in the Windows Task Bar and chose Run.
3. On the Command line, I typed D:\Setup (after noting to change "D" if my CD-ROM drive has a different letter designation).
4. I then clicked the OK button (or pressed the Enter key).
5. I then followed the prompts of the installation.
You may want to open the program now, or wait
till we get into an application in the next section of these notes.
Instructions for running "The Management Scientist" program are also
on page 2 of the software manual. Go to Programs from the
Start button; go to "The Management Scientist" program and
click on The Management Scientist or its logo. Click
Continue on the opening screen and you are ready to select one
of the twelve applications.
Let's now get started with our first subject, Decision
Analysis.
1.2: Structuring the Decision
Analysis Problem
Decision analysis, our first
quantitative method in this course, is concerned with selecting an
option or alternative course of action (the decision)
given prior knowledge of its outcome (called a payoff)
for various future scenarios (called states of nature
or events). The decision-maker has control over the process of
selecting an alternative course of action, but not over the states of
nature, at least not in the short run. Let's illustrate these terms
with an example.
Suppose a manufacturer has three alternative courses of action for the next production period. They can make their product (we will use the symbol d1 to represent the first decision alternative), buy their product from another manufacturer and sell it to their customers (d2), or they can do nothing in the next production period (d3). Suppose further that the manufacturer has a simple forecast for the next production period: demand will be low (state of nature symbol s1) or high (s2). The final input for the structure of the decision analysis problem is the outcome or payoff resulting from each state of nature/decision combination.
A convenient structure for displaying the
decision alternatives, states of nature and payoffs is a payoff
table.
Table 1.2.1

                                 States of Nature
Decision Alternatives      Low Demand, s1    High Demand, s2
Make Product, d1               -$20,000          $90,000
Buy Product, d2                 $10,000          $70,000
Do Nothing, d3                   $5,000           $5,000
The payoff table shows, for example, that making the product leads to a profit of $90,000 should the demand turn out to be high, or a loss of $20,000 if demand is low. I will follow the convention in the text by giving the symbol Vij to represent the payoff associated with decision alternative i and state of nature j.
To continue with the table, buying the product shows that the manufacturer can avoid a loss, compared to making the product, if the demand turns out to be low since the manufacturer avoids paying fixed production costs. If the demand is high, the manufacturer makes a profit, but not as much as if they made the product since they miss the production economies of scale. If the manufacturer does nothing, a small profit is earned from selling existing inventory that just meets a low level of demand (both V31 and V32 equal $5,000).
Another way to display the structure of the
decision problem is with a decision tree.
Figure 1.2.1

                                    ----- s1: Low Demand
       --- d1: Make -------->
                                    ----- s2: High Demand

                                    ----- s1: Low Demand
       --- d2: Buy --------->
                                    ----- s2: High Demand

                                    ----- s1: Low Demand
       --- d3: Do Nothing -->
                                    ----- s2: High Demand
*****************
YOU HAVE INPUT THE FOLLOWING PAYOFF TABLE:
******************************************
                    STATES OF NATURE
DECISION            (payoff data as entered)

DECISION RECOMMENDATION
***********************
USING THE OPTIMISTIC CRITERION

DECISION        CRITERION    RECOMMENDED
ALTERNATIVE     VALUE        DECISION
Note that the output repeats your input,
then provides the criterion column for making the decision, as well
as the recommended decision for the criterion selected. For practice,
go ahead and run "The Management Scientist" for the conservative and
minimax regret criteria. Please note that the format of your OUT file
will be a little different than above as I had to insert tables to
conform to html format - you will not have to do that for a Word
file.
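For a peek behind the software, the three criteria for decision making without probabilities can be sketched in a few lines of Python. This is only a sketch using the payoff table of Table 1.2.1 (low demand listed first), not The Management Scientist's own code:

```python
# Payoffs as [low demand, high demand] for each decision alternative.
payoffs = {"make": [-20_000, 90_000],
           "buy": [10_000, 70_000],
           "do nothing": [5_000, 5_000]}

optimistic = max(payoffs, key=lambda d: max(payoffs[d]))    # maximax
conservative = max(payoffs, key=lambda d: min(payoffs[d]))  # maximin
# regret = best payoff achievable in a state minus the payoff received
best_in_state = [max(col) for col in zip(*payoffs.values())]
regret = {d: max(best - v for v, best in zip(payoffs[d], best_in_state))
          for d in payoffs}
minimax_regret = min(regret, key=regret.get)

print(optimistic, conservative, minimax_regret)  # make buy buy
```

Note how the three criteria can disagree: the optimist chases the $90,000 payoff, while the conservative and minimax regret criteria both favor "buy."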
Your first case is Problem 18, McHuffter Condominiums, pp. 129-130 in
the text. The material we have covered up to this point should
prepare you to work on Part b. of this problem. You may also want to
sketch out the decision tree (Part a.), although that is not assigned
for turn-in with the other parts. As you work with "The Management
Scientist" to answer the various questions of this and other
problems, note that you have the opportunity to save your input file
as well as the output. For small problems such as this one, that may
not be so important - but later on, such as in the forecasting material
where there is substantial input, it might be handy to save the data
input to a file.
This concludes the first approach to decision analysis: decision
making without probabilities, sometimes called decision making under
uncertainty. One of the critiques of the approach is
that it works with limited information on the states of nature.
Because of that, a second approach has been developed to add
information on the states of nature concerning their relative
likelihood of occurrence. That is our next section of
material.
1.4: Decision Making
with Probabilities
In this approach, the decision-maker
has information concerning the relative likelihood of each of the
states of nature. It is sometimes called "decision making under
risk." The criterion used in decision-making strategy with
probabilities is to select that decision so as to maximize the
expected value of the outcome. To illustrate the approach, let's
refer again to the payoff table for our make-buy example, this time
adding a row for probabilities and a column for the expected value of
the decision alternatives.
Table 1.4.1

                               States of Nature
Decision Alternative     Low Demand, s1   High Demand, s2   Expected Value
Make Product, d1             -$20,000         $90,000           $51,500
Buy Product, d2               $10,000         $70,000           $49,000
Do Nothing, d3                 $5,000          $5,000            $5,000
Probabilities                    0.35            0.65
EV (d1) = [P (s1) * V11] + [P (s2) * V12]
        = [0.35 * -20,000] + [0.65 * 90,000]
        = $51,500
The EV of $51,500 represents the long run
outcome of repeated "make product" experiments. That is, if we could
theoretically conduct the "make product" decision 100 times, 35 times
we would lose $20,000, and 65 times we would make $90,000. The
weighted average of these outcomes is $51,500. In reality, we do not
conduct the experiment 100 times - we make the decision once and we
are either going to lose $20,000 or make $90,000. However, and
this is very important, we use the expected value approach to
assist us in making the decision.
I should add, at this point, that to abide by the laws of
probability, each probability must be a real number between 0 and 1,
and the probabilities for the states of nature must sum to
one. For this to happen, the states of nature must be mutually
exclusive and exhaustive - that is, there cannot also be a state of
nature called, for example, medium demand. If there was such a state
of nature, it would have to be added to the payoff table and
accounted for with a third probability.
The expected values for the second and third decision alternatives
are calculated in a similar fashion. These are shown in Table 1.4.1.
Following the criterion of selecting that decision alternative which
maximizes the expected value of the outcome, we select decision
alternative d1 as our optimal or best strategy.
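The expected value criterion can be sketched in a few lines of Python, using the payoffs and probabilities from Table 1.4.1 (a sketch only, not the course software):

```python
# Expected value of each decision alternative under the two states of nature.
payoffs = {"make": {"low": -20_000, "high": 90_000},
           "buy": {"low": 10_000, "high": 70_000},
           "do nothing": {"low": 5_000, "high": 5_000}}
p = {"low": 0.35, "high": 0.65}

# EV(d) = sum over states of P(state) * payoff(d, state)
ev = {d: sum(p[s] * v for s, v in row.items()) for d, row in payoffs.items()}
for d, value in ev.items():
    print(f"{d}: ${value:,.0f}")
best = max(ev, key=ev.get)
print("best:", best)  # best: make
```

The loop confirms the hand calculations: $51,500 for "make," $49,000 for "buy," and $5,000 for "do nothing."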
The expected value computations can be shown adjacent to the state of
nature nodes in the decision tree, as illustrated in Figure
1.4.1.
Figure 1.4.1

                                    ----- s1: Low Demand: P(s1) = 0.35
       --- d1: Make -------->  B
           EVB = 0.35 * -20,000 + 0.65 * 90,000 = $51,500
                                    ----- s2: High Demand: P(s2) = 0.65

                                    ----- s1: Low Demand: P(s1) = 0.35
       --- d2: Buy ---\\---->  C
           EVC = 0.35 * 10,000 + 0.65 * 70,000 = $49,000
                                    ----- s2: High Demand: P(s2) = 0.65

                                    ----- s1: Low Demand: P(s1) = 0.35
       --- d3: Do Nothing \\-> D
           EVD = 0.35 * 5,000 + 0.65 * 5,000 = $5,000
                                    ----- s2: High Demand: P(s2) = 0.65
The decision tree shows the expected value computations by each of
the state of nature nodes labeled B, C and D. Since the expected
value for the "make" decision is the maximum, that decision branch
is selected as the optimum strategy. The "buy" and "do nothing"
branches are pruned, as indicated by the crosshatched
lines. The decision tree does not provide additional information, it
simply presents a picture of the decision strategy with
probabilities.
Using The Management Scientist for Decision Making with
Probabilities
Let's return to the software for a moment and rerun this example
problem using the expected value criterion for selecting a decision
alternative.
After you open "The Management Scientist" program, click on
File, then New, then enter 3 decision
alternatives, and 2 states of nature just as before. This
time, select State of Nature Probabilities, and then click
OK. Enter the numbers for the payoff table and the
probabilities for the two states of nature. Next, select
Solution, then Solve, and keep the default selection
Maximize the Payoff. As before, you can print this solution,
or better, save it to a file. I saved the solution to an Out file,
then inserted it to this open Word document as before.
Printout 1.4.1
DECISION ANALYSIS
*****************
YOU HAVE INPUT THE FOLLOWING PAYOFF TABLE:
******************************************
                    STATES OF NATURE
DECISION            (payoff and probability data as entered)

DECISION RECOMMENDATION
***********************
USING THE EXPECTED VALUE CRITERION

DECISION        CRITERION    RECOMMENDED
ALTERNATIVE     VALUE        DECISION

EXPECTED VALUE OF PERFECT INFORMATION IS 10,500.00
The printout information confirms the computations
we made earlier in this section. For now, ignore the Expected Value
of Perfect Information shown at the bottom of the Decision
Recommendation. We will cover that in Section 1.5. Before that, we
need to go over a very important topic for any initial solution to
any quantitative method - sensitivity analysis.
Sensitivity Analysis
When you apply quantitative approaches to solve management
problems it is always a good idea to ask, "How sensitive is my
solution to changes in the data input?" This is especially true when the
input consists of subjective estimates. When the solution remains the
optimal solution given large changes in selected important inputs,
there is a high level of confidence in the solution. On the other
hand, when the optimal solution strategy changes for very small
changes to one or more inputs, the decision-maker should be
cautious before implementation. Perhaps some time should be spent in
refining the input.
Suppose the decision-maker examines the solution and realizes that
the "buy" decision lost out to the "make" decision by only $2,500.
Further suppose the decision-maker has little confidence in the
payoff for the "buy product" decision under the "high demand" state
of nature. The question is, "at what "buy product, high demand"
payoff would I be indifferent between the "make" and "buy" decisions,
given all other applicable data input items remain the same?" To
answer this question, we note that the point of indifference occurs
where the "make" and the "buy" EV's are equal. Mathematically, this
is expressed as the equation:
EVB = EVC
Now writing out the computational formulas for the EV's:
[P (s1) * V11] + [P (s2) * V12] = [P (s1) * V21] + [P (s2) * V22]
Now, substitute the data input values except for the unknown "buy product, high demand" payoff, V22:
[0.35 * -20,000] + [0.65 * 90,000] = [0.35 * 10,000] + [0.65 * V22]
Next, solve the equation for V22:
0.65 * V22 = 51,500 - [0.35 * 10,000]
V22 = 48,000 / 0.65
V22 = 73,846
So, if the decision-maker erred by just 5.5%
(estimating the "buy product, high demand" payoff to be $70,000 when
it is really $73,846), the decision-maker would have selected the
wrong strategy.
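The break-even algebra above is easy to check in Python (a quick sketch using the Table 1.4.1 inputs):

```python
# Break-even "buy product, high demand" payoff V22.
p1, p2 = 0.35, 0.65
ev_make = p1 * -20_000 + p2 * 90_000   # $51,500, the EV of the "make" decision
# Setting EV(make) = EV(buy) and solving for V22:
v22 = (ev_make - p1 * 10_000) / p2
print(round(v22))  # 73846
```

Any "buy, high demand" payoff above roughly $73,846 flips the optimal strategy from "make" to "buy."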
As a rule of thumb, if a decision strategy changes based on an input
change of five percent or less, we say that the solution is very
sensitive to change, and it might be wise to consider investing in
more accurate data input values.
Probabilities are sometimes also subjective estimates when there has
been no history or experience with a particular state of nature. By
observation, we can see that as long as P (s1) remains at
or below 0.35 the decision-maker will favor the optimal "make
product" decision since a low probability for s1 gives
little relative weight to the $20,000 loss. But as P (s1)
increases (resulting in a decrease for P (s2)), the
expected values approach a point of indifference or a break-even
point. The question is, at what P (s1) would the
decision-maker be indifferent between the "buy" and "make" decisions,
all other data input remaining the same?
Again, we begin by setting EVB equal to EVC to
represent break-even:
EVB = EVC
Next, we write out the computational formulas for the EV's:
[P (s1) * V11] + [P (s2) * V12] = [P (s1) * V21] + [P (s2) * V22]
Now, substitute the data input values except for the unknown state of nature probabilities:
[P (s1) * -20,000] + [P (s2) * 90,000] =
[P (s1) * 10,000] + [P (s2) * 70,000]
You may recall from an algebra class that we cannot solve one equation when there are two unknowns. However, recall that the sum of the probabilities of the states of nature must equal one. That is, with two states of nature:
P (s1) + P (s2) = 1
We can rewrite this as:
P (s2) = 1 - P (s1)
Then, we can substitute [1 - P (s1)] for each P (s2) in the break-even equation and continue solving for P (s1).
{P (s1) * -20,000} + {[1 - P (s1)] * 90,000} =
{P (s1) * 10,000} + {[1 - P (s1)] * 70,000}
Now simplify the equation:
- 20,000 P (s1) + 90,000 - 90,000 P (s1) =
10,000 P (s1) + 70,000 - 70,000 P (s1)
Finally, collect terms and solve for P (s1):
-110,000 P (s1) + 90,000 = - 60,000 P (s1) + 70,000
- 50,000 P (s1) = - 20,000
P (s1) = -20,000 / - 50,000 = 0.40
Conclusion: as long as P (s1) stays less than
or equal to 0.40, the "make" decision strategy will remain the
optimal strategy.
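The same indifference probability falls out of one line of Python once the terms are collected (a sketch of the algebra above, not the course software):

```python
# From -20000p + 90000(1-p) = 10000p + 70000(1-p), collecting terms gives
#   p * (90000 - 70000 + 10000 + 20000) = 90000 - 70000
p_break = (90_000 - 70_000) / (90_000 - 70_000 + 10_000 + 20_000)
print(p_break)  # 0.4
```

Below this probability of low demand, "make" stays optimal; above it, "buy" takes over.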
I was once involved in a decision analysis problem with a group of
medical doctors who owned their practice. The problem of interest was
whether to pay off a loan on a five-year schedule, or draw down cash
and pay off the loan now. We did a lot of sensitivity analysis on the
probabilities attached to states of nature representing the up or
down swing of the economy in the next five years. I know that working
formula after formula on the white board in their office would have
been boring to the doctors. So instead, we used the computer software
package to simply give us results for all of the desired changes. The
point is, whether you use "brute force" by changing the input values
in the payoff and probability tables in the software package, or run
a number of break-even formulas, it is a good idea to always check
how sensitive your solution is to changes in the inputs.
What if you find out that your solution is sensitive to changes in
the inputs? In that case, it may be appropriate to obtain additional
information concerning your input numbers to make them more accurate
or to increase your confidence in their original value. However,
while information has value, it is sometimes expensive to obtain. Our
last topic in decision analysis focuses on the value of
information.
I hope you are enjoying the first quantitative method of the course.
This technique has found countless applications in assisting managers
make decisions. I have personally helped companies use decision
analysis in equipment loan, construction, and enterprise resource
planning situations. I continue to believe that the formal discipline of
structuring the problem, which forces the decision-maker to consider
decision alternatives, states of nature, probabilities and payoffs,
is almost as valuable as coming up with the decision itself. As well,
every project I have worked on found considerable benefit from
attention to sensitivity analysis. Sorry for rambling - back to the
subject.
1.5 The Value of
Information
We made the point that information has
value, but it also costs money and time to obtain. So the question
becomes: how much would we be willing to pay for additional information?
The upper limit is established by the first concept, the Value of
Perfect Information.
The Value of Perfect Information
Let's look at the payoff table once again.
Table 1.5.1

                               States of Nature
Decision Alternative     Low Demand, s1   High Demand, s2
Make Product, d1             -$20,000         $90,000
Buy Product, d2               $10,000         $70,000
Do Nothing, d3                 $5,000          $5,000
Probabilities                    0.35            0.65
If we had perfect information, we would choose the best payoff for
each state of nature: "buy" (V21 = $10,000) when low demand is
foretold, and "make" (V12 = $90,000) when high demand is foretold.
The expected value with perfect information is then:
EV w PI = [P (s1) * V21] + [P (s2) * V12]
        = [0.35 * 10,000] + [0.65 * 90,000] = $62,000
Recall that without this perfect information, the expected value was $51,500 by going with d1. The difference between the expected value with perfect information and the expected value without perfect information is called the Expected Value of Perfect Information (EVPI):
EVPI = EV w PI - EV w/o PI = 62,000 - 51,500 = $10,500
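The EVPI arithmetic can be sketched in Python using the Table 1.5.1 data (a sketch only, not the software's code):

```python
payoffs = {"make": {"low": -20_000, "high": 90_000},
           "buy": {"low": 10_000, "high": 70_000},
           "do nothing": {"low": 5_000, "high": 5_000}}
p = {"low": 0.35, "high": 0.65}

# With perfect information: take the best payoff in each state of nature.
ev_with_pi = sum(p[s] * max(row[s] for row in payoffs.values()) for s in p)
# Without: the best expected value across the decision alternatives.
ev_without = max(sum(p[s] * row[s] for s in p) for row in payoffs.values())
print(round(ev_with_pi - ev_without))  # 10500
```

The $10,500 difference is the most the extra information could possibly be worth.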
EVPI places an upper bound on how
much we should be willing to pay someone to get additional
information on the future states of nature in order to improve our
decision making strategy.
For example, we might be willing to pay a consultant for a market
research study to help us learn more about the demand states of
nature. We don't expect the research study to result in perfect
information, but we hope it will represent a good proportion of the
$10,500. We also know that we do not want to pay more than $10,500
for the new information.
New information about the states of nature takes the form of
revisions to the probability estimates in decision analysis. This
information may be obtained through market research studies, product
testing or some other sampling process. In decision analysis, this
topic is called the Expected Value of Sample
Information.
The Expected Value of Sample Information (EVSI)
Recall that we have probabilities on the states of nature, P
(s1) and P (s2). These are more technically
called prior probabilities. If we plan to obtain sample information,
the results will be expressed as revisions to these prior
probabilities.
Suppose a market research consultant offers to conduct a demand
analysis that will result in two indicators.
I1 will be the indicator used to represent a
prediction of low demand in the next production period.
I1 then corresponds to the state of nature we labeled
s1; the difference is that one is a prediction and one is
an actual occurrence. I2 will be the indicator used to
represent a prediction of high demand in the next production period.
This indicator corresponds to our state of nature s2.
This indicator information will be used to calculate conditional
probabilities. P (s1 given I1) will be the
probability that s1 will occur given or conditioned on the
consultant's I1 prediction. P (s1 given
I1) is generally written as P (s1 |
I1). Since there are two states of nature and two
indicators, we will need three additional conditional probabilities,
P (s1 | I2), P (s2 | I1)
and P (s2 | I2).
In order to compute these conditional probabilities and then EVSI, we
need to know the consultant's track record. That is, how has the
consultant done in prior market demand studies? In terms of decision
analysis, we want to know the accuracy of indicator predictions given
what state of nature actually occurred. Suppose the following
information is available from the consultant.
Table 1.5.2

Consultant's Track              Actual State of Nature
Record, P ( Ii | sj )      Low Demand, s1    High Demand, s2
Predict Low, I1                 0.90              0.20
Predict High, I2                0.10              0.80
We are almost there, but note that these conditional probabilities
are the reverse of what we need. To get from P (I1 |
sj) to P (sj | I1), we need to set
up a table and do some calculations. Same for Indicator 2. Let's do
I1 first.
Table 1.5.3

State of     Prior      Track Record      Joint              Posterior
Nature, sj   P (sj)     P ( I1 | sj )     P ( I1 and sj )    P ( sj | I1 )
s1             0.35         0.90              0.315              0.708
s2             0.65         0.20              0.130              0.292
                                      P ( I1 ) = 0.445
The first three columns in Table 1.5.3 are from
input information. The joint probabilities column is computed by
multiplying the prior probabilities times the consultant track record
probabilities in each row. The conditional probabilities in the last
column are computed by dividing the joint probabilities by
P(I1).
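These revision steps are just Bayes' rule, and can be sketched in Python. The priors come from Table 1.4.1; the track-record values are those implied by the joint probabilities in Table 1.5.3:

```python
# Bayes revision for indicator I1 (consultant predicts low demand).
prior = {"s1": 0.35, "s2": 0.65}
track_i1 = {"s1": 0.90, "s2": 0.20}                  # P(I1 | sj)

joint = {s: prior[s] * track_i1[s] for s in prior}   # P(I1 and sj)
p_i1 = sum(joint.values())                           # P(I1)
posterior = {s: joint[s] / p_i1 for s in joint}      # P(sj | I1)

print(round(p_i1, 3))             # 0.445
print(round(posterior["s1"], 3))  # 0.708
print(round(posterior["s2"], 3))  # 0.292
```

Swapping in the I2 track record (0.10 and 0.80) reproduces the Table 1.5.4 column in the same way.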
Note how consultant research information can change the estimates on
the probabilities. Now the probability of low demand becomes 0.71
given that the consultant predicts low demand, and taking into
account the consultant's track record. This revision reflects prior
accuracy for the low demand case. Note that the probability of high
demand, s2 goes from 0.65 to 0.29 given that the
consultant predicts that demand will be low.
I believe the above steps illustrate the power of the complete
decision analysis technique. The steps show how to weigh the value of
information against its expense. Now, we do similar calculations for
Indicator two in Table 1.5.4.
Table 1.5.4

State of     Prior      Track Record      Joint              Posterior
Nature, sj   P (sj)     P ( I2 | sj )     P ( I2 and sj )    P ( sj | I2 )
s1             0.35         0.10              0.035              0.063
s2             0.65         0.80              0.520              0.937
                                      P ( I2 ) = 0.555
Although we will use the computer to produce the
above conditional probabilities as well as the following results, it
is instructive to picture the decision problem incorporating the
sample information. This is done in the decision trees in Figure
1.5.1, 1.5.2 and 1.5.3.
Figure 1.5.1

                                    ----- I1: Predict Low Demand: P(I1) = 0.445
       -- Do Mkt Research ->  B
          EVB = 0.445 * 27,500 + 0.555 * 83,100 = $58,400
                                    ----- I2: Predict High Demand: P(I2) = 0.555

       -- No Mkt Research ->  EV w/o Perfect Information = $51,500 (Make Product)
Figure 1.5.1 adds a new decision to our
problem: the decision of whether or not to engage the services of the
market research consultant. If the manufacturer does not engage the
services, the expected value has already been determined to be
$51,500 by selecting d1, "make" the product. Note that I
have added the expected value that would be obtained should the
decision maker engage the services of the marketing consultant,
$58,400. The first event after engaging the services of the
consultant is the consultant's predictions.
The consultant's predictions are then followed by the manufacturer's
decisions to "make," "buy," or "do nothing," which are then followed
by the actual states of nature, low and high demand. Figure 1.5.2
illustrates the decision tree leading from the node labeled C in
Figure 1.5.1, the consultant's prediction of low demand.
Figure 1.5.2

                                    ----- s1: Low Demand: P(s1|I1) = 0.707
       --- d1: Make ---\\--->  E
           EVE = 0.707 * -20,000 + 0.292 * 90,000 = $11,900
                                    ----- s2: High Demand: P(s2|I1) = 0.292

                                    ----- s1: Low Demand: P(s1|I1) = 0.707
       --- d2: Buy --------->  F
           EVF = 0.707 * 10,000 + 0.292 * 70,000 = $27,500
                                    ----- s2: High Demand: P(s2|I1) = 0.292

                                    ----- s1: Low Demand: P(s1|I1) = 0.707
       --- d3: Do Nothing \\-> G
           EVG = 0.707 * 5,000 + 0.292 * 5,000 = $5,000
                                    ----- s2: High Demand: P(s2|I1) = 0.292
This part of the decision tree provides back-up
for the calculation of the $27,500 expected value at node F. That
value represents the highest expected value and is associated with
the "buy" decision. Thus, if the consultant is hired to conduct the
market research, and predicts low demand, the best decision is to
"buy" the product. When working out decision trees by hand, we always
start at the right of the tree, move left, pruning branches coming
from decision nodes as we go. You should observe that every state of
nature node gets an expected value computation, the sum of the
products of probabilities times payoffs. Every decision node simply
gets the highest expected value from its branches.
Next we examine the computations for the node labeled D in Figure
1.5.1., the event that occurs if we engage the consultant and the
consultant's prediction is high demand. This is shown in Figure
1.5.3.
Figure 1.5.3

                                    ----- s1: Low Demand: P(s1|I2) = 0.063
       --- d1: Make -------->  E
           EVE = 0.063 * -20,000 + 0.937 * 90,000 = $83,100
                                    ----- s2: High Demand: P(s2|I2) = 0.937

                                    ----- s1: Low Demand: P(s1|I2) = 0.063
       --- d2: Buy ---\\---->  F
           EVF = 0.063 * 10,000 + 0.937 * 70,000 = $66,200
                                    ----- s2: High Demand: P(s2|I2) = 0.937

                                    ----- s1: Low Demand: P(s1|I2) = 0.063
       --- d3: Do Nothing \\-> G
           EVG = 0.063 * 5,000 + 0.937 * 5,000 = $5,000
                                    ----- s2: High Demand: P(s2|I2) = 0.937
EVSI = EV with SI - EV without SI = 58,400 - 51,500 = $6,900
Like EVPI, this is a powerful piece of
information. We now know how much we would be willing to spend to
obtain market research from the consultant - that is a great
bargaining chip to have at negotiation time!
Efficiency of the Sample Information
Pause and Reflect
Recall that the expected value of perfect information was $10,500 - that placed a theoretical upper bound on how much we would be willing to pay for perfect information. But forecasts and consultants are not perfect. If we can get the "track record" of a forecaster or consultant, we can determine how their information stacks up against that theoretical ideal.
To compute the efficiency of the sample information, we simply divide the EVSI by EVPI and convert the decimal to a percent:
Efficiency = EVSI / EVPI = 6,900 / 10,500 = .657 = 65.7 %
The closer this number is to 100%, the
better.
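The two summary measures can be checked with a few lines of arithmetic. The sketch below simply uses the hand-rounded figures from these notes:

```python
# EVSI and its efficiency, using the hand-rounded figures from these notes.

ev_with_si = 58_400      # expected value with (imperfect) sample information
ev_without_si = 51_500   # expected value of the best decision, priors only
evpi = 10_500            # expected value of perfect information

evsi = ev_with_si - ev_without_si
efficiency = evsi / evpi

print(evsi)                 # 6900
print(f"{efficiency:.1%}")  # 65.7%
```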
I am not suggesting that every decision analysis exercise include
this last procedure of computing EVSI and its efficiency. Often the
track record of the sample information producer may not be available.
But when it is (and it should ALWAYS be available if you do the
market research "in-house"), this is a powerful tool for putting
limits on how much you would want to spend to acquire more
information.
I appreciate that the computation of EVSI is tedious - it has taken
me hours to produce the decision trees in PowerPoint, convert them
to html images and insert them into these pages (I am not
complaining - I always wanted to learn how to convert PowerPoint
slides to html text images, and now I know!). Fortunately, "The
Management Scientist" does all of this work for us. We will close
this first module with a demonstration of the computer output for
EVPI and EVSI.
Using "The Management Scientist" for Computing EVPI and EVSI
Open up the software program as before. This time, when you
select the Decision Analysis Module, then file, and enter a
new problem, click State of Nature Probabilities after
you enter the number of decisions and number of states of nature.
After you click State of Nature Probabilities, also click
Compute Decision Strategies, followed by entering 2
when you are asked for the Number of Indicator Outcomes. The
data entry screen then asks for the payoffs, the probabilities and
the "track record" conditional probabilities about the
consultant.
When you select solve, the following file can either be
printed or saved to a Management Scientist Out file.
Printout 1.5.1
DECISION ANALYSIS
*****************
YOU HAVE INPUT THE FOLLOWING PAYOFF TABLE:
******************************************

                           STATES OF NATURE
DECISION             s1: Low Demand   s2: High Demand
d1: Make                 -20,000           90,000
d2: Buy                   10,000           70,000
d3: Do Nothing             5,000            5,000

PROBABILITIES OF STATES:   P(s1) = 0.35   P(s2) = 0.65

DECISION RECOMMENDATION
***********************
USING THE EXPECTED VALUE CRITERION

RECOMMENDED DECISION ALTERNATIVE: d1 (Make)   CRITERION VALUE: 51,500.00

EXPECTED VALUE OF PERFECT INFORMATION IS 10,500.00

DEVELOPING A DECISION STRATEGY BASED ON SAMPLE INFORMATION
**********************************************************
YOU HAVE INPUT THE FOLLOWING PROBABILITIES FOR INDICATOR OUTCOMES:
******************************************************************

GIVEN STATE      INDICATOR OUTCOME 1   INDICATOR OUTCOME 2
s1                      0.90                  0.10
s2                      0.20                  0.80

OPTIMAL DECISION STRATEGY
*************************

IF INDICATOR     BEST          EXPECTED
OUTCOME          DECISION      VALUE
I1 (Low)         d2: Buy       27,528.09
I2 (High)        d1: Make      83,063.06

EXPECTED VALUE OF THE ABOVE STRATEGY 58,350.00
EXPECTED VALUE OF THE SAMPLE INFORMATION 6,850.00
EFFICIENCY OF THE SAMPLE INFORMATION 65.2%
You should be able to pick out all of the terms
and their values in the computer output from our discussion. Please
note that I rounded my expected value computations to the nearest 100,
so my numbers will not be as accurate as the computer output. I
should also note that there are special decision analysis software
packages, even one created by Microsoft. The special packages have
the same capabilities as our text's package, plus many more features.
For example, the special packages will "draw" the decision tree,
allow multiple branching (such as state of nature branches followed by
more state of nature branches), and handle much larger problems than
10 decision nodes. The special packages also cost in the hundreds of
dollars and only do decision analysis. Our package may have
limitations, but it includes 11 other modules!
Well, that's it! You should now be ready to work the first case
assignment, Problem 18, parts b.-g. in the text. Please e-mail any
questions that you may have. Remember, for full credit, the problem
is due January 27, 2001.
That finishes our first quantitative method. I believe decision
analysis is a good place to start since it illustrates the five-step
decision-making process in a picture called the decision tree. The
tree shows the decision alternatives to the problem, the factors that
affect the alternatives (states of nature and their probabilities),
the outcomes of the decision alternatives (payoffs) and the criterion
for selection (maximize expected value). Please note that our
illustration involved maximizing expected value - if you were working
a minimization problem (such as selecting alternatives so as to
minimize costs) you would simply indicate that criterion at the
appropriate input screen.