For a given linear programming model, finding the optimal solution is of major importance, but it is not the only information available. A great deal of sensitivity information is also produced: information that describes what happens to the solution when the data values are changed.
Sensitivity analysis studies how the uncertainty in the output of a model can be attributed to the different sources of uncertainty in the model's inputs. Uncertainty analysis is a related practice that quantifies the overall uncertainty in the model output. Ideally, uncertainty and sensitivity analysis should be run in tandem.
If a study involves some form of statistical modelling (forming mathematical equations involving variables), sensitivity analysis is used to investigate how robust the study is. It is also used for a wide range of other purposes, including decision making, error checking in models, understanding the relationship between input and output variables, and improving communication between the people who make decisions and the people who build the models.
For example, some variables in a budgeting process are always uncertain: operating expenses, future tax rates and interest rates may not be known with great accuracy. Sensitivity analysis helps us understand how the business, model or system being analyzed will be affected if these variables deviate from their expected values.
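To make this concrete, here is a minimal one-at-a-time sensitivity sketch in Python on a toy budgeting model; all figures and the profit formula are hypothetical, chosen only for illustration.

```python
# One-at-a-time sensitivity on a toy budgeting model; every number here is
# made up purely for illustration.
def profit(operating_expenses, tax_rate, interest_rate,
           revenue=1_000_000, debt=500_000):
    """After-tax profit = (revenue - expenses - interest on debt) * (1 - tax rate)."""
    pre_tax = revenue - operating_expenses - interest_rate * debt
    return pre_tax * (1 - tax_rate)

base = dict(operating_expenses=600_000, tax_rate=0.30, interest_rate=0.05)
print(f"base-case profit: {profit(**base):,.0f}")

# Perturb each input by +10% while holding the others at their base values.
for name in base:
    perturbed = dict(base, **{name: base[name] * 1.10})
    change = profit(**perturbed) - profit(**base)
    print(f"{name} +10%  ->  profit changes by {change:,.0f}")
```

The input whose perturbation moves the output the most is the one most worth estimating carefully.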
To formulate a problem as a linear program, the certainty assumption must be invoked: the values of the data are assumed to be known, and decisions are made on that basis. This assumption is somewhat doubtful in practice; the data might be unknown, guessed at, or otherwise inaccurate. Some numbers in the data matter far more than others, so it is worth asking: can we identify the important numbers, and can we determine the effect of misestimating them?
Linear programming is well suited to answering these questions, because data changes show up directly in the optimal tableau. A case study involving sensitivity analysis, solved with Solver, is worked through in the later part of the report.
1.2. TABLEAU SENSITIVITY ANALYSIS
Assume that we solve a linear program by hand and end up with an optimal table (or tableau, to use the more technical term). We know what an optimal tableau looks like: it has all non-negative values in row 0 (which we also refer to as the cost row), all non-negative right-hand-side values, and an identity matrix embedded in it. To determine the effect of a change in the data, we try to determine how that change would have affected the final tableau, and then repair the final tableau accordingly.
1.2.1. COST CHANGES
The first change we consider is changing a cost coefficient by some delta in the original problem. The original problem and its optimal tableau are given. If exactly the same calculations were done on the modified problem, we would end up with the same final tableau except that the corresponding cost entry would be lower by delta (this happens because the only operations performed on the cost row are adding multiples of rows 1 through m to it: we never add or subtract multiples of row 0 to or from the other rows). For example, let us take the problem
Max 3x + 2y
Subject to
x + y <= 4
2x + y <= 6
x, y >= 0
The optimal tableau for this problem, after adding slack variables s1 and s2 to put it in standard form, is:

        x    y    s1    s2    RHS
z       0    0     1     1     10
y       0    1     2    -1      2
x       1    0    -1     1      2
Now let us assume that the cost of x has been changed to 3 + delta in the original formulation, from its previous value of 3. If we perform exactly the same operations as before, that is, the same pivots, we end up with the tableau:

             x    y    s1    s2    RHS
z       -delta    0     1     1     10
y            0    1     2    -1      2
x            1    0    -1     1      2
This is not a proper tableau for the current basis, because the cost-row entry of the basic variable x is no longer zero. It can be corrected, keeping the same basic variables, by adding delta times the last row to the cost row. This gives the tableau:

        x    y         s1         s2          RHS
z       0    0    1-delta    1+delta    10+2delta
y       0    1          2         -1            2
x       1    0         -1          1            2
This tableau has the same basic variables and the same variable values as our previous solution, except for z. It represents an optimal solution only if the cost row is all non-negative, which is true only if
1 – delta >= 0
1 + delta >= 0
which holds for -1 <= delta <= 1. For any delta in that range, our previous basis is optimal. However, the new objective is 10 + 2delta.
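This range can be verified numerically by re-solving the perturbed problem for a few values of delta. The following sketch assumes Python with scipy.optimize.linprog is available; since linprog minimizes, the objective is negated.

```python
# Re-solve max (3 + delta)x + 2y  s.t.  x + y <= 4,  2x + y <= 6,  x, y >= 0
# for several values of delta, to confirm the range found above.
from scipy.optimize import linprog

A_ub = [[1, 1], [2, 1]]
b_ub = [4, 6]

for delta in [-1.5, -0.5, 0.0, 0.5, 1.5]:
    res = linprog(c=[-(3 + delta), -2], A_ub=A_ub, b_ub=b_ub, method="highs")
    x, y = res.x
    print(f"delta = {delta:+.1f}:  x = {x:.1f}, y = {y:.1f}, objective = {-res.fun:.1f}")
# For -1 <= delta <= 1 the solution stays at (x, y) = (2, 2) with objective
# 10 + 2*delta; outside that range the optimal corner point changes.
```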
In the previous example, we changed the cost of a basic variable. The following example shows what happens when the cost of a non-basic variable is changed.
Max 3x + 2y + 2.5w
Subject to
x + y + 2w <= 4
2x + y + 2w <= 6
x,y,w >= 0
The optimal tableau in this case is:

        x    y     w    s1    s2    RHS
z       0    0   1.5     1     1     10
y       0    1     2     2    -1      2
x       1    0     0    -1     1      2
Now let us change the cost of w from 2.5 to 2.5 + delta. Doing the same calculations as before results in the tableau:

        x    y            w    s1    s2    RHS
z       0    0    1.5-delta     1     1     10
y       0    1            2     2    -1      2
x       1    0            0    -1     1      2
In this case we still have a valid tableau, and it represents an optimal solution as long as 1.5 - delta >= 0, that is, delta <= 1.5. As long as the objective coefficient of w is no more than 4 in the original formulation, the solution we have found remains optimal.
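Again, the bound can be checked numerically by re-solving with the coefficient of w just below and just above 4. A sketch, assuming scipy.optimize.linprog is available:

```python
# max 3x + 2y + c_w*w   s.t.  x + y + 2w <= 4,  2x + y + 2w <= 6,  x, y, w >= 0
from scipy.optimize import linprog

A_ub = [[1, 1, 2], [2, 1, 2]]
b_ub = [4, 6]

for c_w in [2.5, 3.9, 4.1]:
    res = linprog(c=[-3, -2, -c_w], A_ub=A_ub, b_ub=b_ub, method="highs")
    x, y, w = res.x
    print(f"c_w = {c_w}:  (x, y, w) = ({x:.1f}, {y:.1f}, {w:.1f}), objective = {-res.fun:.2f}")
# Expected: w stays at 0 for c_w up to 4 (its reduced cost 4 - c_w stays non-negative);
# at c_w = 4.1 the variable w becomes worth producing and enters the basis.
```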
The value in the cost row of the simplex tableau is called the reduced cost. It is always 0 for a basic variable, and in an optimal tableau it is non-negative for every other variable.
To summarize: changing objective function values in the original formulation changes the cost row of the final tableau. It may be necessary to add a multiple of a row to the cost row to keep the basis in proper form. The analysis that follows then depends only on keeping the cost row non-negative.
1.3. SHADOW PRICES, THEIR RANGES, AND PRICING OUT
Most LP software available today provides an extra output in the form of shadow prices on the constraints. The shadow price of a constraint is the rate at which the optimal value of the problem changes with small changes in the available amount of the resource corresponding to that constraint.
Shadow prices can give managers a thorough understanding of the economics of an enterprise. They not only tell one how much one should be prepared to pay for an increase in capacity, but also help in "pricing out" new activities, such as introducing a new product. "Pricing out" means comparing the contribution obtained from one unit of the activity with the opportunity cost of the scarce resources it diverts from other uses. Suppose the manager of another division approaches our plant manager, Judas, with a request to "rent" one ton per day of machine time to produce a product of his own. How much should Judas charge? This is where shadow prices help Judas figure out exactly what price to ask in order to strike a good bargain.
When a firm evaluates a new product, it typically checks whether the item is "profitable" by comparing its revenue with its accounting costs, which include the variable costs as well as costs allocated to cover overheads. There are two problems with this. First, some overheads will not change regardless of whether the project is pursued. Second, the analysis ignores the scarce resources that would be diverted from other activities. The correct analysis compares the new product's revenue with its direct variable costs plus the opportunity cost of diverting the needed scarce resources to it. These opportunity costs are not obvious when a firm produces a variety of products using a wide range of resources. The advantage of linear programming is that shadow prices allow a very quick and efficient assessment of the opportunity costs of diverting resources.
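For the small example of Section 1.2, the shadow prices and a pricing-out calculation can be read directly from an LP solver. The sketch below assumes Python with scipy.optimize.linprog; it recovers the duals of the two constraints and uses them to price out the activity w from the earlier example.

```python
# Shadow prices for max 3x + 2y  s.t.  x + y <= 4,  2x + y <= 6, and a
# pricing-out check for the activity w (resource usage (2, 2), contribution 2.5).
from scipy.optimize import linprog

res = linprog(c=[-3, -2], A_ub=[[1, 1], [2, 1]], b_ub=[4, 6], method="highs")

# linprog minimizes the negated objective, so the shadow prices of the original
# maximization problem are the negated marginals of the <= constraints.
shadow = -res.ineqlin.marginals
print("shadow prices:", shadow)          # expected [1.0, 1.0] for this example

contribution = 2.5
opportunity_cost = 2 * shadow[0] + 2 * shadow[1]   # resources one unit of w consumes
print("opportunity cost of w's resources:", opportunity_cost)
print("worth introducing w?", contribution > opportunity_cost)   # False: 2.5 < 4
```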
1.4. APPLICATIONS
Sensitivity analysis can be used for a number of purposes including:
- To make models much simpler and easier to grasp
- To check how robust the model predictions are to changes in the inputs
- As an important element of quality assurance
- To assess the impact of different input assumptions and scenarios
It also provides information on:
- Factors that contribute to the change in output
- The region within the space of input factors for which the model output is maximum, minimum, or meets some pre-defined criterion
- Interactions between factors
- Optimal regions within the space of factors, for use in a subsequent study
1.4.1. CHEMISTRY
Sensitivity analysis is used quite frequently in many areas of physics and chemistry.
With growing knowledge of the kinetic mechanisms under investigation and with the increasing power of modern computing technologies, detailed complex kinetic models are used more often than ever before as predictive tools and as aids for understanding the underlying phenomena. A kinetic model is usually described by a set of differential equations representing the concentration-time relationships. Sensitivity analysis has proved to be an apt instrument for examining such complex kinetic models.
Kinetic parameters are frequently estimated from experimental data via nonlinear estimation. Sensitivity analysis can be used for optimal experimental design, including determining initial conditions, measurement positions and sampling times, so as to generate informative data that improve the accuracy of the estimates. A complex model may contain a great number of parameters, but not all of them are estimable. Sensitivity analysis can identify the important parameters that can be determined from the available data while screening out the unimportant ones. It can also identify redundant species and reactions, allowing for model simplification.
1.4.2. ENVIRONMENTAL
Computer environmental models are used in a wide variety of studies and applications. For instance, global climate models are used for weather forecasting as well as for studying climate change.
Such models are also used for environmental decision making at a local scale, for example to analyze the impact of a waste water treatment plant or the behaviour of bio-filters for polluted waste water.
In both of these scenarios, sensitivity analysis helps in understanding how the many sources of uncertainty contribute to the uncertainty in the model output and to system performance in general. Depending on the complexity of the model, different sampling strategies are advisable, and sensitivity indices may have to be chosen to cover multivariate sensitivity analysis and correlated inputs.
1.4.3. BUSINESS
In a decision problem, the analyst should identify the cost drivers and the other quantities for which better knowledge is needed before an informed decision can be made. Conversely, some quantities have little influence on the predictions, so resources can be saved, without compromising precision, by relaxing conditions on them.
Sensitivity analysis can also help in a number of other situations, for example:
- To identify important assumptions or compare alternative model structures
- To guide future data collection
- To discover important criteria
- To optimize the tolerance of manufactured parts in terms of the uncertainty in the variables
- To optimize the allocation of resources
- To simplify the model
However, there are quite a few issues associated with sensitivity analysis in the context of business:
The variables are often interdependent, which makes examining each one individually unrealistic. For example, changing one variable such as sales volume will most likely also affect other factors, such as the selling price.
The assumptions are based on past data and experience and may therefore not hold in the future, in which case the analysis built on them is not valid either.
Assigning maximum and minimum values involves subjective interpretation. For example, one person's forecast may be more conservative than that of another person performing the same analysis. This subjectivity affects the accuracy and objectivity of the analysis, because different people view things from different perspectives.
1.4.4. ENGINEERING
Modern engineering design makes heavy use of computer models to test designs before they are manufactured. Sensitivity analysis allows designers to determine the effects and sources of uncertainty, in the interest of building robust models. Sensitivity analyses have been performed on biomechanical models, among others.
1.4.5. IN META-ANALYSIS
In a meta-analysis, a sensitivity analysis tests whether the results are sensitive to restrictions on the data included. Common examples are restricting the analysis to large trials only, higher-quality trials only, or more recent trials only. If the results are stable under these restrictions, this strengthens the confidence that the observed effect is real and that the conclusions can be relied upon.
1.4.6. MULTI-CRITERIA DECISION MAKING
Sensitivity analysis can reveal surprising insights about the subject of interest. For example, the field of multi-criteria decision making studies the problem of selecting the best option among several competing alternatives, which is clearly important in decision making. In such a setting each option is described in terms of a set of evaluative criteria, and the criteria are associated with weights signifying their importance. It is natural to assume that the larger the weight of a criterion, the more important that criterion is. This, however, may not always be the case. It is important to distinguish here between criticality and importance: a criterion is critical if a small change in it can cause a significant change in the final solution. Criteria with very small weights can turn out to be far more critical in a given situation than ones with larger weights. Thus a sensitivity analysis may shed light on issues not anticipated at the beginning of a study, which can improve the effectiveness of the initial study enormously and help in the successful implementation of the final solution.
1.5. PITFALLS AND DIFFICULTIES
The most common difficulties associated with sensitivity analysis include:
- There are too many model inputs to understand and analyze. Screening can be used to reduce the dimensionality.
- The model takes too long to run. Emulators can be used to reduce the number of model runs needed.
- The amount of information available to construct probability distributions for the inputs is insufficient. Distributions can be built with the help of expert advice, although even then it may be difficult to build them with high confidence, and the subjectivity of these distributions or ranges will strongly influence the sensitivity analysis.
- The purpose of the analysis is not clear. Different statistical tests and measures can be applied to the problem, giving different factor rankings; the test should instead be chosen to suit the purpose of the analysis. For example, one uses Monte Carlo filtering if one is interested in which factors are most responsible for producing high or low output values (a minimal sketch of this idea follows this list).
- Too many model outputs are taken into account. This may be acceptable for the quality assurance of sub-models, but should be avoided when presenting the results of the complete analysis.
- Piecewise sensitivity, i.e. performing sensitivity analysis on one sub-model at a time. This approach is non-conservative, as it can miss interactions among factors in different sub-models (a Type II error).
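To illustrate the Monte Carlo filtering idea mentioned in the list above, the following sketch uses a toy two-input model (entirely made up for illustration): it samples the inputs, splits the runs into high- and low-output groups, and compares the input distributions between the groups.

```python
# Bare-bones Monte Carlo filtering on a toy model y = 4*x1 + 0.5*x2.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x1 = rng.uniform(0, 1, n)
x2 = rng.uniform(0, 1, n)
y = 4 * x1 + 0.5 * x2            # x1 matters far more than x2 by construction

high = y > np.median(y)          # split runs into high-output and low-output halves
for name, x in [("x1", x1), ("x2", x2)]:
    print(f"{name}: mean in high-output runs = {x[high].mean():.2f}, "
          f"mean in low-output runs = {x[~high].mean():.2f}")
# x1 shows a clear separation between the two groups; x2 shows almost none,
# identifying x1 as the factor responsible for high (or low) output values.
```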
1.6. RELATED CONCEPTS
Sensitivity analysis is closely related to uncertainty analysis. Uncertainty analysis quantifies the overall uncertainty in the conclusions of a study, while sensitivity analysis tries to identify which sources of uncertainty weigh most heavily on those conclusions.
The problem setting of sensitivity analysis also has much in common with the field of design of experiments. In a designed experiment, one studies the effect of some process or intervention (the treatment) on some objects (the experimental units). In sensitivity analysis, one studies the effect of changing the inputs of a mathematical model on the output of the model itself.
CHAPTER 2
CASE STUDY AND FORMULATION
2.1. THE GLOBAL OIL COMPANY
The Global Oil Company is an international producer, refiner, transporter and distributor of oil, gasoline and petrochemicals. Global Oil is a holding company with subsidiary operating companies that are wholly or partially owned. A major problem for Global Oil is to coordinate the actions of these various subsidiaries into an overall corporate plan, while at the same time maintaining a reasonable amount of operating autonomy for the subsidiary companies.
To deal with this dilemma, the logistics department at Global Oil headquarters develops an annual corporate-wide plan, which details the pattern of shipments among the various subsidiaries. The plan is not rigid but provides general guidelines, and it is revised periodically to reflect changing conditions. Within the framework of this plan, the operating companies can make their own decisions and plans. This corporate-wide plan is presently produced on a trial and error basis, which creates two problems. First, the management of the subsidiaries often complains that the plan does not properly reflect the operating conditions under which the subsidiary operates; the plan sometimes calls for operations or distribution plans that are impossible to accomplish. Second, corporate management is concerned that the plan does not optimize for the total company.
The technique of linear programming seems a promising approach to aid the annual planning process and to answer, at least in part, the two objections above. In addition, building such a model will make it possible to revise plans quickly when the need arises. Before embarking on the development of a worldwide model, Global Oil asks you to build a model of the Far Eastern operations for the coming year.
2.1.1. FAR EASTERN OPERATIONS
The details of the 1998 planning model for the Far Eastern Operations are described below.
There are two sources of crude oil, Saudi Arabia and Borneo. The Saudi crude is relatively heavier (24 API), and the Far Eastern sector could obtain as much as 60,000 barrels per day at a cost of $18.50 per barrel during 1998. A second source of crude is from the Brunei fields in Borneo. This is a light crude oil (36 API). Under the terms of an agreement with the Netherlands Petroleum Company in Borneo, a fixed quantity of 40,000 b/d of Brunei crude, at a cost of $19.90 per barrel is to be supplied during 1998.
There are two subsidiaries that have refining operations. The first is in Australia, operating a refinery in Sydney with a capacity of 50,000 b/d throughput. The company markets its products throughout Australia, as well as having a surplus of refined products available for shipment to other subsidiaries.
The second subsidiary is in Japan, which operates a 30,000 b/d capacity refinery. Marketing operations are conducted in Japan, and excess production is available for shipment to other Far Eastern subsidiaries.
In addition, there are two marketing subsidiaries without refining capacity of their own. One of these is in New Zealand and the other is in the Philippines. Their needs can be supplied by shipments from Australia, Japan, or the Global Oil subsidiary in the United States. The latter is not a regular part of the Far Eastern Operations, but may be used as a source of refined products.
Finally, the company has a fleet of tankers that move the crude oil and refined products among the subsidiaries.
2.1.2. REFINERY OPERATIONS
The operation of a refinery is a complex process. The characteristics of the crudes available, the desired output, the specific technology of the refinery, etc., make it difficult to use a simple model to describe the process. In fact, management at both Australia and Japan has complex linear programming models involving approximately 300 variables and 100 constraints for making detailed decisions on a daily or weekly basis.
For annual planning purposes the refinery model is greatly simplified. The two crudes (Saudi and Brunei) are input. Two general products are output – (a) gasoline products and (b) other products such as distillate, fuel oil, etc. In addition, although the refinery has processing flexibility that permits a wide range of yields, for planning purposes it was decided to include only the values at highest and lowest conversion rates (process intensity). Each refinery could use any combination of the two extreme intensities. These yields are shown in Table 6.1.
The incremental costs of operating the refinery depend somewhat upon the type of crude and process intensity. These costs are shown in Table 6.1. Also shown are the incremental transportation costs from either Borneo or Saudi Arabia.
2.1.3. MARKETING OPERATIONS
Marketing is conducted in two home areas (Australia and Japan) as well as in the Philippines and New Zealand. Demand for gasoline and distillate in all areas has been estimated for 1998.
2.1.4. TANKER OPERATIONS
Tankers are used to bring crude from Saudi Arabia and Borneo to Australia and Japan and to transport refined products from Australia and Japan to the Philippines and New Zealand. The variable costs of these operations are included above.
However, there is a limited capacity of tankers available. The fleet has a capacity of 6.5 equivalent (standard sized) tankers.
The amount of capacity needed to deliver one barrel from one destination to another depends upon the distance traveled, port time, and other factors. The table below lists the fraction of one standard sized tanker needed to deliver 1,000 b/d over the indicated routes.
It is also possible to charter independent tankers. The rate for this is $5,400 per day for a standard sized tanker.
2.1.5. UNITED STATES SUPPLY
United States operations on the West Coast expect a surplus of 12,000 b/d of distillate during 1998. The cost of distillate at the loading port of Los Angeles is $20.70 per barrel. There is no excess gasoline capacity. The estimated variable shipping costs and tanker requirements of distillate shipments from the United States are:
Destination     Variable cost of shipments ($/barrel)   Tanker requirement (per 1,000 b/d)
New Zealand     1.40                                     0.18
Philippines     1.10                                     0.15
2.2. SOLUTION AND ANALYSIS
2.2.1. FORMULATION
Decision variables:
The company procures crude from two sources: Saudi Arabia (S) and Brunei (B).
The two refineries, in Australia (A) and Japan (J), use a low-intensity process (L) or a high-intensity process (H) to produce gasoline (G) and distillate (D). These products are consumed within Australia and Japan and are also exported to the Philippines (P) and New Zealand (N). In addition to the refineries' output, the company can buy distillate from the United States (U).
The decision variables are defined as follows: crude-processing variables such as SLA denote thousand barrels per day of a crude (S or B) run at an intensity (L or H) at a refinery (A or J); shipment variables such as GAP or DUN denote thousand barrels per day of a product (G or D) shipped from an origin (A, J or U) to a destination (P or N); and CT denotes the number of equivalent standard-sized tankers chartered.
The objective is to minimize the cost of the Far East operations. Hence the objective function is as follows:
Min (20.04SLA + 20.34SHA + 20.66BLA + 20.98BHA + 20.08SLJ + 20.46SHJ + 20.7BLJ + 21.06BHJ + 0.3GAP + 0.2GAN + 0.4GJP + 0.25GJN + 0.3DAP + 0.2DAN + 0.4DJP + 0.25DJN + 21.80DUP + 22.10DUN + 5.4CT)
Constraints:
There are four types of constraints: Source, Capacity, Demand and Tanker Capacity
Source constraints:
SAUDI SUPPLY: SLA + SHA + SLJ + SHJ <= 60 thd b/d – 1
BRUNEI SUPPLY: BLA + BHA + BLJ + BHJ <= 40 thd b/d – 2
Capacity constraints:
AUSTRALIA: SLA + SHA + BLA + BHA <= 50 thd b/d – 3
JAPAN: SLJ + SHJ + BLJ + BHJ <= 30 thd b/d – 4
Demand constraints:
AUSTRALIAN GAS : 0.19SLA + 0.31SHA + 0.26BLA + 0.36BHA – GAP – GAN >= 9 – 5
AUSTRALIAN DISTILLATE: 0.73SLA + 0.61 SHA + 0.69BLA + 0.58BHA – DAP – DAN >= 21 – 6
JAPAN GAS: 0.18SLJ + 0.30SHJ + 0.25BLJ + 0.35BHJ – GJP – GJN >=3 – 7
JAPAN DISTILLATE: 0.74SLJ + 0.62SHJ + 0.70BLJ + 0.59BHJ – DJP – DJN >= 12 – 8
PHILIPPINES GAS: GAP + GJP >= 5 – 9
PHILIPPINES DISTILLATE : DAP + DJP + DUP >= 8 – 10
NZ GAS: GAN + GJN >= 5.4 – 11
NZ DISTILLATE: DAN + DJN + DUN >= 8.7 – 12
US DISTILLATE: DUP + DUN <= 12 – 13
Tanker capacity constraint:
0.12SLA + 0.12SHA + 0.05BLA + 0.05BHA + 0.11SLJ + 0.11SHJ + 0.05BLJ + 0.05BHJ + 0.02GAP + 0.01GAN + 0.01GJP + 0.06GJN + 0.02DAP + 0.01DAN + 0.01DJP + 0.06DJN + 0.15DUP + 0.18DUN – CT <= 6.5 – 14
Minimizing the objective function subject to these 14 constraints gives the optimal solution mentioned earlier. The optimal cost of the Far East operations is $1,594.64 (in thousands of dollars per day, since the volumes are in thousand b/d and the costs in $/barrel).
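As a cross-check on the formulation (the report's own solution was obtained with Solver), the model can also be assembled and solved programmatically. The sketch below assumes Python with numpy and scipy are available; it builds the 19-variable, 14-constraint model exactly as written above, so the optimal cost, the non-zero decision variables and the constraint duals can be compared with the Solver output.

```python
# The Far East planning model, built directly from the formulation above.
import numpy as np
from scipy.optimize import linprog

names = ["SLA", "SHA", "BLA", "BHA", "SLJ", "SHJ", "BLJ", "BHJ",
         "GAP", "GAN", "GJP", "GJN", "DAP", "DAN", "DJP", "DJN",
         "DUP", "DUN", "CT"]
idx = {n: i for i, n in enumerate(names)}

# Objective coefficients (cost per unit of each variable), as in the objective above.
c = [20.04, 20.34, 20.66, 20.98, 20.08, 20.46, 20.70, 21.06,
     0.30, 0.20, 0.40, 0.25, 0.30, 0.20, 0.40, 0.25,
     21.80, 22.10, 5.40]

A_rows, b_rhs = [], []

def row(coeffs):
    """Build a dense constraint row from a {variable: coefficient} dict."""
    r = np.zeros(len(names))
    for name, value in coeffs.items():
        r[idx[name]] = value
    return r

def le(coeffs, rhs):          # "<=" constraint
    A_rows.append(row(coeffs)); b_rhs.append(rhs)

def ge(coeffs, rhs):          # ">=" constraint, stored as a negated "<="
    A_rows.append(-row(coeffs)); b_rhs.append(-rhs)

le({"SLA": 1, "SHA": 1, "SLJ": 1, "SHJ": 1}, 60)                    # (1) Saudi supply
le({"BLA": 1, "BHA": 1, "BLJ": 1, "BHJ": 1}, 40)                    # (2) Brunei supply
le({"SLA": 1, "SHA": 1, "BLA": 1, "BHA": 1}, 50)                    # (3) Australia capacity
le({"SLJ": 1, "SHJ": 1, "BLJ": 1, "BHJ": 1}, 30)                    # (4) Japan capacity
ge({"SLA": 0.19, "SHA": 0.31, "BLA": 0.26, "BHA": 0.36,
    "GAP": -1, "GAN": -1}, 9)                                       # (5) Australian gasoline
ge({"SLA": 0.73, "SHA": 0.61, "BLA": 0.69, "BHA": 0.58,
    "DAP": -1, "DAN": -1}, 21)                                      # (6) Australian distillate
ge({"SLJ": 0.18, "SHJ": 0.30, "BLJ": 0.25, "BHJ": 0.35,
    "GJP": -1, "GJN": -1}, 3)                                       # (7) Japanese gasoline
ge({"SLJ": 0.74, "SHJ": 0.62, "BLJ": 0.70, "BHJ": 0.59,
    "DJP": -1, "DJN": -1}, 12)                                      # (8) Japanese distillate
ge({"GAP": 1, "GJP": 1}, 5)                                         # (9) Philippines gasoline
ge({"DAP": 1, "DJP": 1, "DUP": 1}, 8)                               # (10) Philippines distillate
ge({"GAN": 1, "GJN": 1}, 5.4)                                       # (11) NZ gasoline
ge({"DAN": 1, "DJN": 1, "DUN": 1}, 8.7)                             # (12) NZ distillate
le({"DUP": 1, "DUN": 1}, 12)                                        # (13) US distillate
le({"SLA": 0.12, "SHA": 0.12, "BLA": 0.05, "BHA": 0.05,
    "SLJ": 0.11, "SHJ": 0.11, "BLJ": 0.05, "BHJ": 0.05,
    "GAP": 0.02, "GAN": 0.01, "GJP": 0.01, "GJN": 0.06,
    "DAP": 0.02, "DAN": 0.01, "DJP": 0.01, "DJN": 0.06,
    "DUP": 0.15, "DUN": 0.18, "CT": -1}, 6.5)                       # (14) tanker capacity

res = linprog(c, A_ub=np.array(A_rows), b_ub=b_rhs,
              bounds=[(0, None)] * len(names), method="highs")

print("optimal cost:", round(res.fun, 2))
print("non-zero variables:",
      {n: round(v, 2) for n, v in zip(names, res.x) if v > 1e-6})
# Duals of the 14 constraints as reported by the solver (signs follow the <=-form used here).
print("constraint duals:", np.round(res.ineqlin.marginals, 3))
```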
CONCLUSION
This report has brought together and explained the concepts used in the project.
A major strength of sensitivity analysis is its simplicity and ease of use.
Sensitivity analysis using linear programming can hence handle relatively large numbers of variables, constraints and objectives.