Simplex Solutions of LPP

The Simplex Method or Simplex Algorithm is used for calculating the optimal solution to the linear programming problem. In other words, the simplex algorithm is an iterative procedure carried out systematically to determine the optimal solution from the set of feasible solutions.

Firstly, to apply the simplex method, appropriate variables are introduced into the linear programming problem, and the primary or decision variables are equated to zero. The iterative process begins by assigning values to these defined variables. The value of the decision variables is taken as zero since the evaluation, as in the graphical approach, begins at the origin. Therefore, x1 and x2 are equal to zero.

The decision maker will enter appropriate values of the variables in the problem, find the variable value that contributes the most to the objective function, and remove those values which give undesirable results. Thus, the value of the objective function gets improved through this method. This procedure of substituting variable values continues until no further improvement in the value of the objective function is possible.

The following two conditions need to be met before applying the simplex method:

  1. The right-hand side of each constraint inequality should be non-negative. In case any linear programming problem has a negative resource value, it should be converted into a positive value by multiplying both sides of the constraint inequality by “-1”.
  2. The decision variables in the linear programming problem should be non-negative.

Thus, the simplex algorithm is efficient since it considers few feasible solutions, provided by the corner points, to determine the optimal solution to the linear programming problem.

Any linear programming problem involving two variables can be easily solved with the help of the graphical method, as it is easier to deal with a two-dimensional graph. All the feasible solutions in the graphical method lie within the feasible area on the graph, and the corner points of the feasible area are tested for the optimal solution, i.e. one of the corner points of the feasible area is the optimal solution. Each corner point is tested by putting its values into the objective function.

But if the number of variables increases beyond two, it becomes very difficult to solve the problem by drawing its graph, as the problem becomes too complex. The simplex method was developed by G.B. Dantzig, an American mathematician.

This method is a mathematical treatment of the graphical approach. Here also, various corner points of the feasible area are tested for optimality. But it is not possible to test all the corner points, since the number of corner points increases manifold as the number of equations and variables increases. The maximum number of such points to be tested is

C(m + n, m) = (m + n)! / (m! n!)

where m is the number of constraints and n is the number of variables.
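As a quick illustration (not part of the original text), this count can be evaluated directly; the small sketch below assumes a hypothetical problem with m = 3 constraints and n = 3 variables.

```python
# Illustrative sketch: maximum number of corner points (basic solutions)
# to examine, C(m + n, m) = (m + n)! / (m! * n!), for a hypothetical
# problem with m = 3 constraints and n = 3 variables.
from math import comb

m, n = 3, 3
print(comb(m + n, m))  # 20 candidate basic solutions
```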

In the simplex method, therefore, the number of corner points to be tested is reduced considerably by using a very effective algorithm which leads to the optimal corner point in only a few iterations. Let us take one example and proceed step by step.

Objective function is to maximize Z= 12x1 + 15x2 + 14x3

Subject to conditions

[The constraints appear as an image in the source]

Step 1:

(a) The right-hand side of all the constraints must be either zero or +ve. If it is -ve, it must be made +ve by multiplying both sides by (-1), and the sign of the inequality is reversed. In this example, the R.H.S. is already +ve or zero.

(b) All the inequalities are converted into equalities by adding or subtracting slack or surplus variables. These slack or surplus variables are introduced because it is easier to deal with equalities than inequalities in mathematical treatment.

If the constraint is of the ≤ type, a slack variable is added; if the constraint is of the ≥ type, a surplus variable is subtracted. Here slack variables s1, s2 and s3 are added to the three inequalities (i), (ii) and (iii), and we get:

[The constraint equations in standard form appear as an image in the source]

And objective function becomes

Maximize Z = 12x1 + 15x2 + 14x3 + 0s1 + 0s2 + 0s3

Step 2:

Find initial Basic Feasible Solution:

We start with a feasible solution and then move towards the optimal solution in subsequent iterations. The initial feasible solution is preferably chosen to be the origin, i.e. where the regular variables, in this case x1, x2 and x3, assume zero values, i.e. x1 = x2 = x3 = 0

and we get s1 = 0, s2 = 0 and s3 = 100 from equations (i), (ii) and (iii).

Basic Variables are the variables which are presently in the solution, e.g. S1, S2 and S3 are the basic variables in the initial solution.

Non-basic Variables are the variables which are set equal to zero and are not in the current solution, e.g. x1, x2 and x3 are the non-basic variables in the initial solution.

The above information can be expressed in Table 1.

[Table 1 appears as an image in the source]

In the table, the first row represents the coefficients of the objective function, the second row represents the different variables (first the regular variables, then the slack/surplus variables), and the third, fourth and fifth rows represent the coefficients of the variables in the constraints.

The first column represents the coefficients of the basic variables (current solution variables) in the objective function (ei), the second column represents the basic variables (current solution variables), and the last column represents the right-hand side of the constraints in standard form, i.e. after converting all the inequalities into equalities. In any table, the current values of the current solution variables (basic variables) are given by the R.H.S. column.

In Table 1, current solution is:

S1 =0, S2 = 0, S3 = 100

Of course the non-basic variables X1, X2 and X3 will also be zero.

Degeneracy: whenever any basic variable assumes a zero value, the current solution is said to be degenerate. As in the present problem S1 = 0 and S2 = 0, the problem can be further solved by substituting S1 = t and S2 = t, where t is a very small +ve number.

Step 3:

Optimality Test.

An optimality test can be performed to find whether the current solution is optimal or not. For this, first write the last row in the form of Ej, where

Ej = Σ ei · aij

where aij represents the coefficient in the body of the table for the ith row and jth column, and ei represents the first column of the table. In the last row, write the value of (Cj – Ej), where Cj represents the values of the first row and Ej represents the values just computed. (Cj – Ej) represents the advantage of bringing any non-basic variable into the current solution, i.e. of making it basic.

In Table 2, the values of Cj – Ej are 12, 15 and 14 for X1, X2 and X3. If any of the values of Cj – Ej is +ve, the most positive value identifies the variable which, if brought into the current solution, would increase the objective function to the maximum extent. In the present case X2 is the potential variable to come into the solution in the next iteration. If all the values in the (Cj – Ej) row are negative, the optimal solution has been reached.

Step 4:

Iterate towards Optimal solution:

The maximum value of Cj – Ej gives the key column, as marked in the table.

[Table 2 appears as an image in the source]

Therefore X2 is the entering variable, i.e. it would become basic and would enter the solution. Out of the existing basic variables, one has to go out and become non-basic. To find which variable is to be driven out, we divide the coefficients in the R.H.S. column by the corresponding coefficients in the key column to get the Ratio column. Now look for the least positive value in the Ratio column; that gives the key row. In the present problem we have three values, t, µ and 100, and out of these t is the least +ve. Therefore the row corresponding to t in the Ratio column is the key row, and S1 is the leaving variable. The element at the intersection of the key row and the key column is the key element.

Now all the elements in the key column except the key element are to be made zero, and the key element is to be made unity. This is done with the help of row operations, as in matrices. Here the key element is already unity, and the other elements in the key column are made zero by adding -1 times the first row to the third row, giving the next table.

[Table 3 appears as an image in the source]

Therefore second feasible solution becomes

X1 = 0, X2 = t and X3 = 0, thereby Z = 15t

In the new table, S1 has been replaced by X2 in the basic variables, and the ei column has also been changed correspondingly.

Step 5:

On looking at the Cj – Ej values in Table 3, we find that x1 has the most +ve value of 27, thereby indicating that the solution can be further improved by bringing x1 into the solution, i.e. by making it basic. Therefore the x1 column is the key column; the key row is found as explained earlier and Table 5 is completed.

[Table 5 appears as an image in the source]

The key element in Table 5 comes out to be 2; it is made unity and all the other elements in the key column are made zero with the help of row operations, and finally we get Table 6. First the key element is made unity by dividing that row by 2. Then, by adding suitable multiples of that row to the other rows, we get Table 6.

[Table 6 appears as an image in the source]

It can be seen from Table 6 that the solution is still not optimal, as the Cj – Ej row still has a +ve value of 1/2. This gives the key column; the corresponding key row is found and the key element is identified as given in Table 7.

[Table 7 appears as an image in the source]

Now by suitable row operations we make the other elements in the key column zero, as shown in Table 8.

[Table 8 appears as an image in the source]

It can be seen that, since all the values in the Cj – Ej row are either -ve or zero, the optimal solution has been reached.

Final solution is    x1 = 40 tons

x2 = 40 tons

x3 = 20 tons

since t is very small, it is neglected.

Example 1:

Solve the following problem by simplex method

Maximize    Z = 5x1 + 4x2

Subject to    6x1 + 4x2 ≤ 24

x1 + 2x2 ≤ 6

-x1 + x2 ≤ 1

x2 ≤ 2

and x1, x2 ≥ 0

Solution:

Add slack variables S1, S2, S3, S4 in the four constraints to remove inequalities.

We get    6x1 + 4x2 + s1 =24

x1 + 2x2+ s2 =6

-x1 + x2 + s3 = 1

x2 + s4 =2

Subject to x1, x2, s1, s2, s3 and s4 ≥ 0

Objective function becomes

Maximize Z = 5x1 + 4x2 + 0s1 + 0s2 + 0s3 + 0s4

Table 1, which is formed, is given below. It can be seen that X1 is the entering variable and S1 is the leaving variable. The key element in Table 1 is made unity and all other elements in that column are made zero.

[Tables 1 and 2 appear as images in the source]

It can be seen from Table 2 that X2 is the entering variable and S2 is the leaving variable.

[The subsequent simplex tables appear as images in the source]

It can be seen from Table 5 that all the values of Cj-Ej row are either -ve or zero. Therefore optimal solution has been obtained.

Solution is x1 = 3, x2 = 3/2

Zmax = 5 × 3 + 4 × 3/2 = 15 + 6 = 21 Ans.
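As a cross-check (a sketch, not the tableau method described above), Example 1 can also be solved with SciPy's linprog routine, assuming SciPy is available; linprog minimizes, so the objective is negated.

```python
# Cross-checking Example 1 with scipy.optimize.linprog.
# Maximize Z = 5x1 + 4x2  ->  minimize -5x1 - 4x2.
from scipy.optimize import linprog

c = [-5, -4]                                   # negated objective
A_ub = [[6, 4], [1, 2], [-1, 1], [0, 1]]       # constraint coefficients
b_ub = [24, 6, 1, 2]                           # right-hand sides

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)     # approximately [3.0, 1.5]
print(-res.fun)  # approximately 21.0
```

The reported optimum agrees with the tableau solution x1 = 3, x2 = 3/2, Zmax = 21.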

Big M Method

Let us take the following problem to illustrate the Big M method.

Minimize Z = 2y1 + 3y2

subject to constraints y1 + y2 ≥ 5

y1 + 2y2 ≥ 6

y1, y2 ≥ 0

Converting to Standard form:

Maximize Z = -2y1 – 3y2 + 0s1 + 0s2

i.e. the minimization problem is converted to a maximization problem by multiplying the objective function by -1.

Constraints y1 + y2 – s1 = 5 …(i)

y1 + 2y2 – s2 = 6 …(ii)

y1, y2, s1, s2 ≥ 0

Here surplus variables s1 and s2 are subtracted from constraints (i) and (ii) respectively.

Now y1 and y2 can be taken as non-basic variables and put equal to zero to get s1 and s2 as basic variables, where s1 = -5, s2 = -6.

This is an infeasible solution, as the surplus variables s1 and s2 have taken -ve values. In order to overcome this problem, we add artificial variables A1 and A2 in equations (i) and (ii) respectively to get

y1 + y2 –s1 + A1 =5 …(iii)

y1 + 2y2– s2+ A2 =6 …(iv)

where y1, y2, s1, s2, A1, A2 ≥ 0

and objective function becomes

Maximize Z1 = -2y1 – 3y2 + 0s1 + 0s2 – MA1 – MA2

It can be observed that we have deliberately applied very heavy penalties to the artificial variables in the objective function, in the form of -MA1 and -MA2, where M is a very large +ve number. The purpose is to ensure that the artificial variables appear only in the starting basic solution.

Because the artificial variables decrease the objective function to a very large extent on account of the penalties, the simplex algorithm drives the artificial variables out of the solution in the initial iterations, and therefore the artificial variables which we introduced do not appear in the final solution. Artificial variables are only a computational device. They keep the starting equations in balance and provide a mathematical trick for getting a starting solution.

Initial Table becomes

[Table 1 appears as an image in the source]

Since Cj – Ej is +ve under some columns, the solution given by Table 1 is not optimal. It can be seen that out of -2 + 2M and -3 + 3M, -3 + 3M is the most +ve, as M is a very large +ve number. The key element is found as shown in Table 1; it is made unity and all other elements in its column are made zero. We get Table 2.

[Table 2 appears as an image in the source]

From Table 2, it can be seen that optimal solution is still not reached and better solution exists. Y1 is the incoming variable and A1 is the outgoing variable. We get Table 3.

[Table 3 appears as an image in the source]

It can be seen from Table 3 that the optimal solution has been reached and the solution is

Y1 = 4, Y2 = 1

Minimum value of Z = 2 × 4 + 3 × 1 = 11 units Ans.
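The same answer can be checked with a solver (a sketch assuming SciPy is available, not the Big M tableau itself); the ≥ constraints are rewritten as ≤ by multiplying both sides by -1.

```python
# Cross-checking the Big M example with scipy.optimize.linprog.
# Minimize Z = 2y1 + 3y2 subject to y1 + y2 >= 5 and y1 + 2y2 >= 6.
from scipy.optimize import linprog

c = [2, 3]
A_ub = [[-1, -1], [-1, -2]]   # -y1 -  y2 <= -5,  -y1 - 2y2 <= -6
b_ub = [-5, -6]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)    # approximately [4.0, 1.0]
print(res.fun)  # approximately 11.0
```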

Unbounded Solution:

A Linear Programming Problem is said to have an unbounded solution when, in the ratio column, all the entries are either -ve or zero and there is no +ve entry. This indicates that the value of the incoming variable selected from the key column can be made as large as we like without violating feasibility, and the problem is said to have an unbounded solution.

Infinite Number of Solutions:

A Linear Programming Problem is said to have an infinite number of solutions if, during any iteration, all the values in the Cj – Ej row are either zero or -ve, showing that the optimal value has been reached, but one of the non-basic variables has a zero value in the Cj – Ej row. In that case it can be concluded that there exists an alternative optimal solution.

Example of this type of table is given below.

[An example table appears as an image in the source]

It can be seen that the optimal solution has been reached, since all the values in the Cj – Ej row are zero or -ve. But X1 is a non-basic variable and it has a zero value in the Cj – Ej row; this indicates that X1 can be brought into the solution, but it will not increase the value of the objective function, so an alternative optimum exists.

Case of No Feasible Solution:

In some L.P.P. it can be seen that, while solving the problem with artificial variables, the Cj – Ej row shows that the optimal solution is reached whereas we still have an artificial variable in the current solution with some +ve value. In such situations, it can be concluded that the problem does not have any feasible solution at all.

Example 2:

[The problem statement appears as an image in the source]

Solution:

In order to solve the problem, artificial variables will have to be added on the L.H.S. so as to get an initial basic feasible solution. Introducing artificial variables A1, A2 and A3, the above constraints can be written as:

[The constraints with artificial variables appear as an image in the source]

Now if these artificial variables appear in the final solution with some +ve values, then the equality of equation (i), (ii) or (iii) gets disturbed. Therefore we do not want the artificial variables to appear in the final solution, and so we apply a large penalty to them in the objective function, which can be written as:

Maximize Z = Y1 + 2Y2 + 3Y3 – Y4 – MA1 – MA2 – MA3

Now if we take Y1, Y2, Y3 and Y4 as non-basic variables and put Y1 = Y2 = Y3 = Y4 = 0, then we get the initial solution as A1 = 15, A2 = 20 and A3 = 10, and A1, A2 and A3 are the basic variables (variables in the current solution) to start with. The above information can be put in Table 1.

[Table 1 appears as an image in the source]

As Cj – Ej is positive under some columns, the current solution is not optimal and hence a better solution exists.

Iterate towards an optimal solution

Performing iterations to get an optimal solution as shown in Table below

[The iteration tables appear as images in the source]

Since Cj-Ej is either zero or negative under all columns, the optimal basic feasible solution has been obtained. Optimal values are

Y1 = 5/2, Y2 = 5/2, Y3 = 5/2, Y4 = 0

Also A1 = A2 = 0 and Zmax = 15 Ans.

Quantitative Techniques introduction

Decision making is one of the most fundamental functions of management professionals. Every manager has to take decisions pertaining to his field of work. Hence, it is an all-pervasive function of basic management. The process of decision making contains various methods. Quantitative techniques of decision making help make these methods simpler and more efficient.

The following are six such important quantitative techniques of decision making:

  1. Linear programming

This technique basically helps in maximizing an objective under limited resources. The objective can be either optimization of a utility or minimization of a disutility. In other words, it helps in utilizing a resource or constraint to its maximum potential.

Managers generally use this technique only under conditions involving certainty. Hence, it might not be very useful when circumstances are uncertain or unpredictable.

  2. Probability decision theory

This technique lies in the premise that we can only predict the probability of an outcome. In other words, we cannot always accurately predict the exact outcome of any course of action.

Managers use this approach to first determine the probabilities of an outcome using available information. They can even rely on their subjective judgment for this purpose. Next, they use this data of probabilities to make their decisions. They often use ‘decision trees’ or pay-off matrices for this purpose.

  3. Game theory

Sometimes, managers use certain quantitative techniques only while taking decisions pertaining to their business rivals. The game theory approach is one such technique.

This technique basically simulates rivalries or conflicts between businesses as a game. The aim of managers under this technique is to find ways of gaining at the expense of their rivals. In order to do this, they can use 2-person, 3-person or n-person games.

  4. Queuing theory

Every business often suffers waiting periods or queues pertaining to personnel, equipment, resources or services.

For example, sometimes a manufacturing company might gather a stock of unsold goods due to irregular demands. This theory basically aims to solve such problems.

The aim of this theory is to minimize such waiting periods and also to reduce the costs associated with them.

For example, departmental stores often have to find a balance between unsold stock and purchasing fresh goods. Managers in such examples can employ the queuing theory to minimize their expenses.

  5. Simulation

As the name suggests, the simulation technique observes various outcomes under hypothetical or artificial settings. Managers try to understand how their decisions will work out under diverse circumstances.

Accordingly, they finalize on the decision that is likely to be the most beneficial to them. Understanding outcomes under such simulated environments instead of natural settings reduces risks drastically.

  6. Network techniques

Complex activities often require concentrated efforts by personnel in order to avoid wastage of time, energy and money. This technique aims to solve this by creating strong network structures for work.

There are two very important quantitative techniques under this approach. These include the Critical Path Method and the Programme Evaluation & Review Technique. These techniques are effective because they segregate work efficiently under networks. They even drastically reduce time and money.

Scope

The scope of statistics was primarily limited in the sense that the ruling kings used to collect data so as to frame suitable military and fiscal policies only. Hence they heavily depended upon statistics. As time went on, statistics came to be regarded as a method of handling and analyzing the numerical facts and figures.

In recent years, the activities of the state have increased tremendously. Statistical facts and figures are of immense help in promoting human welfare. Today, the scope of statistics is so vast and ever expanding. It influences everybody’s life. Even an entry into the world and exit are systematically recorded.

There is no branch of human activity that can escape the attention of statistics. It is a tool of all sciences. It is indispensable for research and intelligent judgment. It has become a recognized discipline in its own right. A few specific areas of application are mentioned below.

  • Finance and Accounting: Cash flow analysis, Capital budgeting, Dividend and Portfolio management, Financial planning.
  • Marketing Management: Selection of product mix, Sales resources allocation and Assignments.
  • Production Management: Facilities planning, Manufacturing, Aggregate planning, Inventory control, Quality control, Work scheduling, Job sequencing, Maintenance and Project planning and scheduling.
  • Personnel Management: Manpower planning, Resource allocation, Staffing, Scheduling of training programs.
  • General Management: Decision Support System and Management of Information Systems, MIS, Organizational design and control, Software Process Management and Knowledge Management.

Operations Research Techniques

(i) Inventory Control Models:

Operation Research study involves balancing inventory costs against one or more of the following costs:

  1. Shortage costs.
  2. Ordering costs.
  3. Storage costs.
  4. Interest costs.

This study helps in taking decisions about:

  1. How much to purchase.
  2. When to order.
  3. Whether to manufacture or to purchase i.e., make and buy decisions.

The most well-known use is in the form of Economic Order Quantity equation for finding economic lot size.
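The EOQ formula referred to here is commonly written as EOQ = √(2DS / H), where D is the annual demand, S the ordering cost per order and H the holding cost per unit per year; the figures in the sketch below are hypothetical.

```python
# Sketch of the Economic Order Quantity calculation with assumed figures.
from math import sqrt

D = 12000   # annual demand in units (assumed)
S = 50.0    # ordering cost per order (assumed)
H = 2.4     # holding cost per unit per year (assumed)

eoq = sqrt(2 * D * S / H)
print(round(eoq))  # about 707 units per order
```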

(ii) Waiting Line Models:

These models are used for minimising the waiting time and idle time together with the costs associated therewith.

Waiting line models are of two types:

(a) Queuing theory, which is applicable for determining the number of service facilities and/or the timing of arrivals for servicing.

(b) Sequencing theory which is applicable for determining the sequence of the servicing.

(iii) Replacement Models:

These models are used for determining the time of replacement or maintenance of an item, which may:

(i) Become obsolete,

(ii) Become inefficient for use, or

(iii) Become uneconomical to repair or maintain.

(iv) Allocation Models:

(a) There are a number of activities which are to be performed and a number of alternative ways of doing them.

(b) The resources or facilities are limited and do not allow each activity to be performed in the best possible way. These models help to combine activities and available resources so as to optimise overall effectiveness.

(v) Competitive Strategies:

Such strategies are adopted where the efficiency of the decision of one agency depends on the decision of another agency. Examples of such strategies are games of cards or chess, or the fixing of prices in a competitive market; these strategies are studied under the theory of games.

(vi) Linear Programming Technique:

These techniques are used for solving operation problems having many variables subject to certain restrictions. In such problems, objectives are profit, costs, quantities manufactured etc. whereas restrictions may be e.g. policies of government, capacity of the plant, demand of the product, availability of raw materials, water or power and storage capacity etc.

(vii) Sequencing Models:

These are concerned with the selection of an appropriate sequence of performing a series of jobs to be done on a service facility or machine so as to optimise some efficiency measure of performance of the system.

(viii) Simulation Models:

Simulation is an experimental method used to study behaviour over time.

(ix) Network Models:

This is an approach to planning, scheduling and controlling complex projects.

Applications of Operation Research:

These techniques are applied to a very wide range of problems.

(i) Distribution or Transportation Problems:

In such problems, various centres with their demands are given and various warehouses with their stock positions are also known, then by using linear programming technique, we can find out most economical distribution of the products to various centres from various warehouses.

(ii) Product Mix:

These techniques can be applied to determine the best mix of products for a plant with available resources, so as to get maximum profit or minimum cost of production.

(iii) Production Planning:

These techniques can also be applied to allocate various jobs to different machines so as to get maximum profit or to maximise production or to minimise total production time.

(iv) Assignment of Personnel:

Similarly, this technique can be applied for assignment of different personnel with different aptitude to different jobs so as to complete the task within a minimum time.

(v) Agricultural Production:

We can also apply this technique to maximise cultivator’s profit, involving cultivation of number of items with different returns and cropping time in different type of lands having variable fertility.

(vi) Financial Applications:

Many financial decision making problems can be solved by using linear programming technique.

Some of them are:

(i) To select best portfolio in order to maximise return on investment out of alternative investment opportunities like bonds, stocks etc. Such problems are generally faced by the managers of mutual funds, banks and insurance companies.

(ii) In deciding financial mix strategies, involving the selection of means for financing firm, projects, inventories etc.

Limitations of Operations Research:

  1. These do not take into account qualitative and emotional factors.
  2. These are applicable to only specific categories of decision-making problems.
  3. These are required to be interpreted correctly.
  4. Due to conventional thinking, changes face a lot of resistance from workers and sometimes even from the employer.
  5. Models are only idealised representations of reality and should not be regarded as absolute.

Scientific approach in Decision Making and their Limitations

In order to evaluate the alternatives, certain quantitative techniques have been developed which facilitate in making objective decisions.

Four important decision-making techniques are discussed below:

(1) Marginal Analysis:

This technique is also known as ‘marginal costing’. In this technique the additional revenues and the additional costs are compared. Profits are considered maximum at the point where marginal revenue and marginal cost are equal. This technique can also be used to compare factors other than costs and revenues.

For example, if we try to find out the optimum output of a machine, we have to vary inputs against output until the additional input equals the additional output. This will be the point of maximum efficiency of the machine. A related analysis is the ‘Break-Even Point’ (BEP), which tells the management the level of production at which there is no profit and no loss.

(2) Cost-Effectiveness Analysis:

This analysis may be used for choosing among alternatives to identify a preferred choice when objectives are far less specific than those expressed by clear quantities such as sales, costs or profits. Koontz, O’Donnell and Weihrich have written that cost models may be developed to show cost estimates for each alternative and its effectiveness; a social objective, such as reducing pollution of air and water, may lack precision. They further emphasise that a synthesizing model, combining these results, may be made to show the relationships of costs and effectiveness for each alternative.

(3) Operations Research:

This is a scientific method of analysing decision problems that provides the executive with the quantitative information needed for making decisions. Its important purpose is to provide managers with a scientific basis for solving organisational problems involving the interaction of the components of the organisation. It seeks to replace the intuitive decision process with an analytic, objective and quantitative basis built on information supplied by the system in operation, possibly without disturbing that operation.

This is widely used in modern business organisations. For Example – (a) Inventory models are used to control the level of inventory, (b) Linear Programming for allocation of work among individuals in the organisation.

Further, some theories have also been propounded by eminent writers of management to analyse the problems and to take decisions. Sequencing theory helps the management to determine the sequence of particular operations. Queuing theory, Games theory, Reliability theory and Marketing theory are also important tools of operations research which can be used by the management to analyse the problems and take decisions.

(4) Linear Programming:

It is a technique applicable in areas like production planning, transportation, warehouse location and utilisation of production and warehousing facilities at an overall minimum cost. It is based on the assumption that there exists a linear relationship between variables and that the limits of variations can be ascertained.

It is a method used for determining the optimum combination of limited resources to achieve a given objective. It involves maximisation or minimisation of a linear function of various primary variables, known as the objective function, subject to a set of real or assumed restrictions known as constraints.

Models represent the behaviour and perception of decision-makers in the decision-making environment. There are two models that guide the decision-making behaviour of managers.

These are:

  1. Rational/Normative Model (Economic Man)
  2. Non-Rational/Administrative Model

Rational/Normative Model:

This model assumes that the decision-maker is an economic man as defined in the classical theory of management. He is guided by economic motives and self-interest. He aims to maximise organisational profits. Behavioural or social aspects are ignored in making business decisions.

These models presume that decision-makers are perfect information assimilators and handlers. They can collect complete and reliable information about the problem area, generate all possible alternatives, know the outcome of each alternative, rank them in the best order of priority and choose the best solution. They follow a rational decision-­making process and, therefore, make optimum decisions.

This model is based on the following assumptions:

  1. Managers have clearly defined goals. They know what they want to achieve.
  2. They can collect complete and reliable information from the environment to achieve the objectives.
  3. They are creative, systematic and reasoned in their thinking. They can identify all alternatives and outcome of each alternative related to the problem area.
  4. They can analyse all the alternatives and rank them in the order of priority.
  5. They are not constrained by time, cost and information in making decisions.
  6. They can choose the best alternative to make maximum returns at minimum cost.

Group decision-making suffers from the following limitations:

(a) It is costly and more time consuming than individual decision-making.

(b) Some members accept group decisions even when they do not agree with them to avoid conflicts.

(c) Sometimes, groups do not arrive at any decision. Disagreement and disharmony amongst group members leads to interpersonal conflicts.

(d) Some group members dominate others to agree to their viewpoint. Social pressures lead to acceptance of alternatives which all group members do not unanimously agree to.

(e) If there is conflict between group goals and organisational goals, group decisions generally promote group goals even if they are against the interest of the organisation.

Though cost of group decision-making is more than individual decision-making, its benefits far outweigh the costs and enable the managers to make better decisions.

Mathematical expectations

Mathematical expectation, also known as the expected value, is the probability-weighted sum of all possible values of a random variable.

For a single outcome, the corresponding term is the product of the probability of the event occurring, denoted by P(x), and the value observed when the event occurs.

The expected value is a useful property of a random variable. E(X) denotes the expected value and is computed by summing over all the distinct values that the random variable can take, each weighted by its probability. The mathematical expectation is given by the formula:

E(X) = x1p1 + x2p2 + … + xnpn = Σ xipi,

where, x is a random variable with the probability function, f(x),

p is the probability of the occurrence,

and n is the number of all possible values.

The mathematical expectation of an indicator variable can be 0 if there is no occurrence of an event A, and the mathematical expectation of an indicator variable can be 1 if there is an occurrence of an event A.

For example, when a die is thrown, the set of possible outcomes is {1, 2, 3, 4, 5, 6} and each of these outcomes has the same probability 1/6. Thus, the expected value of the experiment is 1/6 × (1 + 2 + 3 + 4 + 5 + 6) = 21/6 = 3.5. It is important to note that the “expected value” is not the same as the “most probable value”, and it need not be one of the possible values.
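The same calculation can be written out as a short sketch; the fair-die probabilities are taken from the example above.

```python
# Expected value of a fair die: E(X) = sum of x * p(x) over all outcomes.
outcomes = [1, 2, 3, 4, 5, 6]
p = 1 / 6  # each outcome is equally likely

expected_value = sum(x * p for x in outcomes)
print(expected_value)  # 3.5
```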

Properties of Expectation

  1. If X and Y are the two variables, then the mathematical expectation of the sum of the two variables is equal to the sum of the mathematical expectation of X and the mathematical expectation of Y.

Or

E(X+Y)=E(X)+E(Y)

  2. The mathematical expectation of the product of two random variables is the product of the mathematical expectations of those two variables, provided the two variables are independent. In other words, the mathematical expectation of the product of n independent random variables is equal to the product of the mathematical expectations of the individual random variables.

Or

E(XY)=E(X)E(Y)

  3. The mathematical expectation of the sum of a constant and a function of a random variable is equal to the sum of the constant and the mathematical expectation of the function of that random variable.

Or,

E(a+f(X))=a+E(f(X)),

where, a is a constant and f(X) is the function.

  4. The mathematical expectation of a linear function of a random variable, aX + b, is equal to a times the mathematical expectation of X plus the constant b (see the simulation sketch after this list).

Or,

E(aX+b)=aE(X)+b,

where, a and b are constants.

  5. The mathematical expectation of a linear combination of random variables and constants is equal to the sum of the products of the constants and the mathematical expectations of the corresponding random variables.

Or

E(Σ aiXi) = Σ ai E(Xi),

where ai (i = 1, …, n) are constants.
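A small simulation can illustrate property 4, E(aX + b) = aE(X) + b; the die-valued X and the constants a and b below are assumed purely for the sketch.

```python
# Simulation sketch of E(aX + b) = a*E(X) + b for X a fair die, a = 2, b = 5.
import random

random.seed(1)
samples = [random.randint(1, 6) for _ in range(100_000)]
a, b = 2, 5

lhs = sum(a * x + b for x in samples) / len(samples)  # estimate of E(aX + b)
rhs = a * (sum(samples) / len(samples)) + b           # a * E(X) + b (estimated)
print(round(lhs, 3), round(rhs, 3))                   # both close to 12.0
```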

Scope and Applications of Quantitative Techniques


Applications

Applications of Quantitative Analysis in the Business Sector

Business owners are often forced to make decisions under conditions of uncertainty. Luckily, quantitative techniques enable them to make the best estimates and thus minimize the risks associated with a particular decision. Ideally, quantitative models provide company owners with a better understanding of information, to enable them to make the best possible decisions.

Project Management

One area where quantitative analysis is considered an indispensable tool is in project management. As mentioned earlier, quantitative methods are used to find the best ways of allocating resources, especially if these resources are scarce. Projects are then scheduled based on the availability of certain resources.

Production Planning

Quantitative analysis also helps individuals to make informed product-planning decisions. Let’s say a company is finding it challenging to estimate the size and location of a new production facility. Quantitative analysis can be employed to assess different proposals for costs, timing, and location. With effective product planning and scheduling, companies will be more able to meet their customers’ needs while also maximizing their profits.

Marketing

Every business needs a proper marketing strategy. However, setting a budget for the marketing department can be tricky, especially if its objectives are not set. With the right quantitative method, marketers can find an easy way of setting the required budget and allocating media purchases. The decisions can be based on data obtained from marketing campaigns.

Finance

The accounting department of a business also relies heavily on quantitative analysis. Accounting personnel use different quantitative data and methods such as the discounted cash flow model to estimate the value of an investment. Products can also be evaluated, based on the costs of producing them and the profits they generate.

Purchase and Inventory

One of the greatest challenges that businesses face is being able to predict the demand for a product or service. However, with quantitative techniques, companies can be guided on just how many materials they need to purchase, the level of inventory to maintain, and the costs they’re likely to incur when shipping and storing finished goods.

The Bottom Line

Quantitative analysis is the use of mathematical and statistical techniques to assess the performance of a business. Before the advent of quantitative analysis, many company directors based their decisions on experience and gut. Business owners can now use quantitative methods to predict trends, determine the allocation of resources, and manage projects.

Baye’s Theorem

Bayes’ Theorem is a way to figure out conditional probability. Conditional probability is the probability of an event happening, given that it has some relationship to one or more other events. For example, your probability of getting a parking space is connected to the time of day you park, where you park, and what conventions are going on at any time. Bayes’ theorem is slightly more nuanced. In a nutshell, it gives you the actual probability of an event given information about tests.

“Events” Are different from “tests.” For example, there is a test for liver disease, but that’s separate from the event of actually having liver disease.

Tests are flawed:

Just because you have a positive test does not mean you actually have the disease. Many tests have a high false positive rate. Rare events tend to have higher false positive rates than more common events. We’re not just talking about medical tests here. For example, spam filtering can have high false positive rates. Bayes’ theorem takes the test results and calculates your real probability that the test has identified the event.

Bayes’ Theorem (also known as Bayes’ rule) is a deceptively simple formula used to calculate conditional probability. The Theorem was named after the English mathematician Thomas Bayes (1701-1761). The formal definition of the rule is: P(A|B) = P(B|A) × P(A) / P(B).

In most cases, you can’t just plug numbers into an equation; you have to figure out what your “tests” and “events” are first. For two events, A and B, Bayes’ theorem allows you to figure out P(A|B) (the probability that event A happened, given that test B was positive) from P(B|A) (the probability that test B was positive, given that event A happened). It can be a little tricky to wrap your head around as technically you’re working backwards; you may have to switch your tests and events around, which can get confusing. An example should clarify what I mean by “switch the tests and events around.”

Bayes’ Theorem Example

You might be interested in finding out a patient’s probability of having liver disease if they are an alcoholic. “Being an alcoholic” is the test (kind of like a litmus test) for liver disease.

A could mean the event “Patient has liver disease.” Past data tells you that 10% of patients entering your clinic have liver disease. P(A) = 0.10.

B could mean the litmus test that “Patient is an alcoholic.” Five percent of the clinic’s patients are alcoholics. P(B) = 0.05.

You might also know that among those patients diagnosed with liver disease, 7% are alcoholics. This is your B|A: the probability that a patient is alcoholic, given that they have liver disease, is 7%.

Bayes’ theorem tells you:

P(A|B) = (0.07 * 0.1)/0.05 = 0.14

In other words, if the patient is an alcoholic, their chances of having liver disease are 0.14 (14%). This is a large increase from the 10% suggested by past data. But it’s still unlikely that any particular patient has liver disease.
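The arithmetic of the example can be wrapped in a small helper (a sketch; the function name is illustrative, not part of any library).

```python
# Bayes' rule for the liver-disease example: P(A|B) = P(B|A) * P(A) / P(B).
def bayes(p_b_given_a: float, p_a: float, p_b: float) -> float:
    """Return P(A|B) from P(B|A), P(A) and P(B)."""
    return p_b_given_a * p_a / p_b

p_a = 0.10          # P(liver disease)
p_b = 0.05          # P(alcoholic)
p_b_given_a = 0.07  # P(alcoholic | liver disease)

print(bayes(p_b_given_a, p_a, p_b))  # 0.14
```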

Statistical Research Techniques

Data analysis can be defined as the process of reviewing and evaluating the data that is gathered from different sources. Data cleaning is very important, as it helps in eliminating redundant information and reaching accurate conclusions. Data analysis is the systematic process of cleaning, inspecting and transforming data with the help of various tools and techniques. The objective of data analysis is to identify useful information which will support the decision-making process. There are various methods for data analysis, which include data mining, data visualization and business intelligence. Analysis of data helps in summarizing the results through examination and interpretation of the useful information. Data analysis helps in determining the quality of data and developing answers to the questions which are of use to the researcher.

In order to discover the solution of the problem and to reach specific and quality results, various statistical techniques can be applied. These techniques help the researcher to get accurate results by drawing relationships between different variables. The statistical techniques can mainly be divided into two categories:

A) Parametric Test

B) Non-Parametric Test

Parametric Test

Parametric statistics assumes that the sample data depends on certain fixed parameters. It takes into consideration the properties of the population. It assumes that the sample data is collected from the population and that the population is normally distributed. There are equal chances of occurrence of all the data present in the population. A parametric test is based on various assumptions which need to hold good. Various parametric tests are Analysis of Variance (ANOVA), the Z test, the t-test, the Chi-square test, Pearson’s coefficient of correlation and regression analysis.

T- Test

The t-test can be defined as the test which helps in identifying the significance of the difference in a sample mean or between the means of two samples. It is based on the t-distribution. The t-test is conducted when the sample size is small and the variance of the population is not known. The t-test is used when the sample size (n) is not larger than 30. There are two types of t-test:

  1. Dependent mean t-test: used when the same variables or groups are tested, for example the same group measured under two conditions.
  2. Independent mean t-test: used when two different groups, which have faced different conditions, are compared.

The formula for the one-sample t-test is t = (X̄ – µ) / (s / √n), where X̄ is the sample mean, µ the hypothesized population mean, s the sample standard deviation and n the sample size.
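Both variants of the t-test can be run with scipy.stats, assuming SciPy is available; the sample data below is made up for illustration.

```python
# Sketch of the two t-test variants using scipy.stats with made-up data.
from scipy import stats

sample = [12.1, 11.8, 12.5, 12.0, 11.6, 12.3]

# One-sample t-test: does the sample mean differ from a hypothesized mean of 12?
t_stat, p_value = stats.ttest_1samp(sample, popmean=12.0)
print(t_stat, p_value)

# Independent (two-sample) t-test: do two different groups differ?
group_a = [12.1, 11.8, 12.5, 12.0]
group_b = [11.2, 11.5, 11.0, 11.4]
t_stat2, p_value2 = stats.ttest_ind(group_a, group_b)
print(t_stat2, p_value2)
```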

Z Test

This test is used when the population is normally distributed. The sample size of the population is large or small, but the variance of the population is known. It is used for comparing the means of the population or for identifying the significance level of difference between the means of two independent samples. Z test is based on the single critical value which makes the test more convenient.

The formula for the z test is z = (X – µ) / σ, where

X – observed value

µ – population mean

σ – standard deviation

Analysis of Variance (ANOVA)

When there are two or more categorical groups, Analysis of Variance is used. Analysis of variance can mainly be of two types: (a) one-way ANOVA and (b) two-way ANOVA. One-way ANOVA is used when the means of three or more groups are compared; the same variable is measured in each group. Two-way ANOVA is used to discover whether there is any relationship between two independent variables and a dependent variable. Analysis of Variance is based on many assumptions. ANOVA assumes that there is a dependent variable which can be measured at a continuous level. There are independent variables which are categorical, and there should be at least two categories. It also assumes that the population is normally distributed and that no unusual element (outlier) is present.

Chi Square Test

This test is also known as Pearson’s chi-square test. This test is used to find a relationship between two or more independent categorical variables. The two variables should be measured at the categorical level and should consist of two or more independent groups.

Coefficient of Correlation

Pearson’s coefficient of correlation is used to draw an association between two variables. It is denoted by ‘r’. The value of r ranges between -1 and +1. The coefficient of correlation is used to identify whether there is a positive association, a negative association or no association between two variables. When the value is 0, there is no association between the two variables. When it is less than 0, it indicates a negative association, and when the value is more than 0, it indicates a positive association.
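For reference, r can be computed with scipy.stats.pearsonr, assuming SciPy is available; the paired observations below are hypothetical.

```python
# Sketch of Pearson's coefficient of correlation on hypothetical data.
from scipy import stats

x = [1, 2, 3, 4, 5, 6]
y = [2, 4, 5, 4, 6, 7]

r, p_value = stats.pearsonr(x, y)
print(r)  # a value close to +1 indicates a strong positive association
```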

Regression Analysis

This is used to measure the value of one variable which is based on the value of another variable. The variable whose value is predicted is the dependent variable, and the variable which is used to predict the value of another variable is called independent variable. The assumptions of regression analysis are that the variables should be measured at the continuous level and there should be a linear relationship between two variables.

Non-Parametric Tests

Non-parametric statistics does not make any assumption relating to the parameters of the population. It treats data as ordinal and does not require it to be normally distributed. The non-parametric test is also known as a distribution-free test. These tests are comparatively simpler than parametric tests. Various non-parametric tests include the Fisher-Irwin test, the Wilcoxon matched-pairs (signed-rank) test, the Wilcoxon rank-sum test, the Kruskal-Wallis test and Spearman’s rank correlation test.

Probability Distribution

Probability theory is the foundation for statistical inference.  A probability distribution is a device for indicating the values that a random variable may have.  There are two categories of random variables.  These are discrete random variables and continuous random variables.

Discrete random variable

The probability distribution of a discrete random variable specifies all possible values of a discrete random variable along with their respective probabilities.

Examples can be

  • Frequency distribution
  • Probability distribution (relative frequency distribution)
  • Cumulative frequency

Examples of discrete probability distributions are the binomial distribution and the Poisson distribution.

Binomial Distribution

A binomial experiment is a probability experiment with the following properties.

  1. Each trial can have only two outcomes which can be considered success or failure.
  2. There must be a fixed number of trials.
  3. The outcomes of each trial must be independent of each other.
  4. The probability of success must remain the same in each trial.

The outcomes of a binomial experiment are called a binomial distribution.
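As a brief illustration (the numbers are assumed, not from the source), binomial probabilities can be computed with scipy.stats.

```python
# Sketch: probability of k successes in n = 10 independent trials with
# success probability p = 0.5, using the binomial distribution.
from scipy.stats import binom

n, p = 10, 0.5
print(binom.pmf(3, n, p))  # P(X = 3)  ~ 0.117
print(binom.cdf(3, n, p))  # P(X <= 3) ~ 0.172
```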

Poisson Distribution

The Poisson distribution is based on the Poisson process.  

  1. The occurrences of the events are independent in an interval.
  2. An infinite number of occurrences of the event are possible in the interval.
  3. The probability of a single event in the interval is proportional to the length of the interval.
  4. In an infinitely small portion of the interval, the probability of more than one occurrence of the event is negligible.

Continuous probability distributions

A continuous random variable can assume any value within a specified interval. In the general case, with a large number of class intervals, the frequency polygon begins to resemble a smooth curve.

A continuous probability distribution is described by a probability density function. The area under the smooth curve is equal to 1, and the probability of occurrence of values between any two points equals the area under the curve between those two points and above the x-axis.

The Normal Distribution

The normal distribution is the most important distribution in biostatistics. It is frequently called the Gaussian distribution. The two parameters of the normal distribution are the mean (µ) and the standard deviation (σ). The graph has the familiar bell-shaped curve.

Graph of a Normal Distribution

Characteristics of the normal distribution

  1. It is symmetrical about µ.
  2. The mean, median and mode are all equal.
  3. The total area under the curve above the x-axis is 1 square unit. Therefore 50% is to the right of µ and 50% is to the left of µ.
  4. Perpendiculars at:
     ±1σ contain about 68%;
     ±2σ contain about 95%;
     ±3σ contain about 99.7%
     of the area under the curve.

The standard normal distribution

A normal distribution is determined by µ and σ. This creates a family of distributions depending on the values of µ and σ. The standard normal distribution has µ = 0 and σ = 1.

 Finding normal curve areas

  1. The table gives the area between –∞ and the given value of z.
  2. Find the z value in tenths in the column at left margin and locate its row.  Find the hundredths place in the appropriate column.
  3. Read the value of the area (P) from the body of the table where the row and column intersect. P is the probability that the standard normal variable takes a value less than or equal to the given z. Values of P are given as decimals to four places.

Finding probabilities

We find probabilities using the table and a four-step procedure as illustrated below.

a) What is the probability that z < -1.96?

    (1) Sketch a normal curve
    (2) Draw a line for z = -1.96
    (3) Find the area in the table
    (4) The answer is the area to the left of the line P(z < -1.96) = .0250

b)  What is the probability that -1.96 < z < 1.96?

    (1) Sketch a normal curve
    (2) Draw lines for lower z = -1.96, and upper z = 1.96
    (3) Find the area in the table corresponding to each value
    (4) The answer is the area between the values–subtract lower from upper P(-1.96 < z < 1.96) = .9750 – .0250 = .9500

c)  What is the probability that z > 1.96?

    (1) Sketch a normal curve
    (2) Draw a line for z = 1.96
    (3) Find the area in the table
    (4) The answer is the area to the right of the line; found by subtracting table value from 1.0000; P(z > 1.96) =1.0000 – .9750 = .0250
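The same three probabilities can be reproduced with scipy.stats.norm instead of the printed z table (a sketch assuming SciPy is available).

```python
# Standard normal probabilities for the three cases above.
from scipy.stats import norm

print(norm.cdf(-1.96))                   # P(z < -1.96)        ~ 0.0250
print(norm.cdf(1.96) - norm.cdf(-1.96))  # P(-1.96 < z < 1.96) ~ 0.9500
print(1 - norm.cdf(1.96))                # P(z > 1.96)         ~ 0.0250
```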

Applications of the Normal distribution

The normal distribution is used as a model to study many different variables.  We can use the normal distribution to answer probability questions about random variables.  Some examples of variables that are normally distributed are human height and intelligence.

Solving normal distribution application problems

In this explanation we add an additional step.  Following the model of the normal distribution, a given value of x must be converted to a z score before it can be looked up in the z table.

(1) Write the given information
(2) Sketch a normal curve
(3) Convert x to a z score
(4) Find the appropriate value(s) in the table
(5) Complete the answer

Illustrative Example:  Total fingerprint ridge count in humans is approximately normally distributed with mean of 140 and standard deviation of 50.  Find the probability that an individual picked at random will have a ridge count less than 100.  We follow the steps to find the solution.

(1) Write the given information

µ = 140
σ = 50
x = 100

(3) Convert x to a z score

    z = (x – µ)/σ = (100 – 140)/50 = -0.8

(4) Find the appropriate value(s) in the table

    A value of z = -0.8 gives an area of .2119 which corresponds to the probability P (z < -0.8)

(5) Complete the answer

    The probability that x is less than 100 is .2119.
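The ridge-count example can be verified in the same way (a sketch assuming SciPy is available): convert x to a z score and take the cumulative probability.

```python
# Ridge-count example: P(X < 100) for X ~ Normal(mean=140, sd=50).
from scipy.stats import norm

mu, sigma, x = 140, 50, 100
z = (x - mu) / sigma
print(z)            # -0.8
print(norm.cdf(z))  # ~ 0.2119
```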

Business Finance

Money required for carrying out business activities is called business finance. Almost all business activities require some finance. Finance is needed to establish a business, to run it, to modernize it, and to expand or diversify it. It is required for buying a variety of assets, which may be tangible like machinery, furniture, factories, buildings and offices, or intangible such as trademarks, patents and technical expertise.

Also, finance is central to running the day-to-day operations of a business, like buying materials, paying bills and salaries, collecting cash from customers, etc., and is needed at every stage in the life of a business entity. Availability of adequate finance is very crucial for the survival and growth of a business.

The Scope of Business Finance

Scope means the area of research or study that is covered by a subject. The scope of business finance is hence a broad concept. Business finance studies, analyses and examines wide-ranging aspects related to the acquisition of funds for business and the allocation of those funds. Some of the fields covered by business finance are:

  1. Financial planning and control

A business firm must carry out its financial analysis and planning. To do this, the financial manager should have knowledge of the financial situation of the firm. On the basis of this information, he or she prepares plans and management strategies for the future financial situation of the firm under different economic scenarios.

The financial budget serves as the basis of control over financial plans. On the basis of the budget, firms find out the deviation between the plan and actual performance and try to correct it. Hence, business finance includes financial planning and control.

  2. Financial Statement Analysis

One of the scopes of business finance is to analyse financial statements. It also analyses the financial situations and problems that arise in the promotion of a business firm. This covers financial aspects related to the promotion of a new business, administrative difficulties in the way of expansion, and necessary adjustments for the rehabilitation of a firm in difficulties.

  3. Working Capital Budget

The financial decision making that relates to current assets or short-term assets is known as working capital management. Short-term survival is a requirement for long-term success and this is the important factor in a business. Therefore, the current assets should be efficiently managed so that the business won’t suffer any inadequate or unnecessary funds locked up in the future. This aspect implies that the individual current assets such as cash, receivables, and inventory should be very efficiently managed.

Nature and Significance of Business Finance

Business is related to the production and distribution of goods and services for the fulfilment of the requirements of society. For effectively carrying out its various activities, a business requires finance, which is called business finance. Hence, business finance is called the lifeblood of any business; a business would get stranded unless there are sufficient funds available for utilization. The capital invested by the entrepreneur to set up a business is not sufficient to meet all the financial requirements of the business.

The following characteristics can be derived from the definitions.

  1. Business finance comprises of all types of funds namely short, medium and long term used in business.
  2. All types of organisations namely small, medium and large enterprises require business finance.
  3. The volume of business finance required varies from one business enterprise to another depending upon its nature and size. In other words, small and medium enterprises require relatively lower level of business finance than the large scale enterprises.
  4. The amount of business finance required differs from one period to another. In other words the requirement of business finance is heavy during the peak season while it is at low level during the dull season.
  5. The amount of business finance determines the scale of operations of business enterprises.

Significance of Business Finance

Business enterprise can function effectively and efficiently only with adequate business finance. It cannot expand its business operations without business finance. The success of any business firm depends, to   a larger extent, on the manner in which it mobilizes, uses and disburses its funds.

The following points highlight the significance of business finance.

  1. A firm with adequate business finance can easily start any business venture.
  2. Business finance helps the business organisation to purchase raw materials from the supplier easily to produce goods.
  3. The business firm can meet financial liabilities like prompt payment of salary and wages, expenses, etc., in time with the help of sound financial support.
  4. The sound financial support enables the enterprises to meet any unexpected or uncertain risks arising from business environment efficiently. For example economic slowdown, trade cycles, severe competition, shift in consumer preference, etc.
  5. Sound financial position empowers the enterprise to attract talented man power and introduce latest technology.