MIS for Human Resource Management

A Human Resource Information System (HRIS) manages the HR policies, processes, and people in an organization. Its features work together to tackle the multi-faceted complexities of HR processes. Every HRIS comes with an array of so-called ‘indispensable features’ that are claimed to cover other vital processes and services.

However, only a few of the HR software features are even marginally useful for SMBs. Most of them are mere marketing tactics. So, it is essential to tread carefully while choosing an HRIS vendor.

Nine core functionalities

  1. Centralized Database

With an automated database that collects, stores, and displays up-to-date, consistent information about the personnel, policies, and procedures in an organization, HR leaders can finally break up with spreadsheets and paper files.

A centralized database that is seamlessly integrated with other HR modules will not just facilitate virtualization but also offer great accessibility to all end-users. Any updates or changes made to the master database reflect immediately across all modules, saving the considerable time and effort the human resources department would otherwise put into matching and duplicating records manually.

  2. Recruitment Automation

This feature offers a complete applicant tracking system with a reporting engine to analyze trends and patterns in recruitment. It also integrates seamlessly with job portals, the internal website, and employment-service providers to reduce the chaos in hiring.

  3. Employee Onboarding

An employee self-service portal connected to a cloud-based database takes the necessary evil of paperwork out of the onboarding equation. Employees can complete the onboarding process in about 15 minutes using scanned documents, automated workflows, predefined checklists, and easy-to-use digital forms.

  4. Talent Management

Employees are the most valuable resources in any organization. However, talent management, i.e., attracting, recruiting, engaging, developing, and retaining employees, is a complicated process, and employee turnover is expensive. An HRIS with a dedicated talent management system will help the organization take better care of its employees.

  5. Payroll

Several SMBs already use a stand-alone system to manage their payroll process, and no one is eager to fold it into an HRIS. There may also be concerns about whether the payroll module in an HRIS can handle the full spectrum of services that a dedicated payroll system does.

On the other hand, an HRIS integrates seamlessly with the accounting module and attendance management system, so the need for manual reconciliation is eliminated, reducing the room for manual errors and the legal/financial compliance issues they cause.

  6. Employee Self-Service

SMBs often find it hard to keep their employee-related data updated. Employee Self-Service (ESS) is an effective way to manage this problem. Giving employees access to view and manage their personal information (profile, time off, benefits, or payroll) can reduce the time HR staff spend on mundane clerical tasks.

Employees don’t have to engage in a mail chase to retrieve their leave balance or payslips. With a self-service portal, every HR process from employee onboarding to reporting will become more efficient. If the HRIS has multi-channel accessibility, employees can view, edit, and retrieve all work-related information right from their mobile phones.

  7. Time and Absence Management

Managing employee timesheets, schedules, and tracking attendance manually involves an immense amount of HR labor. Keeping up with leave request emails and tracking employee absence while sketching out a schedule to manage the changing workload is a huge ordeal. On top of that, exporting all attendance data to the payroll system is time-consuming and tedious.

If not handled properly, timesheets and vacation requests have the potential to stir up a lot of trouble. Mishandled time-off requests can leave a bad impression on the quality of life in the organization, and reduce employee satisfaction. So, streamlining the timesheet management and time-off process with an HRIS can control the manual errors and prevent possible disasters.

  8. Training

A training module enables organizations to offer a blended training experience to their staff to improve engagement, job satisfaction, and retention. They can also plan, track, and measure the impact of their training programs to ensure effectiveness.

  9. Succession Planning

This particular module enables organizations to map the talent pipeline and rankings. Once key positions are identified, it becomes easy to create employee-specific development plans.

Replacement of Items that Deteriorate with Time

Reasons for Replacement of Equipment:

Equipment is generally considered for replacement for the following reasons:

(i) Deterioration:

It is the decline in performance due to wear and tear or misalignment, indicated by:

(i) Increase in maintenance costs.

(ii) Reduction in product quality and rate of production.

(iii) Increase in labour costs, and

(iv) Loss of operating time due to breakdowns.

(ii) Obsolescence:

Technology is progressing fast; newer and better equipment is being developed and produced every year.

The existing equipment becomes obsolete due to this advancement in technology, and the unwarranted manufacturing costs arising from such obsolete equipment will:

(i) Reduce profits.

(ii) Impair competition.

(iii) Cause loss in value of machinery.

(iii) Inadequacy:

When the existing equipment becomes inadequate to meet the demand, or cannot raise the production rate to the desired level, the question of replacement arises.

(iv) Working Conditions:

Old equipment and machinery may be considered for replacement when they create unpleasant or unsafe conditions for workers, lead to accidents, or make the environment noisy and smoky.

(v) Economy:

The existing units/equipment have outlived their effective life, and it is not economical to continue with them.

Factors Necessary for Replacement of Equipment:

The factors which necessitate the replacement of machinery and equipment can be classified as:

(i) Technical Factors.

(ii) Financial or Cost Factors.

(iii) Intangible Factors.

(i) Technical Factors:

These consider:

(i) Whether the present equipment has become obsolete due to technological developments.

(ii) Whether the present equipment is inadequate to meet increased product demand.

(iii) Whether the present equipment has deteriorated due to wear and tear, as indicated by an increase in maintenance costs, a reduction in product quality and rate of output, and an increase in labour cost and downtime.

(iv) Reduced safety as compared to a newly available machine.

(v) Whether the present equipment can provide the desired surface finish.

(vi) Whether the present equipment is polluting or spoiling the working conditions of the plant.

(vii) The possibility of performing additional operations with a new machine.

(viii) Whether the present equipment produces noise and vibrations that distract the workers.

(ix) How often the present equipment requires maintenance and repairs.

(ii) Financial/Cost Factors:

These are:

(i) High repair and maintenance cost of the existing equipment/machinery.

(ii) The possibility of combining some operations, and the resulting increase in productivity, with the challenger (the new machine).

(iii) The initial cost of the challenger.

(iv) The salvage value of the existing equipment, and of the challenger at the end of its useful life.

(v) Improvement in productivity and quality through use of the challenger.

(vi) Saving in space through use of the new machine.

(vii) Reduction in scrap and waste through use of the new machine.

(viii) Downtime cost of the present machine.

(ix) Reduction in the cost of jigs and fixtures by using the challenger.

(x) The effect on power consumption of replacing the existing machine with the new one.

(iii) Intangible Factors:

These factors involve sociological and humanitarian considerations with far-reaching effects:

(i) For example, replacing an existing machine that causes unpleasantness (such as noise and smoke pollution) or unsafe working conditions leading to accidents.

(ii) Replacement may cause displacement of workers.

At the time of replacement, a well-designed replacement policy should be adopted: rather than considering only the factors pertaining to the particular equipment involved, it should compare thoroughly all the existing equipment with its possible replacements.

For a sound economic comparison, all factors should be converted into costs and possible increases in revenue. Break-even analysis can be utilized for taking the replacement decision or for selecting among investment alternatives.

Problems in Replacement of Equipment:

The problem of equipment replacement is a routine phenomenon in industrial enterprises. It is normally experienced in systems where machines, individuals, or capital assets are the main job-performing units. It is a common phenomenon that the performance or efficiency of an item in a system deteriorates with the passage of time.

The remedy is either to adopt maintenance measures to restore the requisite level of performance, or to replace the item with a new one. Thus it is required to formulate the most economic replacement policy, in the best interest of the enterprise or system.

The various types of replacement problems can be expressed broadly in the following situations:

(i) Replacement of Equipment/Machine/Item which Deteriorate with Time:

This situation arises when the efficiency is measured as the discounted value of all future costs associated with each alternative. The simplest replacement model in such cases is one where the deterioration rate is predictable in terms of increasing maintenance costs and decreasing salvage value with time.

The maintenance cost of the machine/asset/equipment always increases with time, and a stage comes when the maintenance cost is so high that it is more economical to replace the machine with a new one.

In such cases the decision rule is: do not replace the equipment if the next year’s maintenance cost is less than the average annual cost to date, and replace it if the next year’s maintenance cost exceeds the average annual cost to date.

There are two methods to find the appropriate solution in this case i.e.:

(i) Annual cost method.

(ii) Present worth method.
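The present worth method named in (ii) can be sketched in a few lines: discount every future cost of each alternative back to today and choose the cheaper stream. The discount rate, the two alternatives, and all figures below are illustrative assumptions, not data from the text.

```python
# Present-worth comparison of two replacement alternatives (figures assumed).
r = 0.10  # discount rate (assumed)

def present_worth(capital, annual_costs, salvage):
    """Discounted value of all costs over the planning horizon."""
    pw = capital
    for t, cost in enumerate(annual_costs, start=1):
        pw += cost / (1 + r) ** t
    # Salvage recovered at the end of life reduces the total.
    pw -= salvage / (1 + r) ** len(annual_costs)
    return pw

keep_old = present_worth(0, [2500, 2800, 3200], 500)    # keep the existing machine
buy_new = present_worth(6000, [800, 900, 1000], 2000)   # buy the challenger
```

With these assumed numbers the present worth of keeping the old machine is the lower of the two, so replacement would be deferred; changing the discount rate or the cost streams can reverse the decision.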

(ii) Replacement of Items that Fail Completely and are Expensive to Replace:

In general it is a common characteristic that the probability of failure of any item in a system increases with the period of use or passage of time. A machine or equipment consisting of a number of parts/items may be considered as a system.

The system may be such that the failure of any one item results in a breakdown of the whole system. Such a breakdown implies loss of production, idle labour, idle inventory, and idling of the other units of the system.

It is possible that the nature of the item requiring replacement is such that immediate replacement is not available or possible. Thus there is a necessity to formulate an appropriate replacement policy in such cases.

There are two possible solutions:

(a) Individual Replacement Policy:

Whenever any item fails, it should be immediately replaced.

(b) Group Replacement Policy:

All the items/parts are replaced after a certain period of time T, in spite of their still being in working condition, with the provision that any item that fails before time T is replaced immediately. This approach decreases the probability of breakdown in the system, and is essential where sudden breakdown of the equipment/machine is hazardous.

Such a policy requires a twofold consideration, namely:

(i) The rate of individual replacement during the period.

(ii) The total cost incurred on individual and group replacements during the selected/chosen period of replacement.

The period for which the total cost is minimum is considered as optimal.

The following information is required to take a decision in such cases:

(A) Probability of failure at different periods of time

(B) Loss incurred due to these failures

(C) Cost of individual replacement and

(D) Cost of group replacement.
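The twofold consideration above can be sketched numerically: estimate the expected failures in each period with a renewal recursion, then compare the average per-period cost of group replacement at different intervals. All quantities below (the number of items N, the failure probabilities, and the two replacement costs) are illustrative assumptions, and the sketch uses one common convention, namely that failures occurring before the group replacement are replaced individually.

```python
# Group vs. individual replacement cost (illustrative figures).
N = 1000                          # number of items in the system (assumed)
p = [0.0, 0.1, 0.2, 0.3, 0.4]     # p[t]: probability an item fails in its t-th period (assumed)
c_ind = 4.0                       # cost of one individual replacement (assumed)
c_grp = 1.0                       # per-item cost of a group replacement (assumed)

# Expected number of failures in each period (renewal recursion):
# new failures from the original cohort plus failures of earlier replacements.
T_max = len(p) - 1
failures = [0.0] * (T_max + 1)
for t in range(1, T_max + 1):
    failures[t] = N * p[t] + sum(failures[x] * p[t - x] for x in range(1, t))

def avg_cost(T):
    """Average cost per period if we group-replace every T periods,
    replacing failures in periods 1..T-1 individually as they occur."""
    return (N * c_grp + c_ind * sum(failures[1:T])) / T

# The optimal group-replacement interval minimizes the average cost per period.
best_T = min(range(1, T_max + 1), key=avg_cost)
```

With these assumed numbers, group replacement every two periods is cheapest, and it also beats a purely individual policy (whose steady-state cost is N × c_ind divided by the mean item life).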

This model is represented by:

  • Increasing maintenance cost.
  • Decreasing salvage value.

Assumption

  • Increased age reduces efficiency

Generally, the criterion for measuring efficiency is the discounted value of all future costs associated with each policy.

Let
C = the capital cost of a certain item, say a machine
S(t) = the selling or scrap value of the item after t years.
F(t) = operating cost of the item at time t
n = optimal replacement period of the item

Now, the net capital cost of the machine, allowing for its resale after n years, is C − S(n), and since the total maintenance cost incurred on the machine during n years is ∫₀ⁿ F(t) dt, the total cost T incurred on the machine during n years is given by:

T = C − S(n) + ∫₀ⁿ F(t) dt

Thus, the average annual total cost incurred on the machine per year during n years is given by:

TA(n) = (1/n) [C − S(n) + ∫₀ⁿ F(t) dt]

To determine the optimal period for replacing the machine, the above function is differentiated with respect to n and equated to zero.

dTA/dn = −(1/n²) [C − S(n)] − (1/n²) ∫₀ⁿ F(t) dt + F(n)/n

(treating the scrap value S(n) as approximately constant with respect to n).

Equating dTA/dn = 0, we get:

F(n) = (1/n) [C − S(n) + ∫₀ⁿ F(t) dt]

That is, F(n) = TA

Thus, we conclude that an item should be replaced when the average cost to date becomes equal to the current maintenance cost.
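The rule just derived, i.e. replace when the current maintenance cost reaches the average cost to date, can be checked with a short sketch. The capital cost, scrap value, and maintenance stream below are illustrative assumptions.

```python
# Optimal replacement age by the average annual cost rule (figures assumed).
C = 6000                                             # capital cost (assumed)
S = 0                                                # scrap value (assumed)
maint = [1000, 1200, 1400, 1800, 2300, 2800, 3400]   # F(t) for years 1..7 (assumed)

def average_cost(n):
    """Average annual total cost TA(n) if the machine is kept n years."""
    return (C - S + sum(maint[:n])) / n

# The year with the lowest average annual cost is the optimal replacement
# age; it is also the last year whose following year's maintenance cost
# exceeds the running average, matching the rule in the text.
best_n = min(range(1, len(maint) + 1), key=average_cost)
```

With these assumed numbers the average annual cost falls until year 5 and rises afterwards, and year 6's maintenance cost is the first to exceed the average to date, so both criteria agree on replacement after year 5.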

Replacement and Time Value of Money

The time value of money is a principle of financial theory which states that people, given the preference, would like to receive money sooner rather than later. It lays the foundation for many theories, and it determines the discount rate one should apply to future cash flows.

The time value of money says that the worth of a unit of money changes over time: the value of one rupee today will be lower in the future. The whole concept is about the present value and future value of money. There are two methods for ascertaining the worth of money at different points in time, namely compounding and discounting. The compounding method is used to find the future value of present money; conversely, discounting is a way to compute the present value of future money.

This principle does not calculate the market value of an asset; instead, it calculates the intrinsic value, which is the total value required to own the asset for the entirety of its lifetime. For a better understanding, consider an example: every business requires machinery for its operations.

Machinery is a necessity for every business, so whenever a machine (in business terms, an asset) reaches the end of its life, it needs to be replaced. The machine used to replace the current one will, in turn, need to be replaced at the end of its own life, and so the process goes on for as long as the business runs.

Each replacement of machines results in an outflow of cash, and these amounts can be accumulated as depreciation. There are many methods to calculate the depreciation of machinery, including the written-down value method and the straight-line method.
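The two depreciation methods named above can be illustrated with a short sketch; the cost, salvage value, useful life, and written-down-value rate below are all assumed figures.

```python
# Straight-line vs. written-down value depreciation (figures assumed).
cost = 100000        # purchase cost of the machine (assumed)
salvage = 10000      # residual value at end of life (assumed)
life = 5             # useful life in years (assumed)

# Straight-line: an equal charge each year.
straight_line = (cost - salvage) / life

# Written-down value: a fixed percentage of the remaining book value,
# so the charge is largest in the first year and shrinks thereafter.
rate = 0.3           # assumed WDV rate
book = cost
wdv_charges = []
for _ in range(life):
    charge = book * rate
    wdv_charges.append(charge)
    book -= charge
```

Under these assumptions the straight-line charge is 18,000 every year, while the written-down value method charges 30,000 in the first year and progressively less afterwards, leaving a residual book value at the end of the life.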

Compounding vs. Discounting:

  • Meaning: Compounding is the method used to determine the future value of a present investment; discounting is the method used to determine the present value of future cash flows.
  • Concept: Compounding asks, “If we invest some money today, what amount will we get at a future date?”; discounting asks, “What amount do we need to invest today to get a specific amount in the future?”
  • Uses: Compounding uses the compound interest rate; discounting uses the discount rate.
  • Known value: In compounding the present value is known; in discounting the future value is known.
  • Factor: Compounding uses the future value factor (compounding factor); discounting uses the present value factor (discounting factor).
  • Formula: FV = PV (1 + r)^n for compounding; PV = FV / (1 + r)^n for discounting.

Assumptions

When applying the time value of money principle, several assumptions are made so that depreciation can be demonstrated in a simple manner. These include:

  • Each machine that is replaced will have the same cost
  • The productivity of each asset remains the same during its lifetime
  • The asset and its replacement have the same total lifetime
  • There will be no residual value of the asset at the end of its lifetime
  • The asset used for replacement will have the same productivity
  • The assets used do not require any repairs or maintenance during their lifetime
  • No taxation is involved in the transaction
  • Payment for the replacement asset is made upfront
  • Replacement of the asset is done immediately

Present Value and Future Value

While compounding the value for the depreciation of assets, you need to keep in mind two important values: present value and future value. Future value is the value of the asset after a certain time period, while present value is its value today, calculated after deducting the residual value.

FV = PV (1 + r)^n

PV = FV / (1 + r)^n

where FV = future value, PV = present value, r = rate of interest, and n = number of periods.
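The two formulas translate directly into code; the rate, amount, and horizon used in the example calls are illustrative.

```python
# Compounding and discounting as one-line functions (figures in the calls assumed).
def future_value(pv, r, n):
    """Compounding: FV = PV * (1 + r)^n."""
    return pv * (1 + r) ** n

def present_value(fv, r, n):
    """Discounting: PV = FV / (1 + r)^n."""
    return fv / (1 + r) ** n

fv = future_value(1000, 0.10, 3)   # about 1331: 1000 compounded at 10% for 3 years
pv = present_value(1331, 0.10, 3)  # about 1000: the inverse operation
```

Note that the two functions are exact inverses of each other for the same r and n, which is the sense in which discounting “undoes” compounding.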

Compounding

To understand the concept of compounding, first consider the term future value. The money you invest today will grow and earn interest, which changes its value over time; the worth of the investment at a future date is known as its future value. Compounding refers to the process of earning interest on both the principal amount and the accrued interest, by reinvesting the entire amount to generate more interest.

Compounding is the method used to find the future value of a present investment. The future value can be computed by applying the compound interest formula:

FV = PV (1 + R)^n

where n = number of years and R = rate of return on investment.

Discounting

Discounting is the process of converting a future amount into its present value. The current value of a given future sum is known as its present value. The discounting technique ascertains the present value of future cash flows by applying a discount rate. The following formula is used to find the present value of a stream of future sums:

PV = FV₁/(1 + R)¹ + FV₂/(1 + R)² + … + FVₙ/(1 + R)ⁿ

where 1, 2, 3, …, n represent future years, FVₜ = the cash flow generated in year t, and R = the discount rate.

Simulation Advantages, Limitations

Advantages of Simulation

Simulation offers various advantages and has applications in every field, but this article covers the advantages of simulation in mechanical design only.

System Behavior Analysis

Simulations are used to analyze the behavior of a system without actually building it. Engineers can explore alternative solutions and simulate various designs, finalizing the best solution for prototyping before building anything.

Reduces Manufacturing Cost

Simulation offers the advantage of designing products right the first time, so manufacturing and testing costs are reduced significantly.

Faster Products to Market

Simulation studies reduce the number of design iterations, so design cycle time shrinks significantly. This gives companies the advantage of launching products in the market faster.

Design Analysis: Visual Output

Simulation results can be exported as graphical representations, which helps in analyzing system and product performance.

Problem Solving

Simulations are used to analyze a problem at various levels, which leads to faster and more effective problem solving.

Value Engineering

Simulation studies help reduce the manufacturing cost of existing and new products. For example, mold-flow analysis is used to reduce injection-molding cycle time.

Disadvantages of Simulation studies

Along with so many advantages, simulation studies also have some disadvantages, but on balance they are good for a design.

Higher Initial Investment

Disadvantages of simulation software include a high initial investment, such as high software cost and computing-power requirements, which small companies often cannot afford.

Accurate Boundary Conditions and Input Data

The accuracy of simulation results depends on the input data and boundary conditions. System boundary conditions include environmental temperature, pressure, and material properties.

Boundary conditions therefore need to be defined accurately to achieve good results, which requires a lot of experience.

Not 100% Accurate

Simulation software is not 100% accurate, so you should expect some discrepancies between simulation results and tested products. With time, engineers can refine their simulation models.

Main advantages of simulation include:

  • Study the behavior of a system without building it.
  • Results are generally more accurate than those of simplified analytical models.
  • Helps to find unexpected phenomena and behaviors of the system.
  • Easy to perform “what-if” analysis.

Main disadvantages of simulation include:

  • Expensive to build a simulation model.
  • Expensive to conduct simulation.
  • Sometimes it is difficult to interpret the simulation results.

Two Persons Zero Sum Games

The simplest model is a duopoly market in which each duopolist attempts to maximise his market share.

Given this goal, whatever a firm gains (by increasing its share of the market) the other firm loses (because of the decrease in its share).

Thus any gain of one rival is offset by the loss of the other, and the net gain sums up to zero. Hence the name ‘zero-sum game’.

The assumptions of the model are:

  1. The firms have a given, well-defined goal. In our particular example the goal is maximisation of the market share.
  2. Each firm knows the strategies open to it and to its rival, or concentrates on the most important of these strategies.
  3. Each firm knows with certainty the payoffs of all combinations of the strategies being considered. This implies that the firm knows its total revenue, total costs and total profit from each combination of strategies.
  4. The actions chosen by the duopolists do not affect the total size of the market.
  5. Each firm chooses its strategy ‘expecting the worst from its rival’, that is, each firm acts in the most conservative way, expecting that the rival will choose the best possible counter-strategy open to him. This behaviour is defined as ‘rational’.
  6. In the zero-sum game there is no incentive for collusion, given assumption 4, since the goals of the firms are diametrically opposed.

In order to find the equilibrium solution we need information on the payoff matrix of the two firms. In our example the payoffs will be shares of the market resulting from the adoption of any two strategies by the rivals. Assume that Firm I has four strategies open to it and Firm II has five strategies. The payoff matrices of the duopolists are shown in tables 19.2 and 19.3.

Clearly the sum of the payoffs in corresponding cells of the two payoff tables adds up to unity, since the numbers in these cells are shares, and the total market is shared between the two firms. In general, in the two-person zero-sum game we need not write both payoff matrices because of the nature of the game: the goals are opposing, and, in our example, the payoff table of Firm I contains indirectly information about the payoff of Firm II. Still we start by showing both tables, and then we show how the equilibrium solution can be found from only the first payoff matrix.

Choice of strategy by Firm I:

Firm I examines the outcomes of each strategy open to it. That is, Firm I examines each row of its payoff matrix and finds the worst outcome of the corresponding strategy, because the firm expects the rival to adopt the action most advantageous to himself. This is the behavioural rule implied by assumption 5 of this model.

Thus:

If Firm I adopts strategy A1, the worst outcome that it may expect is a share of 0.10 (which will be realized if the rival Firm II adopts its most favourable strategy B1).

If Firm I adopts strategy A2, the worst outcome will be a share of 0.30 (if the rival adopts the best action for him, B2).

If Firm I adopts strategy A3, the worst outcome will be a share of 0.20 (if Firm II chooses the best open alternative, B3).

If Firm I adopts strategy A4, the worst outcome will be a share of 0.15 (which would be realised by action B2 of Firm II).

Among all these minima (that is, among the above worst outcomes) Firm I chooses the maximum, the ‘best of the worst’. This is called a maximin strategy, because the firm chooses the maximum among the minima. In our example the maximin strategy of Firm I is A2, that is, the strategy which yields a share of 0.30.

Choice of strategy by Firm II:

Firm II behaves in exactly the same way. The only difference is that Firm II examines the columns of its payoff table, because these columns include the results-payoffs of each of the strategies open to Firm II. For each strategy, that is, for each column, Firm II finds the worst outcome (on the assumption that the rival will choose the best), and among these worst outcomes Firm II chooses the best. Thus, if Firm II uses its own payoff table, its behaviour is a maximin behaviour identical to the behaviour of Firm I.

However, in the zero-sum game only one payoff matrix is adequate for the equilibrium solution. In our example the first payoff table will be used not only by Firm I but also by Firm II. Thus, concentrating on the first payoff table, we may restate the decision-making process of Firm II as follows. Firm II examines the columns of the (first) payoff matrix because these columns contain the information about the payoffs of its strategies.

For each column-strategy Firm II finds the maximum payoff (of Firm I), because this is the worst situation Firm II will face if it adopts the strategy corresponding to that column. Thus for strategy B1 the worst outcome (for Firm II) is 0.40; for strategy B2 the worst outcome is 0.30; for strategy B3 the worst outcome is 0.50; for strategy B4 the worst result is 0.60; for strategy B5 the worst result is 0.50. Among these maxima of each column-strategy, Firm II will choose the strategy with the minimum value. Thus the strategy of Firm II is a minimax strategy, since it involves the choice of a minimum among the maxima payoffs. (Table 19.4.)

It should be stressed that although different terms are used for the choice of the two firms (maximin behaviour of Firm I, minimax behaviour of Firm II), the behavioural rule for both firms is the same: each firm expects the worst from its rival.

In our example the equilibrium solution is strategy A2 for Firm I and B2 for Firm II. This solution yields shares of 0.30 for Firm I and 0.70 for Firm II. It is an equilibrium solution because it is the preferred one for both firms. This solution is called the ‘saddle point’, and the preferred strategies A2 and B2 are called ‘dominant strategies’.

It should be clear that there exists no such equilibrium (saddle) solution if there is no payoff which is preferred by both firms simultaneously. Under certain mathematical conditions other solutions and strategy choices can be determined. The analysis of the resulting mixed strategies requires a sophisticated exposition of utility theory and random selection which is beyond the scope of this book.
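The maximin/minimax procedure described above can be verified with a short sketch. Tables 19.2–19.4 are not reproduced here, so the 4×5 payoff matrix below is hypothetical, constructed to match the worst-case shares quoted in the text (row minima 0.10, 0.30, 0.20, 0.15 and column maxima 0.40, 0.30, 0.50, 0.60, 0.50).

```python
# Maximin/minimax on a hypothetical payoff matrix of Firm I's market shares
# (rows = Firm I's strategies A1..A4, columns = Firm II's strategies B1..B5).
payoff = [
    [0.10, 0.25, 0.35, 0.30, 0.50],   # A1
    [0.40, 0.30, 0.50, 0.60, 0.45],   # A2
    [0.35, 0.25, 0.20, 0.45, 0.40],   # A3
    [0.25, 0.15, 0.35, 0.50, 0.30],   # A4
]

row_minima = [min(row) for row in payoff]         # Firm I's worst case per strategy
maximin = max(row_minima)                         # Firm I picks the best of the worst
col_maxima = [max(col) for col in zip(*payoff)]   # Firm II's worst case per strategy
minimax = min(col_maxima)                         # Firm II picks the best of the worst

# A saddle point exists when maximin == minimax.
firm1_strategy = row_minima.index(maximin) + 1    # expected: 2, i.e. A2
firm2_strategy = col_maxima.index(minimax) + 1    # expected: 2, i.e. B2
```

Here maximin and minimax coincide at 0.30, so (A2, B2) is the saddle point, and Firm II's share is the complement, 0.70, as in the text.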

  1. Uncertainty Model:

The assumption that each firm knows with certainty the exact value of the payoff of each strategy is unrealistic. The most probable situation in the real business world is that the firm, by adopting a certain strategy, may expect a range of results for each counter-strategy of the rival, each result with an associated probability. Thus the payoff matrix is constructed so as to include the expected value of each payoff.

The expected value is the sum of the products of the possible outcomes of a pair of strategies (adopted by the two firms), each multiplied by its probability:

E(Gij) = Σ (s = 1 to n) gsi · Ps

where gsi = the sth of the n possible outcomes of strategy i of Firm I (given that Firm II has chosen strategy j), and

Ps = the probability of the sth outcome of strategy i.

For example, assume that Firm I chooses strategy A1 and Firm II reacts with strategy B1. This pair of simultaneous strategies may yield the shares for Firm I shown in the second column of table 19.5, each with a certain probability. Thus the expected payoff of the pair of strategies A1 and B1 is

E(G11) = (0.00)(0.00) + (0.05)(0.05) + (0.15)(0.05) + … + (0.95)(0.02) + (1)(0) = 0.458

In a similar way we find the expected payoff of all combinations of strategies. Given the matrix of expected payoffs, the behavioural pattern of the firms is the same as in the certainty model.

That is:

Firm I adopts the maximin strategy. It finds for each row the minimum expected payoff, and among these minima the firm chooses the one with the highest value (the maximum among the minima).

Firm II adopts the minimax strategy. It finds for each column the maximum expected payoff, and among these maxima Firm II chooses the one with the smallest value (the minimum among the maxima).

Although the uncertainty zero-sum game seems simple, its assumptions are quite stringent:

  1. The firms maximise their expected payoffs.
  2. The zero-sum game assumes that both firms assign the same probability to each pair of payoffs; they make the same judgement. This implies that the firms must have the same information and the same objective criteria with which to evaluate the probabilities of the different payoffs. Otherwise the probability distribution of the payoffs will not be objective.
  3. The firms maximise their total utility, and the utility of each payoff is proportional to the value assumed by the payoff.

The above assumptions are clearly strong and unrealistic. Furthermore, the basic condition of the zero-sum game that the ‘gain’ of one firm is equal to the ‘loss’ of the other, is rarely met in the real business world. Usually the ‘gains’ are not ‘offset’ by equal ‘losses’. Only in the case of a share goal, and in the rare case of extinction tactics, do we have a zero-sum game. In most cases we have a non-zero-sum game.

Pure and Mixed Strategy

Pure strategy

A pure strategy is an unconditional, defined choice that a player makes in a situation or game. For example, in the game of Rock-Paper-Scissors, if a player chooses to play only scissors on each and every independent trial, regardless of the other player’s strategy, choosing scissors is that player’s pure strategy: the probability of choosing scissors equals 1, and all other options (paper and rock) are chosen with probability 0. The set of all options (i.e. rock, paper, and scissors) available in this game is known as the strategy set.

Mixed strategy

A mixed strategy is an assignment of probability to all choices in the strategy set. Using the example of Rock-Paper-Scissors, if a person’s probability of employing each pure strategy is equal, then the probability distribution of the strategy set would be 1/3 for each option, or approximately 33%. In other words, a person using a mixed strategy incorporates more than one pure strategy into a game.

The definition of a mixed strategy does not rule out the possibility that one or more options are never chosen (e.g. p(scissors) = 0.5, p(rock) = 0.5, p(paper) = 0). This means that, in a way, a pure strategy can also be considered a mixed strategy at its extreme, with a binary probability assignment (setting one option to 1 and all others to 0). For this article, we shall say that pure strategies are not mixed strategies.

In the game of tennis, each point is a zero-sum game with two players (one being the server S, and the other being the returner R). In this scenario, assume each player has two strategies (forehand F, and backhand B). Observe the following hypothetical in the payoff matrix:

The strategies F_S or B_S are observed for the server when the ball is served to the side of the service box closest to the returner’s forehand or backhand, respectively. For the returner, the strategies F_R and B_R are observed when the returner moves to the forehand or backhand side to return the serve, respectively. This gives us the payoffs when the returner reads the serve correctly (F_S, F_R or B_S, B_R) or incorrectly (F_S, B_R or B_S, F_R). The payoffs to each player for every action are given as pure strategy payoffs, since each player is only guaranteed that payoff if the opponent’s strategy is employed 100% of the time. Given these pure strategy payoffs, we can calculate the mixed strategy payoffs by working out the probability with which each player chooses each strategy.

Now suppose you are Roger, the server. It is apparent that a pure strategy would be exploitable: if you serve to the backhand 100% of the time, it would be easy for the opponent to catch on and cover the backhand side more often than the forehand, maximizing his expected payoff. The same goes for serving to the forehand. But how often should you mix your strategy and serve to each side so as to minimize your opponent’s chances of winning? Calculating these probabilities gives us the mixed strategy Nash equilibrium: the probabilities with which each strategy is used so as to minimize the opponent’s expected payoff. In what follows, we will look at how to find mixed strategy Nash equilibria and how to interpret them.

Pure and Mixed Strategies:

In a pure strategy, players adopt the strategy that provides the best payoffs. In other words, a pure strategy is one that provides maximum profit or the best outcome to a player. Therefore, it is regarded as the best strategy for every player of the game. In the previously cited example (Table-1), increasing the prices of the organizations’ products is the best strategy for both of them.

This is because if both of them increase the prices of their products, they would earn maximum profits. However, if only one of the organizations increases the prices of its products, it would incur losses. In such a case, an increase in prices is regarded as a pure strategy for organizations ABC and XYZ.

On the other hand, in a mixed strategy, players adopt different strategies to get the best possible outcome. For example, in cricket a bowler cannot deliver the same type of ball every time, because that would make the batsman aware of the type of ball to expect. In such a case, the batsman may score more runs.

However, if the bowler varies the delivery every time, the batsman may be left puzzled about the type of ball he would be getting the next time.

Therefore, the strategies adopted by the bowler and the batsman would be mixed strategies, which are shown in Table-2:

In Table-2, when the batsman’s expectation and the bowler’s ball type are the same, the batsman’s scoring percentage would be 30%. However, when the batsman’s expectation differs from the type of ball he gets, the scoring percentage reduces to 10%. If either the bowler or the batsman uses a pure strategy, he may suffer a loss.

Therefore, it is preferable that both the bowler and the batsman adopt mixed strategies in this case. For example, suppose the bowler delivers spin balls and fastballs in a 50-50 combination and the batsman predicts the 50-50 combination of spin and fast balls. In such a case, the batsman’s average scoring rate would equal 20%.

This is because each of the four combinations now occurs with probability 0.25, and the average of the four payoffs can be derived as follows:

0.25(30%) + 0.25(10%) + 0.25(30%) + 0.25(10%) = 20%

However, it may be that when the bowler delivers a 50-50 combination of spin balls and fastballs, the batsman is not able to predict the right type of ball every time; this would decrease his average scoring rate below 20%. Similarly, suppose the bowler delivers a 60-40 combination of fast and spin balls respectively, while the batsman still expects a fastball or a spin ball at random with equal probability. In such a case, the batsman’s average remains 20%.

The probabilities of four outcomes now become:

Anticipated fastball and fastball thrown: 0.50*0.60 = 0.30

Anticipated fastball and spin ball thrown: 0.50*0.40 = 0.20

Anticipated spin ball and spin ball thrown: 0.50*0.40 = 0.20

Anticipated spin ball and fastball thrown: 0.50*0.60 = 0.30

When we multiply the probabilities with the payoffs given in Table-2, we get

0.30(30%) + 0.20(10%) + 0.20(30%) + 0.30(10%) = 20%

This shows that the outcome does not depend on the bowler’s combination of fastballs and spin balls, but on the batsman’s ability to predict which type of ball he will get from the bowler.
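Both calculations above can be verified with a short script. The 30%/10% payoffs are those from Table-2; the helper function name is my own:

```python
# Payoffs from Table-2: a 30% scoring rate when the batsman's guess matches
# the ball type, 10% when it does not.
MATCH, MISMATCH = 0.30, 0.10

def expected_runs(p_fast_bowled, p_fast_guessed):
    """Batsman's expected scoring rate, given the probability the bowler
    delivers a fastball and the probability the batsman guesses fastball."""
    p_spin_bowled = 1 - p_fast_bowled
    p_spin_guessed = 1 - p_fast_guessed
    return (p_fast_guessed * p_fast_bowled * MATCH +      # guessed fast, got fast
            p_fast_guessed * p_spin_bowled * MISMATCH +   # guessed fast, got spin
            p_spin_guessed * p_spin_bowled * MATCH +      # guessed spin, got spin
            p_spin_guessed * p_fast_bowled * MISMATCH)    # guessed spin, got fast

print(round(expected_runs(0.5, 0.5), 2))  # 0.2 -- the 50-50 case
print(round(expected_runs(0.6, 0.5), 2))  # 0.2 -- 60-40 bowler mix, same result
```

As long as the batsman guesses 50-50, any bowler mix yields the same 20% average, which is exactly the indifference property of a mixed-strategy equilibrium.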

Queuing Theory

Queuing theory is the study of queues and the random processes that characterize them. It deals with making mathematical sense of real-life scenarios, for example a crowd of people queuing up at a bank, or the tasks queuing up on your computer’s back end.

In queuing theory we often want to find out how long wait times or queue lengths are, and we can use models to do this. These models are typically important in business and software applications, and queueing theory is often considered a part of operations research.

About Queuing

Any queuing activity can be summarized as entities (customers in your supermarket queue, or jobs in a computer queue) trying to get through an activity (waiting to be served). Queues happen when we can’t all access the activity at the same time: when it is not economically efficient to have enough checkout lines for everyone to go right through as soon as they were ready, or there isn’t enough server space to do an unlimited amount of computer tasks at one moment.

In queueing theory a queue does not refer simply to a neat row which is always first come, first served. This is one example of a queue, but not the only kind. A mob trying to rush for the door on Black Friday is considered a queue as well, as is a group of job applicants waiting for interviews who are picked randomly, one by one, to be interviewed.

Types of Queues and Types of Service

First In First Out, or First Come First Served, is fairly common in banking and commerce. It is the type of queue you get when you have people politely lined up, waiting for their turn.

Last In First Out is the opposite scheme; whoever has been waiting for the shortest time is served first. This type of queue management is common in asset management, where assets produced or acquired last are the ones used or disposed of first. For example: the most recent employees are often the ones laid off first.

Priority is where customers are served based on their priority level; these levels could be based on status, task urgency, or some other criteria.

Shortest Job First is when whoever needs the shortest amount of service gets taken care of first.

Processor Sharing is when everyone gets served, or half-served, at the same time; service capacity is distributed evenly among everyone waiting.
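These disciplines map naturally onto standard data structures. The following Python sketch, using a hypothetical set of jobs, illustrates which entity each of the FIFO, LIFO, Priority, and Shortest Job First disciplines would serve first:

```python
from collections import deque
import heapq

# Hypothetical jobs: (name, priority, service_time); a lower priority
# number means more urgent.
jobs = [("a", 2, 5), ("b", 1, 3), ("c", 3, 1)]

# FIFO / First Come First Served: serve in arrival order.
fifo = deque(jobs)
assert fifo.popleft()[0] == "a"

# LIFO: serve the most recent arrival first.
lifo = list(jobs)
assert lifo.pop()[0] == "c"

# Priority: serve the most urgent job first (min-heap keyed on priority).
prio = [(p, name) for name, p, _ in jobs]
heapq.heapify(prio)
assert heapq.heappop(prio)[1] == "b"

# Shortest Job First: serve the job with the least service time first.
sjf = [(t, name) for name, _, t in jobs]
heapq.heapify(sjf)
assert heapq.heappop(sjf)[1] == "c"
```

Processor Sharing has no single "next" entity, so it does not fit this pattern; it is usually modelled by giving every waiting job an equal slice of the service rate.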

There may be a single server, where a line of people or items must go through a single bottleneck, or parallel servers, where the same line is served by several servers. Or there may be multiple parallel queues, where each of several servers has its own queue or line.

Balking is when a customer decides not to join the queue because the wait time threatens to be too long. Reneging is similar, but occurs when a customer who has already waited decides to leave because they have wasted too much time. Jockeying is when a customer switches between queues in a multi-queue system, trying to orchestrate the shortest wait possible.

Standard Notation for Queueing Theory

To make life easier, there is standard notation for queueing theory that is used across the board. These standard symbols include:

λ: the mean arrival rate.

μ: the mean service rate.

n: the number of people in the system.

A: the arrival process probability distribution.

B: the service process probability distribution.

C: the number of servers.

D: the maximum number of customers allowed in the system at any given time, waiting or being served (without getting bumped).

E: the maximum number of customers total.

Queuing System Components

Input Source: The input source generates customers for the service mechanism. The most important characteristic of the input source is its size, which may be either finite or infinite. Note that the calculations are far easier for the infinite case; therefore, this assumption is often made even when the actual size is relatively large.

If the population size is finite, the analysis of the queuing model becomes more involved. The statistical pattern by which calling units are generated over time must also be specified; commonly, arrivals follow a Poisson process (equivalently, inter-arrival times follow an exponential distribution).

  1. Queue: It is characterized by the maximum permissible number of units that it can contain. Queues may be infinite or finite.
  2. Service Discipline: It refers to the order in which members of the queue are selected for service. Frequently, the discipline is first come, first served.

Following are some other disciplines:

  • LIFO (Last In First Out)
  • SIRO (Service In Random Order)
  • Priority System

Service Mechanism:

A specification of the service mechanism includes a description of the time to complete a service and the number of customers who are satisfied at each service event. The service mechanism also prescribes the number and configuration of servers. If there is more than one service facility, the calling unit may receive service from a sequence of these. At a given facility, the unit enters one of the parallel service channels and is completely serviced by that server. Most elementary models assume one service facility with either one or a finite number of servers. The following figure shows the physical layout of service facilities.

Unusual Customer/Server Behaviour

Customer’s Behaviour

  • A customer may not like to join the queue due to long waiting line.
  • A customer may leave the queue after waiting for sometime due to impatience.

Collusion. Several customers may cooperate and only one of them may stand in the queue.

Jockeying. When there are a number of queues, a customer may move from one queue to another in hope of receiving the service quickly.

Server’s Behaviour

Failure. The service may be interrupted due to failure of a server (machinery).

Changing service rates. A server may speed up or slow down, depending on the number of customers in the queue. For example, when the queue is long, a server may speed up in response to the pressure. On the contrary, it may slow down if the queue is very small.

Batch processing. A server may service several customers simultaneously, a phenomenon known as batch processing.

Assumptions of Queuing Theory

  • The source population has infinite size.
  • The inter-arrival time has an exponential probability distribution with a mean arrival rate of λ customer arrivals per unit time.
  • There is no unusual customer behaviour.
  • The service discipline is FIFO.
  • The service time has an exponential probability distribution with a mean service rate of μ service completions per unit time.

  • The mean arrival rate is less than the mean service rate, i.e., λ < μ.
  • There is no unusual server behaviour.
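Under exactly these assumptions the system is the classic single-server M/M/1 queue, whose steady-state measures have simple closed forms. A minimal Python sketch follows; the arrival and service rates in the example are hypothetical:

```python
def mm1_metrics(lam, mu):
    """Steady-state metrics of an M/M/1 queue: Poisson arrivals at rate
    lam (λ), exponential service at rate mu (μ), one server, FIFO,
    infinite population, and λ < μ for stability."""
    if lam >= mu:
        raise ValueError("requires λ < μ for a stable queue")
    rho = lam / mu            # server utilization
    return {
        "rho": rho,
        "L": rho / (1 - rho),         # mean number in the system
        "Lq": rho ** 2 / (1 - rho),   # mean number waiting in the queue
        "W": 1 / (mu - lam),          # mean time in the system
        "Wq": rho / (mu - lam),       # mean waiting time in the queue
    }

# Hypothetical example: 8 arrivals per hour, 10 service completions per hour.
m = mm1_metrics(8, 10)
print(m["rho"], m["L"], m["W"])  # 0.8 utilization, 4 in system, 0.5 h in system
```

Note how steeply congestion grows as λ approaches μ: at 80% utilization the average customer already spends half an hour in a system whose service alone takes six minutes.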

Application of Queuing Theory in Business Decision Making

Queues happen when resources are limited. In fact, queues make economic sense; no queues would equate to costly overcapacity. Queuing theory helps in the design of balanced systems that serve customers quickly and efficiently but do not cost too much to be sustainable. All queuing systems are broken down into the entities queuing for an activity.

At its most elementary level, queuing theory involves the analysis of arrivals at a facility, such as a bank or fast food restaurant, then the service requirements of that facility, e.g., tellers or attendants.

The origin of queuing theory can be traced back to the early 1900s, found in a study of the Copenhagen telephone exchange by Agner Krarup Erlang, a Danish engineer, statistician, and mathematician. His work led to the Erlang theory of efficient networks and the field of telephone network analysis.

  • Queuing theory is the study of congestion and waiting in line.
  • The theory can help with creating an efficient and cost-effective workflow, allowing the user to improve traffic flow.
  • Queuing theory assesses two key aspects: customer arrivals at the facility and the service requirements of the facility.

  • Often used as an operations management tool, queuing theory can address staffing, scheduling, and customer service shortfalls.

Transport Problems

The North-West Corner Rule is a method used to compute an initial feasible solution of the transportation problem. The name north-west corner is given to this method because the basic variables are selected starting from the extreme upper-left corner of the table.

The concept of North-West Corner can be well understood through a transportation problem given below:

In the table, three sources A, B and C, with production capacities of 50 units, 40 units and 60 units of product respectively, are given. Every day, three retailers D, E and F must be furnished with at least 20 units, 95 units and 35 units of product respectively. The transportation costs are also given in the matrix.

The prerequisite condition for solving the transportation problem is that demand should equal supply. In case demand is more than supply, a dummy origin is added to the table. The supply of the dummy origin will equal the difference between total demand and total supply, and the costs associated with the dummy origin will be zero.

Similarly, in case supply is more than demand, a dummy destination is created whose demand will be equal to the difference between supply and demand. Again, the cost associated with the dummy destination will be zero.

Once the demand and supply are equal, the following procedure is followed:

  1. Select the north-west or extreme upper-left cell of the matrix and assign as many units as possible to cell AD, within the supply and demand constraints. Here, 20 units are assigned to the first cell; this satisfies the demand of destination D while source A still has surplus supply.
  2. Now move horizontally and assign 30 units to the cell AE. Since 30 units are available with the source A, the supply gets fully saturated.
  3. Now move vertically in the matrix and assign 40 units to Cell BE. The supply of source B also gets fully saturated.
  4. Again move vertically, and assign 25 units to cell CE, the demand of destination E is fulfilled.
  5. Move horizontally in the matrix and assign 35 units to cell CF; with this, both the remaining supply and demand get saturated. Now the total cost can be computed.

The Total cost can be computed by multiplying the units assigned to each cell with the concerned transportation cost. Therefore,

Total Cost = 20*5+ 30*8+ 40*6+ 25*9+ 35*6 = Rs 1015
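The five steps above can be sketched as a small Python function. Since the full cost matrix is not reproduced here, the unoccupied cells below are filled with placeholder zeros; only the occupied cells enter the total, and those costs (AD=5, AE=8, BE=6, CE=9, CF=6) are the ones used in the calculation above:

```python
def north_west_corner(supply, demand):
    """Initial feasible allocation by the North-West Corner Rule.
    Returns a dict {(row, col): units}. Assumes total supply equals
    total demand (a balanced problem)."""
    supply, demand = list(supply), list(demand)  # copy; we mutate below
    i = j = 0
    alloc = {}
    while i < len(supply) and j < len(demand):
        x = min(supply[i], demand[j])   # ship as much as constraints allow
        alloc[(i, j)] = x
        supply[i] -= x
        demand[j] -= x
        if supply[i] == 0:              # row exhausted: move down
            i += 1
        else:                           # column exhausted: move right
            j += 1
    return alloc

# Sources A, B, C and destinations D, E, F from the example above.
supply = [50, 40, 60]
demand = [20, 95, 35]
# Only the occupied cells matter for the total; the unused cells are
# placeholder zeros because the full matrix is not given here.
cost = [[5, 8, 0],
        [0, 6, 0],
        [0, 9, 6]]

alloc = north_west_corner(supply, demand)
total = sum(units * cost[i][j] for (i, j), units in alloc.items())
print(alloc)   # {(0, 0): 20, (0, 1): 30, (1, 1): 40, (2, 1): 25, (2, 2): 35}
print(total)   # 1015
```

The allocation reproduces steps 1-5 exactly (AD=20, AE=30, BE=40, CE=25, CF=35) and the total cost of Rs 1015.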

General Structure of Transportation Problem

The Transportation Method of linear programming is applied to problems concerned with efficient transportation routes, i.e. how the product is transported from different sources of production to different destinations such that the total transportation cost is minimized.

Here, origin means the place where the product originates or is manufactured for ultimate sale, while the places where the product is to be sold are called destinations. For solving the transportation problem, the following steps are to be systematically followed:

  1. Obtaining the initial feasible solution, which means identifying the solution that satisfies the requirements of demand and supply. There are several methods through which the initial feasible solution can be obtained; these are:
  • North-West Corner
  • Least Cost Method
  • Vogel’s Approximation Method

Note: It is to be ensured that the number of cells occupied should be equal to m+n-1, where “m” is the number of rows while “n” is the number of columns.

  2. Testing the optimality of the initial feasible solution. Once the feasible solution is obtained, the next step is to check whether it is optimal or not. There are two methods used for testing the optimality:
  • Stepping-stone Method
  • Modified Distribution Method (MODI)

The final step is to revise the solution until the optimum solution is obtained.

The two most common objectives of transportation problem could be:

i) Maximize the profit of transporting “n” units of product to the destination “y”;

ii) Minimize the cost of shipping “n” units of product to the destination “y”.

Vogel’s Approximation Method or VAM

Vogel’s Approximation Method, or VAM, is an iterative procedure used to find an initial feasible solution of the transportation problem. As in the Least Cost Method, the shipping cost is taken into consideration here, but in a relative sense.

The following is the flow chart showing the steps involved in solving the transportation problem using the Vogel’s Approximation Method:

The concept of Vogel’s Approximation Method can be well understood through an illustration given below:

  1. First of all, the difference between the two least-cost cells is calculated for each row and column, as can be seen in the iteration given for each row and column. Then the largest difference is selected, which is 4 in this case. So, allocate 20 units to cell BD, since the minimum-cost cell is to be chosen for the allocation. Now, only 20 units are left with source B.
  2. Column D is deleted. Again, the difference between the least-cost cells is calculated for each row and column, as seen in the iteration below. The largest difference value comes to 3, so allocate 35 units to cell AF and 15 units to cell AE. With this, the supply of source A and the demand of destination F get saturated, so delete both row A and column F.
  3. Now only column E is left. Since no difference can be computed for a single column, allocate 60 units to cell CE and 20 units to cell BE, as only 20 units are left with source B. Hence the demand and supply are completely met.

Now the total cost can be computed by multiplying the units assigned to each cell by the cost concerned. Therefore,

Total Cost = 20*3 + 35*1 + 15*4 + 60*4 + 20*8 = Rs 555
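The VAM procedure can likewise be sketched in Python. The cost matrix below is partly hypothetical: the cells quoted above (BD=3, AE=4, AF=1, CE=4, BE=8) are taken from the example, while AD=7, BF=6 and CD=9 are assumed values chosen to be consistent with the penalties described:

```python
def vogel(cost, supply, demand):
    """Initial feasible solution by Vogel's Approximation Method.
    cost[i][j] is the unit shipping cost from source i to destination j.
    Assumes a balanced problem. Returns {(i, j): units}."""
    supply, demand = list(supply), list(demand)  # copy; we mutate below
    rows, cols = set(range(len(supply))), set(range(len(demand)))
    alloc = {}

    def penalty(costs):
        # Difference between the two lowest costs (0 if only one remains).
        s = sorted(costs)
        return s[1] - s[0] if len(s) > 1 else 0

    while rows and cols:
        row_pen = {i: penalty([cost[i][j] for j in cols]) for i in rows}
        col_pen = {j: penalty([cost[i][j] for i in rows]) for j in cols}
        best_row = max(row_pen, key=row_pen.get)
        best_col = max(col_pen, key=col_pen.get)
        if row_pen[best_row] >= col_pen[best_col]:
            i = best_row
            j = min(cols, key=lambda j: cost[i][j])  # cheapest cell in that row
        else:
            j = best_col
            i = min(rows, key=lambda i: cost[i][j])  # cheapest cell in that column
        x = min(supply[i], demand[j])
        alloc[(i, j)] = x
        supply[i] -= x
        demand[j] -= x
        if supply[i] == 0:
            rows.discard(i)
        if demand[j] == 0:
            cols.discard(j)
    return alloc

# Rows A, B, C; columns D, E, F. AD, BF, CD are assumed values.
cost = [[7, 4, 1],
        [3, 8, 6],
        [9, 4, 4]]
alloc = vogel(cost, [50, 40, 60], [20, 95, 35])
total = sum(u * cost[i][j] for (i, j), u in alloc.items())
print(total)  # 555
```

With these assumed cells the run reproduces the allocation in steps 1-3 (BD=20, AF=35, AE=15, CE=60, BE=20) and the total cost of Rs 555.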

Note: Vogel’s Approximation Method is also called the Penalty Method, because the cost differences chosen are nothing but the penalties of not choosing the least-cost routes.
