Business Mathematics & Statistics Bangalore University B.com 3rd Semester NEP Notes

Unit 1 Commercial Arithmetic [Book]
Percentage VIEW
Cost, Profit and Selling price VIEW
Ratio Proportion VIEW
Problems on Speed and Time VIEW
Interest-Simple interest and Compound interest VIEW
Annuity VIEW

 

Unit 2 Theory of Equations [Book] No Update

 

Unit 3 Matrices and Determinants [Book] No Update

 

Unit 4 Measures of Central Tendency and Dispersion [Book]
Introduction Meaning and Definition, Objectives of measures of Central tendency VIEW
Types of averages: Arithmetic mean (Simple average only) VIEW
Median VIEW
Mode VIEW
Meaning and Objectives of measures of Dispersion VIEW
Standard deviation and coefficient of Variation VIEW
Skewness VIEW VIEW
Problems on Direct method only VIEW

 

Unit 5 Correlation and Regression [Book]
Correlation: Meaning and definition, uses VIEW VIEW
Karl Pearson’s coefficient of correlation (deviation from actual mean only) VIEW
Spearman’s Rank Correlation Coefficient VIEW
Regression Meaning VIEW
Regression Equations, Estimating x and y values VIEW
Finding correlation coefficient with Regression coefficient VIEW VIEW

Normal Distribution: Importance, Central Limit Theorem

Normal distribution, or the Gaussian distribution, is a fundamental probability distribution that describes how data values are distributed symmetrically around a mean. Its graph forms a bell-shaped curve, with most data points clustering near the mean and fewer occurring as they deviate further. The curve is defined by two parameters: the mean (μ) and the standard deviation (σ), which determine its center and spread. Normal distribution is widely used in statistics, natural sciences, and social sciences for analysis and inference.

The general form of its probability density function is:

f(x) = (1 / (σ√(2π))) e^(−(x − μ)² / (2σ²))

The parameter μ is the mean or expectation of the distribution (and also its median and mode), while the parameter σ is its standard deviation. The variance of the distribution is σ^2. A random variable with a Gaussian distribution is said to be normally distributed, and is called a normal deviate.
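These two parameters can be explored with Python's standard library. A minimal sketch using `statistics.NormalDist`; the mean of 100 and standard deviation of 15 are illustrative values (e.g. IQ scores):

```python
from statistics import NormalDist

# A normal distribution with mean mu = 100 and standard deviation sigma = 15
# (illustrative values).
dist = NormalDist(mu=100, sigma=15)

# The density is highest at the mean and falls off symmetrically.
print(dist.pdf(100))                 # peak of the bell curve
print(dist.pdf(85), dist.pdf(115))   # equal, by symmetry about the mean
```

Because the curve is symmetric about μ, points equally far above and below the mean always have the same density.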

Normal distributions are important in statistics and are often used in the natural and social sciences to represent real-valued random variables whose distributions are not known. Their importance is partly due to the central limit theorem. It states that, under some conditions, the average of many samples (observations) of a random variable with finite mean and variance is itself a random variable whose distribution converges to a normal distribution as the number of samples increases. Therefore, physical quantities that are expected to be the sum of many independent processes, such as measurement errors, often have distributions that are nearly normal.

A normal distribution is sometimes informally called a bell curve. However, many other distributions are bell-shaped (such as the Cauchy, Student’s t, and logistic distributions).

Importance of Normal Distribution:

  1. Foundation of Statistical Inference

The normal distribution is central to statistical inference. Many parametric tests, such as t-tests and ANOVA, are based on the assumption that the data follows a normal distribution. This simplifies hypothesis testing, confidence interval estimation, and other analytical procedures.

  2. Real-Life Data Approximation

Many natural phenomena and datasets, such as heights, weights, IQ scores, and measurement errors, tend to follow a normal distribution. This makes it a practical and realistic model for analyzing real-world data, simplifying interpretation and analysis.

  3. Basis for Central Limit Theorem (CLT)

The normal distribution is critical in understanding the Central Limit Theorem, which states that the sampling distribution of the sample mean approaches a normal distribution as the sample size increases, regardless of the population’s actual distribution. This enables statisticians to make predictions and draw conclusions from sample data.

  4. Application in Quality Control

In industries, normal distribution is widely used in quality control and process optimization. Control charts and Six Sigma methodologies assume normality to monitor processes and identify deviations or defects effectively.

  5. Probability Calculations

The normal distribution allows for the easy calculation of probabilities for different scenarios. Its standardized form, the z-score, simplifies these calculations, making it easier to determine how data points relate to the overall distribution.
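As a sketch of such a probability calculation, using `statistics.NormalDist` from the standard library (the exam-score figures are illustrative):

```python
from statistics import NormalDist

# Hypothetical exam scores: mean 60, standard deviation 10.
scores = NormalDist(mu=60, sigma=10)

# The z-score standardizes a raw value: z = (x - mu) / sigma
z = (75 - 60) / 10           # 1.5 standard deviations above the mean

# P(X <= 75), via the cumulative distribution function
p = scores.cdf(75)
print(z, round(p, 4))        # p ≈ 0.9332
```

The same probability can equivalently be read off the standard normal table at z = 1.5, which is exactly what the z-score transformation makes possible.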

  6. Modeling Financial and Economic Data

In finance and economics, normal distribution is used to model returns, risks, and forecasts. Although real-world data often exhibit deviations, normal distribution serves as a baseline for constructing more complex models.

Central limit theorem

In probability theory, the central limit theorem (CLT) establishes that, in many situations, when independent random variables are added, their properly normalized sum tends toward a normal distribution (informally a bell curve) even if the original variables themselves are not normally distributed. The theorem is a key concept in probability theory because it implies that probabilistic and statistical methods that work for normal distributions can be applicable to many problems involving other types of distributions. This theorem has seen many changes during the formal development of probability theory. Previous versions of the theorem date back to 1810, but in its modern general form, this fundamental result in probability theory was precisely stated as late as 1920, thereby serving as a bridge between classical and modern probability theory.
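The theorem can be illustrated numerically: averages of uniform random numbers (whose distribution is flat, not bell-shaped) cluster normally around 0.5. A small sketch, with illustrative sample sizes:

```python
import random
from statistics import mean, stdev

random.seed(42)

# Each observation is uniform on [0, 1] — flat, not bell-shaped.
# The CLT says averages of many such observations are near-normal,
# with mean 0.5 and standard deviation sqrt(1/12) / sqrt(n).
n = 48                       # observations per sample
sample_means = [mean(random.random() for _ in range(n)) for _ in range(2000)]

print(round(mean(sample_means), 3))   # close to 0.5
print(round(stdev(sample_means), 3))  # close to (1/12)**0.5 / 48**0.5 ≈ 0.042
```

A histogram of `sample_means` would trace out a bell curve even though no individual observation is normally distributed.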

Characteristics and Fitting of a Normal Distribution

Poisson Distribution: Importance, Conditions, Constants, Fitting of Poisson Distribution

Poisson distribution is a probability distribution used to model the number of events occurring within a fixed interval of time, space, or other dimensions, given that these events occur independently and at a constant average rate.

Importance

  1. Modeling Rare Events: Used to model the probability of rare events, such as accidents, machine failures, or phone call arrivals.
  2. Applications in Various Fields: Applicable in business, biology, telecommunications, and reliability engineering.
  3. Simplifies Complex Processes: Helps analyze situations with numerous trials and low probability of success per trial.
  4. Foundation for Queuing Theory: Forms the basis for queuing models used in service and manufacturing industries.
  5. Approximation of Binomial Distribution: When the number of trials is large, and the probability of success is small, Poisson distribution approximates the binomial distribution.

Conditions for Poisson Distribution

  1. Independence: Events must occur independently of each other.
  2. Constant Rate: The average rate (λ) of occurrence is constant over time or space.
  3. Non-Simultaneous Events: Two events cannot occur simultaneously within the defined interval.
  4. Fixed Interval: The observation is within a fixed time, space, or other defined intervals.

Constants

  1. Mean (λ): Represents the expected number of events in the interval.
  2. Variance (λ): Equal to the mean, reflecting the distribution’s spread.
  3. Skewness: The distribution is skewed to the right when λ is small and becomes symmetric as λ increases.
  4. Probability Mass Function (PMF): P(X = k) = [e^−λ * λ^k] / k!, where k is the number of occurrences, e is the base of the natural logarithm, and λ is the mean.
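The PMF above translates directly into a small function; the value of λ here is illustrative:

```python
import math

def poisson_pmf(k, lam):
    """P(X = k) = e^(-lam) * lam^k / k!"""
    return math.exp(-lam) * lam**k / math.factorial(k)

lam = 2.0  # an average of 2 events per interval (illustrative)
probs = [poisson_pmf(k, lam) for k in range(10)]
print(round(poisson_pmf(0, lam), 4))  # e^-2 ≈ 0.1353
# The mean and variance of the distribution are both equal to lam.
```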

Fitting of Poisson Distribution

When a Poisson distribution is to be fitted to observed data, the following procedure is adopted: first, compute the mean of the observed frequency distribution; second, take this mean as the estimate of λ, since the mean of a Poisson distribution equals λ; third, compute the probabilities P(X = k) = [e^−λ * λ^k] / k! for k = 0, 1, 2, …; finally, multiply each probability by the total frequency N to obtain the expected frequencies.
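A sketch of this fitting in Python; the defect counts below are hypothetical observed data:

```python
import math

# Hypothetical observed data: counts[k] = number of items showing k defects.
counts = {0: 109, 1: 65, 2: 22, 3: 3, 4: 1}
N = sum(counts.values())

# Step 1-2: estimate lambda by the mean of the observed distribution.
lam = sum(k * f for k, f in counts.items()) / N

# Step 3-4: expected frequency for each k is N * P(X = k).
expected = {k: N * math.exp(-lam) * lam**k / math.factorial(k) for k in counts}
print(round(lam, 3))                            # 0.61 for this data
print({k: round(v, 1) for k, v in expected.items()})
```

Comparing `expected` against `counts` shows how closely the Poisson model reproduces the observed frequencies.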

Binomial Distribution: Importance, Conditions, Constants

The binomial distribution is a probability distribution that summarizes the likelihood that a value will take one of two independent values under a given set of parameters or assumptions. The underlying assumptions of the binomial distribution are that each trial has only two possible outcomes, that each trial has the same probability of success, and that the trials are independent of each other.

In probability theory and statistics, the binomial distribution with parameters n and p is the discrete probability distribution of the number of successes in a sequence of n independent experiments, each asking a yes/no question, and each with its own Boolean-valued outcome: success (with probability p) or failure (with probability q = 1 − p). A single success/failure experiment is also called a Bernoulli trial or Bernoulli experiment, and a sequence of outcomes is called a Bernoulli process; for a single trial, i.e., n = 1, the binomial distribution is a Bernoulli distribution. The binomial distribution is the basis for the popular binomial test of statistical significance.

The binomial distribution is frequently used to model the number of successes in a sample of size n drawn with replacement from a population of size N. If the sampling is carried out without replacement, the draws are not independent and so the resulting distribution is a hypergeometric distribution, not a binomial one. However, for N much larger than n, the binomial distribution remains a good approximation, and is widely used.

The binomial distribution is a common discrete distribution used in statistics, as opposed to a continuous distribution, such as the normal distribution. This is because the binomial distribution only counts two states, typically represented as 1 (for a success) or 0 (for a failure) given a number of trials in the data. The binomial distribution, therefore, represents the probability for x successes in n trials, given a success probability p for each trial.

Binomial distribution summarizes the number of trials, or observations when each trial has the same probability of attaining one particular value. The binomial distribution determines the probability of observing a specified number of successful outcomes in a specified number of trials.

The binomial distribution is often used in social science statistics as a building block for models for dichotomous outcome variables, like whether a Republican or Democrat will win an upcoming election or whether an individual will die within a specified period of time, etc.

Importance

For example, adults with allergies might report relief with medication or not, children with a bacterial infection might respond to antibiotic therapy or not, adults who suffer a myocardial infarction might survive the heart attack or not, a medical device such as a coronary stent might be successfully implanted or not. These are just a few examples of applications or processes in which the outcome of interest has two possible values (i.e., it is dichotomous). The two outcomes are often labeled “success” and “failure” with success indicating the presence of the outcome of interest. Note, however, that for many medical and public health questions the outcome or event of interest is the occurrence of disease, which is obviously not really a success. Nevertheless, this terminology is typically used when discussing the binomial distribution model. As a result, whenever using the binomial distribution, we must clearly specify which outcome is the “success” and which is the “failure”.

The binomial distribution model allows us to compute the probability of observing a specified number of “successes” when the process is repeated a specific number of times (e.g., in a set of patients) and the outcome for a given patient is either a success or a failure. We must first introduce some notation which is necessary for the binomial distribution model.

First, we let “n” denote the number of observations or the number of times the process is repeated, and “x” denotes the number of “successes” or events of interest occurring during “n” observations. The probability of “success” or occurrence of the outcome of interest is indicated by “p”.

The binomial equation also uses factorials. In mathematics, the factorial of a non-negative integer k is denoted by k!, which is the product of all positive integers less than or equal to k. For example,

  • 4! = 4 x 3 x 2 x 1 = 24
  • 2! = 2 x 1 = 2
  • 1! = 1
  • There is one special case: 0! = 1.

Conditions

  • The number of observations n is fixed.
  • Each observation is independent.
  • Each observation represents one of two outcomes (“success” or “failure”).
  • The probability of “success” p is the same for each trial.
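Under these conditions, the probability of exactly x successes is given by the binomial formula C(n, x) · p^x · (1 − p)^(n − x). A minimal sketch with illustrative values:

```python
import math

def binomial_pmf(x, n, p):
    """P(X = x) = C(n, x) * p^x * (1 - p)^(n - x)"""
    return math.comb(n, x) * p**x * (1 - p)**(n - x)

# Illustrative: probability of exactly 3 successes in 10 independent
# trials, each with success probability 0.5.
print(round(binomial_pmf(3, 10, 0.5), 4))  # ≈ 0.1172
```

Summing `binomial_pmf(x, n, p)` over all x from 0 to n gives 1, as it must for any probability distribution.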

Constants

  • Mean: np, the expected number of successes in n trials.
  • Variance: npq, where q = 1 − p; the standard deviation is √(npq).
  • Skewness: the distribution is symmetric when p = 0.5, positively skewed when p < 0.5, and negatively skewed when p > 0.5.

Fitting of Binomial Distribution

Fitting of probability distribution to a series of observed data helps to predict the probability or to forecast the frequency of occurrence of the required variable in a certain desired interval.

To fit any theoretical distribution, one should know its parameters and probability distribution. The parameters of the binomial distribution are n and p. Once p and n are known, binomial probabilities for different random events and the corresponding expected frequencies can be computed. From the given data we can get n by inspection. For the binomial distribution, we know that the mean is equal to np, hence we can estimate p as mean/n. Thus, with these n and p one can fit the binomial distribution.
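A sketch of this fitting procedure, using hypothetical coin-toss frequencies:

```python
import math

# Hypothetical observed data: number of heads in n = 4 tosses, repeated
# over 100 sets; freq[x] = number of sets showing x heads.
freq = {0: 5, 1: 29, 2: 36, 3: 25, 4: 5}
N = sum(freq.values())
n = 4                                    # known from the experiment

# Estimate p from the relation mean = n * p.
mean = sum(x * f for x, f in freq.items()) / N
p = mean / n

# Expected frequencies: N * C(n, x) * p^x * (1 - p)^(n - x)
expected = {x: N * math.comb(n, x) * p**x * (1 - p)**(n - x) for x in freq}
print(round(p, 3))                       # 0.49 for this data
```

The fitted `expected` frequencies can then be compared with `freq` to judge how well the binomial model describes the data.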

There are many probability distributions of which some can be fitted more closely to the observed frequency of the data than others, depending on the characteristics of the variables. Therefore, one needs to select a distribution that suits the data well.

Important Terminologies: Variable, Quantitative Variable, Qualitative Variable, Discrete Variable, Continuous Variable, Dependent Variable, Independent Variable, Frequency, Class Interval, Tally Bar

Important Terminologies:

  • Variable:

Variable is any characteristic, number, or quantity that can be measured or quantified. It can take on different values, which may vary across individuals, objects, or conditions, and is essential in data analysis for observing relationships and patterns.

  • Quantitative Variable:

Quantitative variable is a variable that is measured in numerical terms, such as age, weight, or income. It represents quantities and can be used for mathematical operations, making it suitable for statistical analysis.

  • Qualitative Variable:

Qualitative variable represents categories or attributes, rather than numerical values. Examples include gender, color, or occupation. These variables are non-numeric and are often used in classification and descriptive analysis.

  • Discrete Variable:

Discrete variable is a type of quantitative variable that takes distinct, separate values. These values are countable and cannot take on intermediate values. For example, the number of children in a family is a discrete variable.

  • Continuous Variable:

Continuous variable is a quantitative variable that can take an infinite number of values within a given range. These variables can have decimals or fractions. Examples include height, temperature, or time.

  • Dependent Variable:

Dependent variable is the outcome or response variable that is being measured in an experiment or study. Its value depends on the changes in one or more independent variables. It is the variable of interest in hypothesis testing.

  • Independent Variable:

An independent variable is the variable that is manipulated or controlled in an experiment. It is used to observe its effect on the dependent variable. For example, in a study on plant growth, the amount of water given would be the independent variable.

  • Frequency:

Frequency refers to the number of times a particular value or category occurs in a dataset. It is used in statistical analysis to summarize the distribution of data points within various categories or intervals.

  • Class Interval:

A class interval is a range of values within which data points fall in grouped data. It is commonly used in frequency distributions to organize data into specific ranges, such as “0-10,” “11-20,” etc.

  • Tally Bar:

A tally bar is a method of recording data frequency by using vertical lines. Every group of five tallies (four vertical lines and a fifth diagonal line) represents five occurrences, helping to visually track counts in surveys or experiments.

Important Terminologies in Statistics: Data, Raw Data, Primary Data, Secondary Data, Population, Census, Survey, Sample Survey, Sampling, Parameter, Unit, Variable, Attribute, Frequency, Seriation, Individual, Discrete and Continuous

Statistics is the branch of mathematics that involves the collection, analysis, interpretation, presentation, and organization of data. It helps in drawing conclusions and making decisions based on data patterns, trends, and relationships. Statistics uses various methods such as probability theory, sampling, and hypothesis testing to summarize data and make predictions. It is widely applied across fields like economics, medicine, social sciences, business, and engineering to inform decisions and solve real-world problems.

1. Data

Data is information collected for analysis, interpretation, and decision-making. It can be qualitative (descriptive, such as color or opinions) or quantitative (numerical, such as age or income). Data serves as the foundation for statistical studies, enabling insights into patterns, trends, and relationships.

2. Raw Data

Raw data refers to unprocessed or unorganized information collected from observations or experiments. It is the initial form of data, often messy and requiring cleaning or sorting for meaningful analysis. Examples include survey responses or experimental results.

3. Primary Data

Primary data is original information collected directly by a researcher for a specific purpose. It is firsthand and authentic, obtained through methods like surveys, experiments, or interviews. Primary data ensures accuracy and relevance to the study but can be time-consuming to collect.

4. Secondary Data

Secondary data is pre-collected information used by researchers for analysis. It includes published reports, government statistics, and historical data. Secondary data saves time and resources but may lack relevance or accuracy for specific studies compared to primary data.

5. Population

A population is the entire group of individuals, items, or events that share a common characteristic and are the subject of a study. It includes every possible observation or unit, such as all students in a school or citizens in a country.

6. Census

A census involves collecting data from every individual or unit in a population. It provides comprehensive and accurate information but requires significant resources and time. Examples include national population censuses conducted by governments.

7. Survey

A survey gathers information from respondents using structured tools like questionnaires or interviews. It helps collect opinions, behaviors, or characteristics. Surveys are versatile and widely used in research, marketing, and public policy analysis.

8. Sample Survey

A sample survey collects data from a representative subset of the population. It saves time and costs while providing insights that can generalize to the entire population, provided the sampling method is unbiased and rigorous.

9. Sampling

Sampling is the process of selecting a portion of the population for study. It ensures efficiency and feasibility in data collection. Sampling methods include random, stratified, and cluster sampling, each suited to different study designs.

10. Parameter

A parameter is a measurable characteristic that describes a population, such as the mean, median, or standard deviation. Unlike a statistic, which pertains to a sample, a parameter is specific to the entire population.

11. Unit

A unit is an individual entity in a population or sample being studied. It can represent a person, object, transaction, or observation. Each unit contributes to the dataset, forming the basis for analysis.

12. Variable

A variable is a characteristic or property that can change among individuals or items. It can be quantitative (e.g., age, weight) or qualitative (e.g., color, gender). Variables are the focus of statistical analysis to study relationships and trends.

13. Attribute

An attribute is a qualitative feature that describes a characteristic of a unit. Attributes are non-measurable but observable, such as eye color, marital status, or type of vehicle.

14. Frequency

Frequency represents how often a specific value or category appears in a dataset. It is key in descriptive statistics, helping to summarize and visualize data patterns through tables, histograms, or frequency distributions.

15. Seriation

Seriation is the arrangement of data in sequential or logical order, such as ascending or descending by size, date, or importance. It aids in identifying patterns and organizing datasets for analysis.

16. Individual

An individual is a single member or unit of the population or sample being analyzed. It is the smallest element for data collection and analysis, such as a person in a demographic study or a product in a sales dataset.

17. Discrete Variable

A discrete variable takes specific, separate values, often integers. It is countable and cannot assume fractional values, such as the number of employees in a company or defective items in a batch.

18. Continuous Variable

A continuous variable can take any value within a range and represents measurable quantities. Examples include temperature, height, and time. Continuous variables are essential for analyzing trends and relationships in datasets.

Prerequisites of Good Classification of Data

Good classification of data is essential for organizing, analyzing, and interpreting the data effectively. Proper classification helps in understanding the structure and relationships within the data, enabling informed decision-making.

1. Clear Objective

Good classification should have a clear objective, ensuring that the classification scheme serves a specific purpose. It should be aligned with the goal of the study, whether it’s identifying trends, comparing categories, or finding patterns in the data. This helps in determining which variables or categories should be included and how they should be grouped.

2. Homogeneity within Classes

Each class or category within the classification should contain items or data points that are similar to each other. This homogeneity within the classes allows for better analysis and comparison. For example, when classifying people by age, individuals within a particular age group should share certain characteristics related to that age range, ensuring that each class is internally consistent.

3. Heterogeneity between Classes

While homogeneity is crucial within classes, there should be noticeable differences between the various classes. A good classification scheme should maximize the differences between categories, ensuring that each group represents a distinct set of data. This helps in making meaningful distinctions and drawing useful comparisons between groups.

4. Exhaustiveness

Good classification system must be exhaustive, meaning that it should cover all possible data points in the dataset. There should be no omission, and every item must fit into one and only one class. Exhaustiveness ensures that the classification scheme provides a complete understanding of the dataset without leaving any data unclassified.

5. Mutually Exclusive

Classes should be mutually exclusive, meaning that each data point can belong to only one class. This avoids ambiguity and ensures clarity in analysis. For example, if individuals are classified by age group, someone who is 25 years old should only belong to one age class (such as 20-30 years), preventing overlap and confusion.

6. Simplicity

Good classification should be simple and easy to understand. The classification categories should be well-defined and not overly complicated. Simplicity ensures that the classification scheme is accessible and can be easily used for analysis by various stakeholders, from researchers to policymakers. Overly complex classification schemes may lead to confusion and errors.

7. Flexibility

Good classification system should be flexible enough to accommodate new data or changing circumstances. As new categories or data points emerge, the classification scheme should be adaptable without requiring a complete overhaul. Flexibility allows the classification to remain relevant and useful over time, particularly in dynamic fields like business or technology.

8. Consistency

Consistency in classification is essential for maintaining reliability in data analysis. A good classification system ensures that the same criteria are applied uniformly across all classes. For example, if geographical regions are being classified, the same boundaries and criteria should be consistently applied to avoid confusion or inconsistency in reporting.

9. Appropriateness

Good classification should be appropriate for the type of data being analyzed. The classification scheme should fit the nature of the data and the specific objectives of the analysis. Whether classifying data by geographical location, age, or income, the scheme should be meaningful and suited to the research question, ensuring that it provides valuable insights.

Quantitative and Qualitative Classification of Data

Data refers to raw, unprocessed facts and figures that are collected for analysis and interpretation. It can be qualitative (descriptive, like colors or opinions) or quantitative (numerical, like age or sales figures). Data is the foundation of statistics and research, providing the basis for drawing conclusions, making decisions, and discovering patterns or trends. It can come from various sources such as surveys, experiments, or observations. Proper organization and analysis of data are crucial for extracting meaningful insights and informing decisions across various fields.

Quantitative Classification of Data:

Quantitative classification of data involves grouping data based on numerical values or measurable quantities. It is used to organize continuous or discrete data into distinct classes or intervals to facilitate analysis. The data can be categorized using methods such as frequency distributions, where values are grouped into ranges (e.g., 0-10, 11-20) or by specific numerical characteristics like age, income, or height. This classification helps in summarizing large datasets, identifying patterns, and conducting statistical analysis such as finding the mean, median, or mode. It enables clearer insights and easier comparisons of quantitative data across different categories.

Features of Quantitative Classification of Data:

  • Based on Numerical Data

Quantitative classification specifically deals with numerical data, such as measurements, counts, or any variable that can be expressed in numbers. Unlike qualitative data, which deals with categories or attributes, quantitative classification groups data based on values like height, weight, income, or age. This classification method is useful for data that can be measured and involves identifying patterns in numerical values across different ranges.

  • Division into Classes or Intervals

In quantitative classification, data is often grouped into classes or intervals to make analysis easier. These intervals help in summarizing a large set of data and enable quick comparisons. For example, when classifying income levels, data can be grouped into intervals such as “0-10,000,” “10,001-20,000,” etc. The goal is to reduce the complexity of individual data points by organizing them into manageable segments, making it easier to observe trends and patterns.

  • Class Limits

Each class in a quantitative classification has defined class limits, which represent the range of values that belong to that class. For example, in the case of age, a class may be defined with the limits 20-30, where the class includes all data points between 20 and 30 (inclusive). The lower and upper limits are crucial for ensuring that data is classified consistently and correctly into appropriate ranges.

  • Frequency Distribution

Frequency distribution is a key feature of quantitative classification. It refers to how often each class or interval appears in a dataset. By organizing data into classes and counting the number of occurrences in each class, frequency distributions provide insights into the spread of the data. This helps in identifying which ranges or intervals contain the highest concentration of values, allowing for more targeted analysis.

  • Continuous and Discrete Data

Quantitative classification can be applied to both continuous and discrete data. Continuous data, like height or temperature, can take any value within a range and is often classified into intervals. Discrete data, such as the number of people in a group or items sold, involves distinct, countable values. Both types of quantitative data are classified differently, but the underlying principle of grouping into classes remains the same.

  • Use of Central Tendency Measures

Quantitative classification often involves calculating measures of central tendency, such as the mean, median, and mode, for each class or interval. These measures provide insights into the typical or average values within each class. For example, by calculating the average income within specific income brackets, researchers can better understand the distribution of income across the population.

  • Graphical Representation

Quantitative classification is often complemented by graphical tools such as histograms, bar charts, and frequency polygons. These visual representations provide a clear view of how data is distributed across different classes or intervals, making it easier to detect trends, outliers, and patterns. Graphs also help in comparing the frequencies of different intervals, enhancing the understanding of the dataset.
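The grouping of raw values into class intervals and frequency counts described above can be sketched as follows (the marks and interval width are illustrative):

```python
# A minimal sketch of grouping raw marks into class intervals of width 10.
marks = [12, 45, 37, 8, 59, 41, 23, 33, 47, 55, 18, 29, 38, 44, 52]

width = 10
freq = {}
for m in marks:
    lower = (m // width) * width         # e.g. 37 falls in the 30-39 class
    label = f"{lower}-{lower + width - 1}"
    freq[label] = freq.get(label, 0) + 1

# Print the frequency distribution in ascending order of class limits.
for label in sorted(freq, key=lambda s: int(s.split("-")[0])):
    print(label, freq[label])
```

Each mark falls into exactly one interval (mutual exclusivity), and every mark is counted (exhaustiveness), so the frequencies sum to the number of observations.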

Qualitative Classification of Data:

Qualitative classification of data involves grouping data based on non-numerical characteristics or attributes. This classification is used for categorical data, where the values represent categories or qualities rather than measurable quantities. Examples include classifying individuals by gender, occupation, marital status, or color. The data is typically organized into distinct groups or classes without any inherent order or ranking. Qualitative classification allows researchers to analyze patterns, relationships, and distributions within different categories, making it easier to draw comparisons and identify trends. It is often used in fields such as social sciences, marketing, and psychology for descriptive analysis.

Features of Qualitative Classification of Data:

  • Based on Categories or Attributes

Qualitative classification deals with data that is based on categories or attributes, such as gender, occupation, religion, or color. Unlike quantitative data, which is measured in numerical values, qualitative data involves sorting or grouping items into distinct categories based on shared qualities or characteristics. This type of classification is essential for analyzing data that does not have a numerical relationship.

  • No Specific Order or Ranking

In qualitative classification, the categories do not have a specific order or ranking. For instance, when classifying individuals by their profession (e.g., teacher, doctor, engineer), the categories do not imply any hierarchy or ranking order. The lack of a natural sequence or order distinguishes qualitative classification from ordinal data, which involves categories with inherent ranking (e.g., low, medium, high). The focus is on grouping items based on their similarity in attributes.

  • Mutual Exclusivity

Each data point in qualitative classification must belong to one and only one category, ensuring mutual exclusivity. For example, an individual cannot simultaneously belong to both “Male” and “Female” categories in a gender classification scheme. This feature helps to avoid overlap and ambiguity in the classification process. Ensuring mutual exclusivity is crucial for clear analysis and accurate data interpretation.

  • Exhaustiveness

Qualitative classification should be exhaustive, meaning that all possible categories are covered. Every data point should fit into one of the predefined categories. For instance, if classifying by marital status, categories like “Single,” “Married,” “Divorced,” and “Widowed” must encompass all possible marital statuses within the dataset. Exhaustiveness ensures no data is left unclassified, making the analysis complete and comprehensive.

  • Simplicity and Clarity

A good qualitative classification should be simple, clear, and easy to understand. The categories should be well-defined, and the criteria for grouping data should be straightforward. Complexity and ambiguity in categorization can lead to confusion, misinterpretation, or errors in analysis. Simple and clear classification schemes make the data more accessible and improve the quality of research and reporting.

  • Flexibility

Qualitative classification is flexible and can be adapted as new categories or attributes emerge. For example, in a study of professions, new job titles or fields may develop over time, and the classification system can be updated to include these new categories. Flexibility in qualitative classification allows researchers to keep the data relevant and reflective of changes in society, industry, or other fields of interest.

  • Focus on Descriptive Analysis

Qualitative classification primarily focuses on descriptive analysis, which involves summarizing and organizing data into meaningful categories. It is used to explore patterns and relationships within the data, often through qualitative techniques such as thematic analysis or content analysis. The goal is to gain insights into the characteristics or behaviors of individuals, groups, or phenomena rather than making quantitative comparisons.

Calculation of Interest

Calculating an interest rate is not difficult to understand. Knowing how to calculate it helps you compare loans and save money when making investment decisions. There is a simple formula for the simple interest rate: if you know the loan amount and the interest you pay, you can work out the rate yourself.

Using the simple interest calculation formula, you can also see your interest payments in a year and calculate your annual percentage rate.

Here is the step by step guide to calculate the interest rate.

How to calculate interest rate?

Know the formula which can help you to calculate your interest rate.

Step 1

To calculate your interest rate, you need to know the interest formula r = I/(Pt). Here,

I = Interest amount paid in a specific time period (month, year etc.)

P = Principal amount (the money before interest)

t = Time period involved

r = Interest rate in decimal

You should remember this equation to calculate your basic interest rate.

Step 2

Once you substitute all the required values, you will get the interest rate as a decimal. To express it as a percentage, multiply by 100. For example, a decimal like 0.11 is not very informative on its own, so multiply 0.11 by 100 (0.11 x 100).

In this case, your interest rate will be (0.11 x 100 = 11) 11%.

Step 3

Apart from this, you can also calculate your time period involved, principal amount and interest amount paid in a specific time period if you have other inputs available with you.

Calculate interest amount paid in a specific time period, I = Prt.

Calculate the principal amount, P = I/rt.

Calculate time period involved t = I/Pr.
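The rearrangements above can be sketched as small helper functions. This is an illustrative sketch (the function names and sample figures are my own, not from the text); rates are handled as decimals and converted to percentages by multiplying by 100, as described in Step 2.

```python
# Simple interest relations: I = P*r*t, and its rearrangements.

def interest(P, r, t):
    """Interest paid over the period: I = P * r * t (r as a decimal)."""
    return P * r * t

def rate(I, P, t):
    """Interest rate as a decimal: r = I / (P * t)."""
    return I / (P * t)

def principal(I, r, t):
    """Principal amount: P = I / (r * t)."""
    return I / (r * t)

def time_period(I, P, r):
    """Time period involved: t = I / (P * r)."""
    return I / (P * r)

# Illustrative figures: Rs 1,100 interest on Rs 10,000 over 1 year.
r = rate(I=1100, P=10000, t=1)        # 0.11 as a decimal
print(f"Interest rate: {r * 100:.0f}%")  # multiply by 100 for a percentage
```

As Step 4 notes, the same `t` must be expressed in the same unit (months, years) as the rate you want.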

Step 4

Most importantly, you have to make sure that your time period and interest rate are following the same parameter.

For example, suppose you want to find the monthly interest rate on a loan after one year. If you put t = 1, you will get the interest rate per year. For a monthly rate, you have to use the actual number of periods elapsed; here, the time period is 12 months.

Remember, your time period must match the interval at which interest is paid. For example, if you are calculating a year's worth of monthly interest payments, you have made 12 payments.

Also, you have to make sure that you check the time period (weekly, monthly, yearly etc.) when your interest is calculated with your bank.

Step 5

You can rely on online calculators to get interest rates for complex loans, such as mortgages. You should also know the interest rate of your loan when you sign up for it.

For fluctuating rates, sometimes it becomes difficult to determine what a certain rate means. So, it is better to use free online calculators by searching “variable APR interest calculator”, “mortgage interest calculator” etc.

Calculation of interest when rate of interest and cash price is given

  • Where Cash Price, Interest Rate and Instalment are Given:

Illustration:

On 1st January 2003, A bought a television from a seller under the Hire Purchase System, the cash price being Rs 10,450, on the following terms:

(a) Rs 3,000 to be paid on signing the agreement.

(b) Balance to be paid in three equal installments of Rs 3,000 at the end of each year.

(c) The rate of interest charged by the seller is 10% per annum.

You are required to calculate the interest paid by the buyer to the seller each year.

Solution:

Note:

  1. There is no time gap between the signing of the agreement and the cash down payment of Rs 3,000 (1.1.2003), so no interest is calculated on it; the entire amount goes to reduce the cash price.
  2. The interest in the last installment is taken at the differential figure of Rs 285.50 (3,000 – 2,714.50).
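The year-by-year working for this illustration can be sketched as follows, using the figures from the problem (cash price Rs 10,450, down payment Rs 3,000, three year-end installments of Rs 3,000, interest at 10% p.a.); the loop structure itself is my own illustration:

```python
cash_price, down_payment, installment, r = 10450, 3000, 3000, 0.10

# The down payment carries no interest; it goes entirely to reduce the cash price.
outstanding = cash_price - down_payment   # Rs 7,450

yearly_interest = []
for year in (1, 2, 3):
    if year < 3:
        interest = round(outstanding * r, 2)
    else:
        # Last installment: interest is the differential figure (Note 2).
        interest = round(installment - outstanding, 2)
    yearly_interest.append(interest)
    principal_paid = installment - interest
    print(f"Year {year}: interest Rs {interest}, cash price reduced by Rs {principal_paid}")
    outstanding = round(outstanding - principal_paid, 2)
```

This reproduces the note's figures: Rs 745, Rs 519.50, and the differential Rs 285.50 in the final year.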

(2) Where Cash Price and Installments are Given but Rate of Interest is Omitted:

Where the rate of interest is not given and only the cash price and the total payments under hire purchase installments are given, then the total interest paid is the difference between the cash price of the asset and the total amount paid as per the agreement. This interest amount is apportioned in the ratio of amount outstanding at the end of each period.

Illustration:

Mr. A bought a machine under hire purchase agreement, the cash price of the machine being Rs 18,000. As per the terms, the buyer has to pay Rs 4,000 on signing the agreement and the balance in four installments of Rs 4,000 each, payable at the end of each year. Calculate the interest chargeable at the end of each year.
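A sketch of the apportionment method for this illustration: total interest is the gap between total payments and cash price, split in the ratio of installments outstanding at the start of each year (4 : 3 : 2 : 1 here). The figures come from the problem; the weighting scheme is the standard one described above.

```python
cash_price = 18000
payments = [4000, 4000, 4000, 4000, 4000]   # down payment + four installments
total_interest = sum(payments) - cash_price  # Rs 2,000

n = 4                                # number of year-end installments
weights = list(range(n, 0, -1))      # 4 : 3 : 2 : 1 (amounts outstanding)

yearly_interest = []
for year, w in enumerate(weights, start=1):
    interest = total_interest * w / sum(weights)
    yearly_interest.append(interest)
    print(f"Year {year}: interest Rs {interest:.0f}")
```

The Rs 2,000 of total interest is thus split as Rs 800, 600, 400 and 200 over the four years.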

(3) Where installments and Rate of Interest are Given but Cash Value of the Asset is Omitted:

In certain problems, the cash price is not given. We must first find the cash price and the interest included in each installment, since the asset account is to be debited with the actual price of the asset. In the absence of the cash price, the interest is calculated starting from the last year.

It may be noted that the amount of interest increases as we move backwards, from the 3rd year to the 2nd year and from the 2nd year to the 1st. Since the interest is included in the installments, knowing the rate of interest lets us find the cash price.

Thus:

Let the cash price outstanding be: Rs 100

Interest @ 10% on Rs 100 for a year: Rs 10

Installment paid at the end of the year: Rs 110

The interest on installment price = 10/110 or 1/11 as a ratio.

Illustration:

I buy a television on Hire Purchase System.

The terms of payment are as follows:

Rs 2,000 to be paid on signing the agreement;

Rs 2,800 at the end of the first year;

Rs 2,600 at the end of the second year;

Rs 2,400 at the end of the third year;

Rs 2,200 at the end of the fourth year.

If interest is charged at the rate of 10% p.a., what was the cash value of the television?

Solution:
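Working backwards from the last installment, each year-end payment equals the cash price outstanding for that year plus 10% interest, so the outstanding balance before each payment is (later balance + installment) / 1.10. A sketch with the illustration's figures:

```python
installments = [2800, 2600, 2400, 2200]  # year-end payments, years 1 to 4
down_payment = 2000
r = 0.10

# Start from the last year and discount each installment back one year.
outstanding = 0.0
for amount in reversed(installments):
    outstanding = (outstanding + amount) / (1 + r)

cash_price = down_payment + outstanding
print(f"Cash value of the television: Rs {cash_price:.0f}")  # Rs 10,000
```

The intermediate balances are Rs 2,000, 4,000, 6,000 and 8,000; adding the Rs 2,000 down payment gives a cash value of Rs 10,000.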

(4) Calculation of Cash Price when Reference to Annuity Table, the Rate of Interest and Installments are Given:

Sometimes the problem refers to an annuity table giving the present value of an annuity of Re 1 for a number of years at a certain rate of interest. In such cases the cash price is calculated by multiplying the installment amount by the present-value factor, and adding the product to the initial payment, if any.

Illustration:

A agrees to purchase a machine from a seller under Hire Purchase System by annual installment of Rs 10,000 over a period of 5 years. The seller charges interest at 4% p.a. on yearly balance.

N.B. The present value of Re 1 p.a. for five years at 4% is Rs 4.4518. Find out the cash price of the machine.

Solution:

For an installment of Re 1, the present value = Rs 4.4518

For an installment of Rs 10,000, the present value = Rs 4.4518 x 10,000 = Rs 44,518
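The quoted factor of Rs 4.4518 need not be taken on faith; it can be derived from the standard annuity present-value formula, a(n, i) = (1 − (1 + i)^−n) / i. A sketch checking the illustration's figures:

```python
i, n, installment = 0.04, 5, 10000

# Present value of an annuity of Re 1 p.a. for n years at rate i.
factor = (1 - (1 + i) ** -n) / i
print(f"Annuity factor: {factor:.4f}")                 # 4.4518

cash_price = installment * factor
print(f"Cash price of the machine: Rs {cash_price:,.0f}")  # approx. Rs 44,518
```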

Determinants of the Value of Bonds

Bonds are fixed-income securities that represent a loan from an investor to a borrower, typically a corporation or government. When purchasing a bond, the investor lends money in exchange for periodic interest payments and the return of the bond’s face value at maturity. Bonds are used to finance various projects and operations, providing a predictable income stream for investors.

Valuation of Bonds

The method for valuation of bonds involves three steps as follows:

Step 1: Estimate the expected cash flows

Step 2: Determine the appropriate interest rate that should be used to discount the cash flows.

Step 3: Calculate the present value of the expected cash flows (Step 1) using the appropriate interest rate (Step 2), i.e., discount the expected cash flows.

Step 1: Estimating cash flows

Cash flow is the cash that is estimated to be received in future from investment in a bond. There are only two types of cash flows that can be received from investment in bonds i.e. coupon payments and principal payment at maturity.

In the usual cash flow cycle of a bond, coupon payments are received at regular intervals as per the bond agreement, and the final coupon plus the principal payment is received at maturity. Some bonds do not follow this regular pattern. Unusual patterns may result from a different type of bond, such as a zero-coupon bond, which makes no coupon payments. Considering such factors, it is important for an analyst to estimate the cash flows accurately for the purpose of bond valuation.

Step 2: Determine the appropriate interest rate to discount the cash flows

Once the cash flows for the bond are estimated, the next step is to determine the appropriate interest rate to discount them. The minimum interest rate an investor should require is the rate available in the marketplace on default-free cash flows. Default-free cash flows come from debt securities that are completely safe, with zero chance of default; such securities are usually issued by a country's government, for example U.S. Treasury securities in the USA.

Consider a situation where an investor wants to invest in bonds. If he is considering investing in corporate bonds, he expects to earn a higher return than the rate on U.S. Treasury securities. This is because a corporate bond might default, whereas a U.S. Treasury security will not. As he takes on higher risk by investing in corporate bonds, he expects a higher return.

One may use single interest rate or multiple interest rates for valuation.

Step 3: Discounting the expected cash flows

Now that we already have values of expected future cash flows and interest rate used to discount the cash flow, it is time to find the present value of cash flows. Present Value of a cash flow is the amount of money that must be invested today to generate a specific future value. The present value of a cash flow is more commonly known as discounted value.

The present value of a cash flow depends on two determinants:

  • When the cash flow will be received, i.e., the timing of the cash flow; and
  • The required interest rate, more widely known as the Discount Rate (the rate from Step 2)

First, we calculate the present value of each expected cash flow. Then we add all the individual present values and the resultant sum is the value of the bond.

The formula to find the present value of one cash flow is:

Present value formula for Bond Valuation

Present Value n = Expected cash flow in period n / (1 + i)^n

Here,

i = rate of return/discount rate on bond
n = expected time to receive the cash flow

By this formula, we get the present value of each individual cash flow n periods from now. The next step is to add up all the individual present values.

Bond Value = Present Value 1 + Present Value 2 + ……. + Present Value n
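The three steps can be sketched together for a plain coupon bond. The figures below (Rs 1,000 face value, 8% annual coupon, 3 years to maturity, 10% discount rate) are illustrative assumptions, not from the text:

```python
def bond_value(face, coupon_rate, years, discount_rate):
    """Bond value = sum of PV_n, where PV_n = cash flow in period n / (1 + i)**n."""
    coupon = face * coupon_rate  # Step 1: the expected coupon cash flow each year
    # Step 3: discount each coupon at the required rate (Step 2) and sum.
    value = sum(coupon / (1 + discount_rate) ** n for n in range(1, years + 1))
    # Final cash flow: principal repaid at maturity.
    value += face / (1 + discount_rate) ** years
    return value

print(f"Bond value: Rs {bond_value(1000, 0.08, 3, 0.10):.2f}")
```

Because the 10% discount rate exceeds the 8% coupon rate, the value comes out below the Rs 1,000 face value, i.e., the bond trades at a discount.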
