Knowledge Base Systems

A knowledge-based system (KBS) is a computer system that generates and uses knowledge drawn from different sources of data and information. These systems apply artificial intelligence concepts to solve problems, especially complex ones, and to support human learning, decision making, and action.

Knowledge-based systems are considered to be a major branch of artificial intelligence. They are capable of making decisions based on the knowledge residing in them, and can understand the context of the data that is being processed.

Knowledge-based systems broadly consist of an inference engine and a knowledge base. The inference engine acts as the search engine, and the knowledge base acts as the knowledge repository. Learning is an essential component of knowledge-based systems, and simulating learning helps improve them. Knowledge-based systems can be broadly classified as case-based systems, intelligent tutoring systems, expert systems, hypertext manipulation systems, and databases with intelligent user interfaces.

Compared to traditional computer-based information systems, knowledge-based systems have many advantages. They provide efficient documentation and handle large amounts of unstructured data intelligently. They aid expert decision making, allow users to work at a higher level of expertise, and promote productivity and consistency. These systems are especially useful when expertise is unavailable, or when data must be stored for future use or combined with expertise from different sources on a common platform, thus providing large-scale integration of knowledge. Finally, knowledge-based systems can create new knowledge by referring to the stored content.

The limitations of knowledge-based systems include the abstract nature of the knowledge involved, the difficulty of acquiring and manipulating large volumes of information, and the limits of current cognitive and other scientific techniques.

The Evolution of Knowledge Bases

The term knowledge base was first introduced in the 1970s to distinguish such systems from databases. At that time, databases stored flat, transactional, long-lived data in computer code. By contrast, early knowledge bases aimed to provide structured knowledge that people could easily understand. This type of structured, codified information is also called an object model or ontological knowledge.

The advent of the internet changed knowledge bases considerably. It was no longer sufficient to store tables, small objects, and other straightforward, computer-coded data in computer memory. Instead, the demand for hypertext and multimedia increased, which led to more complex knowledge bases (e.g., web content management).

Today, knowledge-based systems use knowledge bases; they are computer systems that aim to bridge the gaps between the disparate types of knowledge (and file types) that people want to access.

Knowledge-Based Systems and Artificial Intelligence

While these systems are a subcategory of artificial intelligence, traditional knowledge-based systems differ in certain ways from AI at large. AI is often organized top-down, as a "know everything" system that applies statistical pattern detection, big data, deep learning, and data mining. Examples of AI include approaches that involve neural network systems, a category of deep learning technology focused on pattern recognition and signal processing.


The architecture of a knowledge-based system comprises an inference engine and a knowledge base. The knowledge base holds a collection of data, and the inference engine deduces insights from the data stored there.


Over the years, knowledge-based systems have been developed for a number of applications. MYCIN, for example, was an early knowledge-based system created to help doctors diagnose diseases. Healthcare has remained an important market for knowledge-based systems, which are now referred to as clinical decision-support systems in the health sciences context.

Knowledge-based systems have also been employed in applications as diverse as avalanche path analysis, industrial equipment fault diagnosis and cash management.

Expert Systems

Expert systems are computer applications developed to solve complex problems in a particular domain, at a level of intelligence and expertise comparable to that of a human expert.

Characteristics of Expert Systems

  • High performance
  • Understandable
  • Reliable
  • Highly responsive

Capabilities of Expert Systems

  • Advising
  • Instructing and assisting humans in decision making
  • Demonstrating
  • Deriving a solution
  • Diagnosing
  • Explaining
  • Interpreting input
  • Predicting results
  • Justifying the conclusion
  • Suggesting alternative options to a problem

They are incapable of

  • Substituting human decision makers
  • Possessing human capabilities
  • Producing accurate output from an inadequate knowledge base
  • Refining their own knowledge

Components of Expert Systems

  • Knowledge Base
  • Inference Engine
  • User Interface

(i) Knowledge Base

It contains domain-specific, high-quality knowledge. Knowledge is required to exhibit intelligence. The success of any ES depends largely on the collection of highly accurate and precise knowledge.

Knowledge

Data is a collection of facts. Information is data organized as facts about the task domain. Data, information, and past experience combined together are termed knowledge.

Components of Knowledge Base

The knowledge base of an ES is a store of both factual and heuristic knowledge.

  • Factual Knowledge: Information widely accepted by knowledge engineers and scholars in the task domain.
  • Heuristic Knowledge: Knowledge of practice, sound judgement, the ability to evaluate, and educated guessing.

Knowledge representation

Knowledge representation is the method used to organize and formalize the knowledge in the knowledge base, typically in the form of IF-THEN-ELSE rules.
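To make this concrete, production rules can be encoded as plain data. Below is a minimal sketch in Python; the rule conditions, conclusions, and facts are invented purely for illustration, not taken from any real expert system. The inference strategies described later in this section operate on exactly this kind of rule set.

```python
# Each rule pairs a set of IF-conditions with a THEN-conclusion.
# These rules and facts are hypothetical, purely for illustration.
RULES = [
    ({"fever", "rash"}, "measles_suspected"),
    ({"measles_suspected", "not_vaccinated"}, "refer_to_specialist"),
]

# Facts supplied by the user: the system's working memory.
FACTS = {"fever", "rash", "not_vaccinated"}
```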

Knowledge Acquisition

The success of any expert system majorly depends on the quality, completeness, and accuracy of the information stored in the knowledge base.

The knowledge base is built from the knowledge of various experts, scholars, and knowledge engineers. The knowledge engineer is a person with the qualities of empathy, quick learning, and case-analysis skills.

The knowledge engineer acquires information from the subject expert by recording, interviewing, and observing the expert at work. The engineer then categorizes and organizes the information meaningfully, in the form of IF-THEN-ELSE rules, for use by the inference engine. The knowledge engineer also monitors the development of the ES.

(ii) Inference Engine

Use of efficient procedures and rules by the inference engine is essential to deducing a correct, flawless solution.

In a knowledge-based ES, the inference engine acquires and manipulates knowledge from the knowledge base to arrive at a particular solution.

In a rule-based ES, the inference engine:

  • Applies rules repeatedly to the facts, including facts obtained from earlier rule applications.
  • Adds new knowledge to the knowledge base if required.
  • Resolves rule conflicts when multiple rules are applicable to a particular case.

To recommend a solution, the Inference Engine uses the following strategies:

  • Forward Chaining
  • Backward Chaining
  1. Forward Chaining

It is a strategy of an expert system to answer the question, “What can happen next?”

Here, the inference engine follows the chain of conditions and derivations and finally deduces the outcome. It considers all the facts and rules and sorts them before arriving at a solution.

This strategy is used when working towards a conclusion, result, or effect. For example, predicting the status of the share market as an effect of changes in interest rates.
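A minimal forward-chaining loop in Python, continuing the hypothetical RULES and FACTS sketch from the knowledge-representation section above:

```python
def forward_chain(facts, rules):
    """Apply rules to the known facts repeatedly, adding each conclusion
    as a new fact, until no rule produces anything new."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)  # new knowledge enters working memory
                changed = True
    return derived

# With the earlier facts, this derives "measles_suspected" first and then
# "refer_to_specialist": data-driven reasoning towards an outcome.
```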

  2. Backward Chaining

With this strategy, an expert system finds the answer to the question, "Why did this happen?"

On the basis of what has already happened, the inference engine tries to find out which conditions could have held in the past to produce this result. This strategy is used to find a cause or reason. For example, diagnosing blood cancer in humans.
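The same rule set can also be queried goal-first. A minimal backward-chaining sketch, again reusing the hypothetical RULES and FACTS defined earlier (no cycle detection, so it assumes an acyclic rule set):

```python
def backward_chain(goal, facts, rules):
    """A goal holds if it is a known fact, or if some rule concludes it
    and every one of that rule's conditions can itself be proven."""
    if goal in facts:
        return True
    return any(
        conclusion == goal
        and all(backward_chain(c, facts, rules) for c in conditions)
        for conditions, conclusion in rules
    )

# backward_chain("refer_to_specialist", FACTS, RULES) -> True
# The engine works backwards from the result to the conditions behind it.
```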

(iii) User Interface

The user interface provides interaction between the user of the ES and the ES itself. It generally uses natural language processing, so it can be used by someone well-versed in the task domain; the user of the ES need not be an expert in artificial intelligence.

It explains how the ES has arrived at a particular recommendation. The explanation may appear in the following forms:

  • Natural language displayed on screen.
  • Verbal narration in natural language.
  • Listing of rule numbers displayed on the screen.

The user interface makes it easy to trace the credibility of the deductions.

Requirements of Efficient ES User Interface

  • It should help users accomplish their goals in the shortest possible way.
  • It should be designed to work with the user’s existing or desired work practices.
  • Its technology should be adaptable to the user’s requirements, not the other way round.
  • It should make efficient use of user input.

Expert Systems Limitations

No technology can offer an easy and complete solution. Large systems are costly and require significant development time and computing resources. ESs have their limitations, which include:

  • Limitations of the technology
  • Difficult knowledge acquisition
  • ES are difficult to maintain
  • High development costs

Applications of Expert System

The following table shows where ES can be applied.

Application | Description
Design Domain | Camera lens design, automobile design.
Medical Domain | Diagnosis systems to deduce the cause of disease from observed data; conducting medical operations on humans.
Monitoring Systems | Continuously comparing data with an observed system or with prescribed behavior, such as leakage monitoring in a long petroleum pipeline.
Process Control Systems | Controlling a physical process based on monitoring.
Knowledge Domain | Finding faults in vehicles and computers.
Finance/Commerce | Detecting possible fraud and suspicious transactions, stock market trading, airline scheduling, cargo scheduling.

Expert System Technology

There are several levels of ES technologies available. Expert system technologies include:

(i) Expert System Development Environment: The ES development environment includes hardware and tools. They are:

  • Workstations, minicomputers, mainframes.
  • High-level symbolic programming languages such as LISt Programming (LISP) and PROgrammation en LOGique (PROLOG).
  • Large databases.

(ii) Tools: They reduce the effort and cost involved in developing an expert system to a large extent.

  • Powerful editors and debugging tools with multi-window support.
  • Rapid prototyping facilities.
  • Built-in definitions of models, knowledge representation, and inference design.

(iii) Shells: A shell is an expert system without a knowledge base. A shell provides developers with knowledge acquisition, an inference engine, a user interface, and an explanation facility. A few example shells are given below:

  • Java Expert System Shell (JESS), which provides a fully developed Java API for creating expert systems.
  • Vidwan, a shell developed at the National Centre for Software Technology, Mumbai in 1993. It enables knowledge encoding in the form of IF-THEN rules.

Development of Expert Systems: General Steps

The process of ES development is iterative. Steps in developing the ES include:

Step 1 Identify Problem Domain

  • The problem must be suitable for an expert system to solve.
  • Find experts in the task domain for the ES project.
  • Establish the cost-effectiveness of the system.

Step 2 Design the System

  • Identify the ES technology.
  • Know and establish the degree of integration with other systems and databases.
  • Determine how the chosen concepts can best represent the domain knowledge.

Step 3 Develop the Prototype

Working from the knowledge base, the knowledge engineer works to:

  • Acquire domain knowledge from the expert.
  • Represent it in the form of IF-THEN-ELSE rules.

Step 4 Test and Refine the Prototype

  • The knowledge engineer uses sample cases to test the prototype for any deficiencies in performance.
  • End users test the prototypes of the ES.

Step 5 Develop and Complete the ES

  • Test and ensure the interaction of the ES with all elements of its environment, including end users, databases, and other information systems.
  • Document the ES project well.
  • Train the user to use ES.

Step 6 Maintain the ES

  • Keep the knowledge base up-to-date by regular review and update.
  • Cater for new interfaces with other information systems, as those systems evolve.

Benefits of Expert Systems

  • Availability: They are easily available due to the mass production of software.
  • Lower Production Cost: Production cost is reasonable, which makes them affordable.
  • Speed: They offer great speed and reduce the amount of work an individual puts in.
  • Lower Error Rate: The error rate is low compared to human error.
  • Reduced Risk: They can work in environments dangerous to humans.
  • Steady Response: They work steadily without getting emotional, tense, or fatigued.

Trends in Information Systems

Information systems are an industry on the rise, and business structures, job growth, and emerging systems will all shift in the coming years. Current trends are improving and introducing new functions in fields like medicine, entertainment, business, education, marketing, law enforcement, and more. Still other much-anticipated systems are only now coming on the scene.

Innovations in IT change internal company processes, but they are also altering the way customers experience purchasing and support, not to mention basic practices in life, like locking up your home, visiting the doctor, and storing files. The following trends in information systems are crucial areas to watch in 2019 and viable considerations that could influence your future career choices.

Current Trends in Information System

The latest technology methods and best practices of 2019 will primarily stem from current trends in information technology. Advances in IT systems reflect what the industry is leaning toward or moving away from now. Information technology is advancing so rapidly that new developments quickly replace current projections.

  1. Cloud Computing

Cloud computing is a network of resources a company can access, and this method of using a digital drive increases the efficiency of organizations. Instead of local storage on computer hard drives, companies will be freeing their space and conserving funds. According to Forbes, 83 percent of enterprise workloads will be in the cloud by 2020, which means 2019 will show an increasing trend closing in on this statistic.

Cloud storage and sharing is a popular trend many companies have adopted and even implemented for employee interaction. A company-wide network will help businesses save on information technology infrastructure. Cloud services will also extend internal functions to gain revenue. Organizations that offer cloud services will market these for external products and continue their momentum.

Organizations will transfer their stored files across multiple sources using virtualization. Companies are already using this level of virtualization, but will further embrace it in the year to come. Less installation across company computers is another positive result of cloud computing because the Internet allows direct access to shared technology and information. The freedom of new products and services makes cloud computing a growing trend.

  2. Mobile Computing and Applications

Mobile phones, tablets, and other devices have taken both the business world and the personal realm by storm. Mobile usage and the number of applications generated have both skyrocketed in recent years. Now, 77 percent of Americans own smartphones, a 35 percent increase since 2011. Pew Research Center data also shows that phone-based internet use has increased, while fewer individuals use traditional internet services like broadband.

Experts project mobile traffic to increase even further in 2019, and mobile applications, consumer capabilities, and payment options will be necessary for businesses. The fastest-growing companies have already established their mobile websites, marketing, and apps for maximized security and user-friendliness. Cloud apps are also available for companies to use for on-the-go capabilities.

  3. Big Data Analytics

Big data is a trend that allows businesses to analyze extensive sets of information characterized by increasing volume, velocity, and variety. Big data offers a high return on investment that boosts the productivity of marketing campaigns, thanks to its ability to enable high-performance processing. Data mining is a way companies can predict growth opportunities and achieve future success. Examining data to understand markets and strategies is becoming more manageable with advances in data analytics programs.

This practice is also creating data management positions within organizations, and database maintenance is a growing sector of technology careers. To convert leads into paying customers, big data is an essential trend to keep following in 2019.

  4. Automation

Another current trend in the IT industry is automated processes. Automated processes can collect information from vendors, customers, and other documentation. Automated processes that check invoices and other accounts-payable aspects expedite customer interactions. Machine processes can automate repetitive manual tasks, rather than assigning them to employees. This increases organization-wide productivity, allowing employees to use their valuable time wisely, rather than wasting it on tedious work.

Automation can even produce more job opportunities for IT professionals trained in supporting, programming, and developing automated processes. Machine learning can enhance these automated processes for a continually developing system. Automated processes for the future will extend to groceries and other automatic payment methods to streamline the consumer experience.

Emerging Trends in Information System

Trends in information technology emerging in 2019 are new and innovative ways for the industry to grow. These movements in information technology are the areas expected to generate revenue and increase demand for IT jobs. Pay attention to these technological changes and unique products that enhance business operations.

  1. Artificial Intelligence and Smart Machines

Artificial intelligence harnesses algorithms and machine learning to predict useful patterns humans normally identify. Smart machines take human decision-making out of the equation so intelligent machines can instigate changes and bring forward solutions to basic problems. Companies are rallying around artificial intelligence in the workplace because it allows employees to use their abilities for the most worthwhile tasks, along with management of these smart machines for a more successful system.

The U.S. Army is applying artificial intelligence from Uptake Technologies to vehicles mainly used in peacekeeping missions for repair purposes. The predictive software will reduce irregular maintenance and home in on machine components that are more likely to deteriorate or get damaged. Predictive vehicle repairs can grow and extend to civilian purposes in the coming years.

AI face recognition is beginning to help with missing people reports, and it even helps identify individuals for criminal investigations when cameras have captured their images. According to the National Institute of Standards and Technology, face recognition is most effective when AI systems and forensic facial recognition experts team up. AI will continue to promote safety for citizens in the future as software improvements shape these applications.

Medical AI is another trend that reflects surprising success. Given patient information and risk factors, AI systems can anticipate the outcome of treatment and even estimate the length of a hospital visit. Deep learning is one way AI technology gets applied to health records to find the likelihood of a patient’s recovery and even mortality. Experts evaluate data to discover patterns in the patient’s age, condition, records, and more.

Home AI systems are also increasingly popular to expedite daily tasks like listening to tunes, asking for restaurant hours, getting directions, and even sending messages. Many problem-solving AI tools also help in the workplace, and the helpfulness of this technology will continue to progress in 2019.

AI careers are increasing in demand, but the nature of AI skills is shifting. AI projects have caught on throughout many businesses, yet company leaders' expectations often outpace what the projects return, because there are not enough properly equipped personnel to implement strategic AI advances. Positions related to AI are necessary to fulfill the potential of these enterprises.

  2. Virtual Reality

Technology that includes virtual reality is becoming prevalent. Virtual reality software lets many industries prepare for various scenarios before entering them. The medical profession is projected to use virtual reality for some treatments and interactions with patients in the coming years. Virtual training sessions for companies can cut costs, fill personnel gaps, and improve education.

According to Gartner, by 2023, virtual simulations for selected patients with specific illnesses will reduce emergency room visits in America by 20 million. These simulations will have intelligence capabilities, so virtual-reality care can still provide patients with proper attention.

Virtual-reality professionals will be in high demand in coming years as the technology catches on in various industries. Specialized fields are the main places where virtual reality has caught on, but experts project it will become more applicable to other technological advances. Backgrounds in optics and hardware engineering are particularly sought-after skills.

  3. Augmented Reality

Augmented reality is a more versatile and practical version of virtual reality, as it does not fully immerse individuals in an experience. Augmented reality features interactive scenarios that enhance the real world with images and sounds that create an altered experience. The most common current applications of this overlay of digital images on the surrounding environment include the recent Pokémon Go fad or the additions on televised football in the U.S.

Augmented reality can impact many industries in useful ways. Airports are implementing augmented-reality guides to help people get through their checks and terminals as quickly and efficiently as possible. Retail and cosmetics are also using augmented reality to let customers test products, and furniture stores are using this mode to lay out new interior design options.

The possibilities for augmented reality in the future revolve around mobile applications and health care solutions. Careers in mobile app development and design will be abundant, and information technology professionals can put their expertise to use in these interactive experiences.

  4. Blockchain Data

Blockchain, the technology behind cryptocurrencies such as Bitcoin, is a secure method of recording data that will continue to grow in popularity and use in 2019. A blockchain lets you append new data without changing, replacing, or deleting anything already recorded. With the influx of shared data systems such as cloud storage and resources, protecting original data without losing important information is crucial.

Shared authority among many parties keeps the data accounted for without turning over too much responsibility to particular employees or management staff. For transaction purposes, blockchain offers a safe and straightforward way to do business with suppliers and customers. Private data is particularly secure with blockchain systems, and the medical and information technology industries can benefit equally from the added protection.
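The append-only property comes from each block carrying a hash of its predecessor. Here is a toy sketch in Python using only the standard library; the transaction strings are invented for illustration, and a real blockchain would add consensus and signatures on top:

```python
import hashlib
import json
import time

def make_block(data, prev_hash):
    """Link a new block to its predecessor by hashing over the previous
    block's hash; tampering with any earlier block breaks every later hash."""
    block = {"time": time.time(), "data": data, "prev": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

chain = [make_block("genesis", "0" * 64)]
chain.append(make_block("pay supplier 100", chain[-1]["hash"]))
chain.append(make_block("receive goods", chain[-1]["hash"]))
```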

  5. Cyber-Privacy and Security

Shared company systems and the growth of the Internet leave a high amount of personal and company data at risk to breaches. Redesigned systems and new firewalls and gateways will be added to the services companies need to bolster their technology. Cybersecurity is a concentration of IT that will help secure clouds and improve the trust between businesses and their vendors.

Recognition software will replace much of the password-protected systems companies use in 2019. Biometric measures and other safety protocols will increase the security of business practices, especially business-to-business interactions. Although authentication and recognition programs enhance protection, Internet of Things technology requires further development. Internet of Things systems are already projected to contain vulnerabilities the industry is not prepared for.

As the Internet and shared company networks increase, cybersecurity and privacy are vulnerable to infiltration. However, many companies are already aware of the projected weak spots in their technology. IT professionals need to address these issues and find practical and fortifying solutions.

  6. Internet of Things

The Internet of Things (IoT) is an emerging movement of products with integrated Wi-Fi and network connectivity abilities. Cars, homes, appliances, and other products can now connect to the Internet, making activities around the home and on the road an enhanced experience. Use of IoT allows people to turn on music hands-free with a simple command, or lock and unlock their doors even from a distance.

Many of these functions are helping organizations in customer interaction, responses, confirmations, and payments. Remote collection of data assists companies the most. IoT almost acts like a digital personal assistant. The intelligent features of some of these IoT products can aid in many company procedures. Voice recognition and command responses will allow you to access stored data on cloud services.

Database Management Systems: Concept

A database management system (DBMS) is a software application that allows users to efficiently store, manage, and manipulate vast amounts of data. It acts as an intermediary between users and the database, providing a structured and organized approach to data storage and retrieval. A DBMS offers several key features, including data definition, data manipulation, and data control.

In data definition, a DBMS provides tools for creating and modifying the structure of a database, specifying the data types, relationships, and constraints. This allows users to define the schema, or the logical and physical organization of the data. In data manipulation, a DBMS offers a wide range of operations to insert, retrieve, update, and delete data from the database. Users can execute complex queries and transactions to extract meaningful information from the stored data. Data control ensures data integrity, security, and concurrency by managing user access rights, enforcing data validation rules, and providing mechanisms for data backup and recovery.
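As a concrete, if tiny, illustration of these three feature groups, here is a minimal sketch using Python's built-in sqlite3 module; the table, columns, and values are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # a throwaway in-memory database

# Data definition: create the schema with types and a constraint.
conn.execute("""
    CREATE TABLE employee (
        id     INTEGER PRIMARY KEY,
        name   TEXT NOT NULL,
        salary REAL CHECK (salary >= 0)
    )
""")

# Data manipulation: insert and query.
conn.execute("INSERT INTO employee (name, salary) VALUES (?, ?)",
             ("Ada", 90000.0))
print(conn.execute("SELECT name, salary FROM employee").fetchall())

# Data control: make the change durable (or roll it back on error).
conn.commit()
conn.close()
```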

A DBMS offers numerous advantages over traditional file-based systems. It provides a centralized and shared data repository, eliminating data redundancy and inconsistency. This promotes data integrity and reduces data maintenance efforts. Additionally, a DBMS supports concurrent access, allowing multiple users to access and manipulate the database simultaneously without conflicts. It provides a high level of data security by enforcing user authentication and authorization, ensuring that only authorized individuals can access the data. Furthermore, a DBMS offers scalability and performance optimizations, enabling efficient handling of large datasets and complex queries. Overall, a DBMS plays a critical role in modern data management by providing a robust, efficient, and secure platform for storing and manipulating data.

History of DBMS

The history of database management systems (DBMS) can be traced back to the 1960s when the concept of data management emerged as a response to the increasing need for efficient storage and retrieval of large volumes of data. During this time, hierarchical and network models were introduced as early DBMS prototypes. The hierarchical model organized data in a tree-like structure, with parent-child relationships, while the network model represented data as interconnected records. These early models laid the foundation for subsequent developments in the field.

In the 1970s, the relational model, proposed by Edgar F. Codd, revolutionized the field of DBMS. The relational model introduced the concept of tables, where data was organized into rows and columns, and relationships between tables were established using primary and foreign keys. This model offered a simple and flexible way to represent data and allowed for powerful query operations using structured query language (SQL).

The 1980s witnessed the rise of commercial DBMS products, such as Oracle, IBM DB2, and Microsoft SQL Server. These systems provided robust transaction management, concurrency control, and data integrity mechanisms. In the 1990s, object-oriented DBMS (OODBMS) emerged, which combined the features of DBMS with object-oriented programming concepts. OODBMS aimed to handle complex data types and support the storage of objects directly in the database.

With the advent of the internet and the proliferation of web applications in the late 1990s and early 2000s, there was a need for DBMS that could handle the massive amounts of unstructured data generated by these applications. This led to the development of NoSQL databases, which focused on scalability, high availability, and flexible data models. NoSQL databases, such as MongoDB and Cassandra, gained popularity for their ability to handle big data and real-time data processing.

Today, DBMS continues to evolve with advancements in technology. Cloud-based DBMS, in-memory databases, and distributed databases are some of the recent trends in the field. These developments have allowed for greater scalability, performance, and ease of use, enabling organizations to effectively manage and analyze vast amounts of data.

Decade | Milestones
1960s | Introduction of hierarchical and network models
1970s | Introduction of the relational model and SQL
1980s | Commercialization of DBMS products (Oracle, DB2, SQL Server)
1990s | Emergence of object-oriented DBMS (OODBMS)
Late 1990s and early 2000s | Rise of NoSQL databases for handling unstructured data
Present | Evolution of DBMS with cloud-based, in-memory, and distributed databases

Characteristics of Database Management System

Data Independence:

DBMS provides a layer of abstraction between the physical representation of data and the applications that use it. This allows changes to the database structure without affecting the applications that access the data. There are two types of data independence: logical independence (ability to modify the logical schema without affecting the external schema) and physical independence (ability to modify the physical schema without affecting the logical schema).

Data Integrity:

DBMS enforces integrity constraints to ensure the accuracy and consistency of data. It enforces entity integrity (primary key constraints), referential integrity (foreign key constraints), domain integrity (data type and range constraints), and user-defined integrity rules. This prevents invalid or inconsistent data from entering the database.
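A short sqlite3 sketch of these constraints in action; the schema is invented for illustration, and note that SQLite enforces foreign keys only when the pragma is turned on:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # required for FK enforcement in SQLite

conn.execute("""
    CREATE TABLE department (
        dept_id INTEGER PRIMARY KEY                       -- entity integrity
    )
""")
conn.execute("""
    CREATE TABLE employee (
        emp_id  INTEGER PRIMARY KEY,
        dept_id INTEGER REFERENCES department (dept_id),  -- referential integrity
        age     INTEGER CHECK (age BETWEEN 18 AND 70)     -- domain integrity
    )
""")

try:
    # No department 99 exists, so this violates referential integrity.
    conn.execute("INSERT INTO employee VALUES (1, 99, 30)")
except sqlite3.IntegrityError as err:
    print("rejected:", err)
```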

Data Security:

DBMS provides mechanisms to control access to the database and protect the data from unauthorized access, modification, or destruction. It offers user authentication and authorization, allowing administrators to define user roles and privileges. Access control is enforced at the level of tables, views, and individual data items. DBMS also supports data encryption and auditing to ensure data privacy and track database activities.

Data Concurrency:

DBMS allows multiple users to access the database concurrently without data conflicts. It manages concurrent access through techniques like locking and transaction isolation levels. Locking mechanisms ensure that only one user can modify a piece of data at a time, while isolation levels define the visibility of changes made by one transaction to other concurrent transactions. This ensures data consistency and prevents data anomalies.

Data Recovery and Backup:

DBMS provides mechanisms for data recovery in case of system failures, crashes, or human errors. It maintains transaction logs and uses techniques like write-ahead logging and checkpoints to ensure durability of committed transactions. DBMS also supports backup and restore operations to safeguard data against disasters and allows for point-in-time recovery.

Query Optimization and Performance:

DBMS optimizes query execution by evaluating various query plans and selecting the most efficient one. It uses techniques like indexing, query rewriting, and caching to improve query performance. DBMS also provides tools for monitoring and tuning the database system to optimize performance based on workload and resource utilization.
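Indexing is the optimization most visible to users. A small sqlite3 sketch (table and index names invented) showing the optimizer choosing an index over a full table scan:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT)")
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer)")

# EXPLAIN QUERY PLAN reveals the plan the optimizer selected; here it
# reports a search using idx_orders_customer rather than a full scan.
for row in conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = ?", ("Ada",)
):
    print(row)
```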

Scalability and Extensibility:

DBMS is designed to handle growing amounts of data and users. It supports scaling up (vertical scaling) by upgrading hardware resources and scaling out (horizontal scaling) by adding more servers to distribute the workload. DBMS also allows for adding new data types, modifying the schema, and incorporating new features without disrupting the existing applications.

Data Integration and Sharing:

DBMS enables integration and sharing of data across multiple applications and users. It supports data modeling techniques like normalization and denormalization to eliminate data redundancy and ensure data consistency. DBMS provides features like views, stored procedures, and triggers to encapsulate complex logic and facilitate data sharing and integration among different applications.

Popular DBMS Software

  • Oracle Database
  • Microsoft SQL Server
  • MySQL
  • PostgreSQL
  • IBM DB2
  • MongoDB (NoSQL database)
  • Cassandra (NoSQL database)
  • Redis (in-memory data store)
  • SQLite (lightweight embedded database)
  • MariaDB (MySQL-compatible open-source database)

Types of DBMS

There are several types of DBMS based on their data models and architectural designs. Here are some common types of DBMS:

Relational DBMS (RDBMS):

This type of DBMS uses the relational data model, where data is organized into tables with rows and columns. It supports SQL for data manipulation and retrieval. Examples include Oracle Database, Microsoft SQL Server, MySQL, and PostgreSQL.

Object-Oriented DBMS (OODBMS):

OODBMS stores data as objects, which encapsulate both data and behavior. It extends the capabilities of traditional DBMS to handle complex data types and relationships. Examples include ObjectDB and Versant.

Hierarchical DBMS:

This type organizes data in a tree-like structure, where each record has a parent-child relationship. It is suitable for representing one-to-many relationships. IBM’s Information Management System (IMS) is an example of a hierarchical DBMS.

Network DBMS:

Network DBMS is similar to hierarchical DBMS but allows for more complex relationships by using a graph-like structure. It facilitates many-to-many relationships between records. Integrated Data Store (IDS) and Integrated Database Management System (IDMS) are examples of network DBMS.

Object-Relational DBMS (ORDBMS):

ORDBMS combines features of RDBMS and OODBMS. It extends the relational model to support object-oriented concepts, such as inheritance and encapsulation. PostgreSQL and Oracle Database offer object-relational capabilities.

NoSQL DBMS:

NoSQL (Not Only SQL) DBMS is designed to handle unstructured and semi-structured data, providing high scalability and flexibility. It deviates from the traditional relational model and focuses on key-value pairs, document-oriented, columnar, or graph-based data models. Examples include MongoDB, Cassandra, Couchbase, and Neo4j.

In-Memory DBMS:

In-Memory DBMS stores data primarily in main memory, offering fast data access and processing. It is optimized for high-performance applications that require real-time data processing. Examples include SAP HANA, Oracle TimesTen, and MemSQL.

Distributed DBMS:

Distributed DBMS manages data stored on multiple interconnected computers or servers. It provides transparency and coordination across the distributed environment, allowing users to access and manipulate data as if it were stored in a single location. Apache Hadoop, Google Bigtable, and CockroachDB are examples of distributed DBMS.

Advantages of DBMS

Data Centralization:

DBMS allows for the centralized storage of data in a structured manner. This eliminates data redundancy and ensures data consistency. Users can access and manipulate data from a single source, promoting data integrity and reducing data inconsistency.

Data Sharing and Accessibility:

DBMS enables multiple users to access and share data concurrently. It provides mechanisms for user authentication, authorization, and concurrency control, ensuring that users can access the data they need while preventing unauthorized access or data conflicts.

Data Consistency and Integrity:

DBMS enforces data integrity constraints, such as primary key, foreign key, and data type constraints. This ensures that data entered into the database is accurate and consistent. DBMS also provides transaction management to maintain the atomicity, consistency, isolation, and durability (ACID) properties of data.
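Atomicity in particular is easy to demonstrate. A minimal sqlite3 transfer sketch (the account data is invented): both updates commit together, or an error rolls both back.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance REAL)")
conn.executemany("INSERT INTO account VALUES (?, ?)", [(1, 100.0), (2, 50.0)])
conn.commit()

try:
    with conn:  # commits on success, rolls back automatically on error
        conn.execute("UPDATE account SET balance = balance - 30 WHERE id = 1")
        conn.execute("UPDATE account SET balance = balance + 30 WHERE id = 2")
except sqlite3.Error:
    pass  # the rollback already restored both balances

print(conn.execute("SELECT id, balance FROM account ORDER BY id").fetchall())
```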

Data Security:

DBMS offers robust security features to protect sensitive data. It provides user authentication and authorization mechanisms to control access to the data. DBMS allows administrators to define access privileges at the level of tables, views, or individual data items. Encryption, backup, and recovery mechanisms further enhance data security.

Data Independence and Flexibility:

DBMS provides logical and physical data independence. This means that changes to the database schema or physical storage structure can be made without affecting the applications that use the data. It offers flexibility in modifying and adapting the database as the requirements evolve.

Disadvantages of DBMS

Complexity and Cost:

Implementing and maintaining a DBMS can be complex and expensive. It requires specialized skills and expertise to design, deploy, and manage a database system. Organizations may need to invest in hardware, software licenses, and personnel training. The initial setup and ongoing maintenance costs can be significant.

Performance Overhead:

DBMS introduces performance overhead compared to direct file-based data management. The additional layers of abstraction and query processing can impact performance, especially for complex queries or large-scale data processing. Tuning and optimizing the DBMS configuration and queries are necessary to achieve optimal performance.

Single Point of Failure:

A DBMS can become a single point of failure for an entire system. If the DBMS experiences a failure or downtime, it can disrupt access to critical data and impact business operations. Implementing backup and recovery mechanisms is essential to mitigate the risk of data loss and ensure system availability.

Scalability Limitations:

While DBMS systems offer scalability features, there can be limitations in scaling up to handle massive volumes of data or high transaction loads. Scaling the system to accommodate growing data and user demands may require additional hardware, configuration changes, or distributed database architectures.

Vendor Dependency:

Adopting a specific DBMS often involves vendor lock-in, as migrating from one DBMS to another can be challenging and time-consuming. Organizations may rely on specific features, tools, or proprietary extensions provided by the chosen DBMS, which can limit their flexibility and make it difficult to switch to alternative solutions.

When not to use a DBMS system?

While a Database Management System (DBMS) offers numerous benefits, there are certain scenarios where using a DBMS may not be the most suitable option. Here are a few situations when an alternative approach might be more appropriate:

Small-scale and Simple Data Storage:

If the data volume is small and the storage requirements are straightforward, using a DBMS may introduce unnecessary complexity and overhead. In such cases, a file-based system or simple data structures (e.g., flat files, spreadsheets) might be sufficient and more efficient to manage and manipulate the data.

High Performance and Real-time Processing:

In applications that require extremely high performance and real-time processing, a DBMS may not provide the necessary speed or responsiveness. Direct memory access, specialized caching mechanisms, or custom data storage approaches may be more suitable in these situations to achieve the desired performance.

Frequent Changes to Data Structure:

If the data structure undergoes frequent and unpredictable changes, using a DBMS with a fixed schema might become cumbersome. Adapting the schema and managing data migrations can be time-consuming and complex. In such cases, a NoSQL database or a flexible data storage system may offer more agility and ease of change.

Limited Resources and Cost Constraints:

Implementing and maintaining a DBMS can require significant resources, including hardware, software licenses, and skilled personnel. If the organization has limited resources or tight budget constraints, investing in a DBMS might not be feasible. Instead, simpler data management solutions or cloud-based services could be more cost-effective.

Specific Performance or Functional Requirements:

In certain niche or specialized applications, where specific performance, functionality, or data processing requirements are crucial, a custom-built data management solution or specialized data storage systems may be more suitable. These solutions can be tailored to meet the specific needs of the application and provide optimized performance for the particular use case.

Ultimately, the decision to use a DBMS or an alternative approach depends on various factors such as data size, complexity, performance requirements, scalability, flexibility, and available resources. It’s essential to carefully evaluate the specific needs and constraints of the application to determine the most appropriate data management solution.

Components of Database Management System

Organizations produce and gather data as they operate. Contained in a database, data is typically organized to model relevant aspects of reality in a way that supports processes requiring this information.

A database management system can be divided into five major components:

  • Hardware
  • Software
  • Data
  • Procedures
  • Database Access Language

The following sections describe how these components fit together to form a database management system.

  1. Hardware

When we say hardware, we mean the computer, hard disks, I/O channels for data, and any other physical components involved before data is successfully stored in memory.

When we run Oracle or MySQL on a personal computer, the computer’s hard disk, the keyboard used to type commands, and the computer’s RAM and ROM all become part of the DBMS hardware.

  2. Software

This is the main component, as this is the program that controls everything. The DBMS software is a wrapper around the physical database that provides an easy-to-use interface to store, access, and update data.

The DBMS software understands the Database Access Language and interprets it into actual database commands to execute on the database.

  3. Data

Data is the resource for which the DBMS was designed; the motive behind the creation of the DBMS was to store and utilize data.

A typical database contains both the user’s saved data and metadata.

Metadata is data about the data. This is information stored by the DBMS to better understand the data stored in it.

For example, when I store my name in a database, the DBMS stores when the name was saved, the size of the name, and whether it is related to other data or independent; all of this information is metadata.
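Most DBMSs expose their metadata through a system catalog. In SQLite, for instance, it is the sqlite_master table; the person table below is invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (name TEXT, born DATE)")

# The catalog stores data about the data: object type, name, and the
# SQL that defined it.
for row in conn.execute("SELECT type, name, sql FROM sqlite_master"):
    print(row)
```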

  4. Procedures

Procedures refer to the general instructions for using a database management system. This includes procedures to set up and install the DBMS, to log in and log out of the DBMS software, to manage databases, to take backups, to generate reports, and so on.

  5. Database Access Language

Database Access Language is a simple language designed to write commands to access, insert, update and delete data stored in any database.

A user can write commands in the Database Access Language and submit them to the DBMS for execution, where they are translated and executed.

Users can create new databases and tables, insert data, fetch stored data, update data, and delete data using the access language.
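In most modern DBMSs the access language is SQL. A compact sketch of all four verbs through Python's sqlite3 module; the table and rows are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE book (id INTEGER PRIMARY KEY, title TEXT)")  # create

conn.execute("INSERT INTO book (title) VALUES (?)", ("DBMS Basics",))   # insert
print(conn.execute("SELECT * FROM book").fetchall())                    # fetch
conn.execute("UPDATE book SET title = ? WHERE id = ?", ("DBMS Notes", 1))  # update
conn.execute("DELETE FROM book WHERE id = ?", (1,))                     # delete
```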

Users

  • Database Administrator (DBA): The DBA manages the complete database management system, taking care of its security, its availability, license keys, user accounts, and access.
  • Application Programmer or Software Developer: This user group is involved in developing and designing the parts of the DBMS.
  • End User: Modern applications, web or mobile, store user data. They are programmed to collect user data and store it in DBMS systems running on their servers. End users are the ones who store, retrieve, update, and delete data.

Centralized Database Systems

A centralized database is stored, maintained, and modified at a single location, such as a mainframe computer, and is usually accessed over a network connection such as a LAN or WAN. It contains a set of records that can easily be accessed from any location with such a connection. Centralized database systems are commonly used by organizations such as banks, schools, colleges, and companies to manage all their data in an appropriate manner.

Advantages of Centralized Database Systems

  1. It allows for working on cross-functional projects

A centralized database speeds up the communication which occurs within an organization. Instead of having layers of administrative red tape in place to handle cross-functional projects between teams, the core design allows for those teams to come together whenever it is necessary. That makes it possible to absorb analytical data faster, complete specific tasks with more quality, and make more progress toward the vision, mission, or goals which have been established.

  2. It is easier to share ideas across analysts

Many businesses are set up in a way that creates silos for individuals and teams. By implementing a strategy that centralizes information and analytics, those silos begin to disappear. Instead of having multiple people working on the same projects or datasets independently, the organization can coordinate their work so that they collaborate more often. When everyone can share ideas with the rest of the organization, the resulting diversity allows for better growth potential.

  3. Analysts can be assigned to specific problems or projects centrally

There is a higher level of accountability found within a centralized database. That is because there is much more transparency about the policies and procedures being implemented. Each person can be assigned to a specific problem that the organization must address. Those with the correct authorities can monitor the progress of that person in solving the identified issue. Instead of routing through numerous sections, teams, or departments, all of the communication for each problem or project is routed centrally, which reduces confusion.

  4. Higher levels of security can be obtained

When there is long-term funding granted to a centralized database, then there is a higher level of data security which develops for the organization. That is because the information which is obtained by the company serves the entire company. Everyone involved with the information retention is bound by certain protocols or limits with their access, which limits the amount of data leakage which may occur. The end result is that the valuable data stays internal more often than going external.

  5. Higher levels of dependability are present within the system

There are fewer breakdowns of internal reporting systems when a centralized database is present. Instead of having multiple channels open, which may come to different conclusions on their own, there is one central channel which includes everyone. Each person can access the data they require, offer their opinion, and listen to the company-wide chatter as specific conclusions are created. Dependability happens because people get onto the same page faster.

  6. It reduces conflict

When there is a centralized database responsible for the collection and storage of data, then conflicts within the organization are reduced. That occurs because there are fewer people involved in the decision-making processes which involve the data. When there are top managers or assigned individuals responsible for this information, the lower-level managers and lower tier employees are insulated from the burdens of using the data inappropriately, which leads to a happier working environment.

  7. Organizations can act with greater speed

When there is a core database responsible for managing information, decisions on actions or strategies occur with greater speed because there are fewer layers of data which must be navigated. Leaders of the business are able to operate more efficiently because the communication processes are built naturally into the system. That makes it easier for everyone to evaluate the pros and cons of any decision they may face.

  8. It helps an organization stay close to a focused vision

The centralized database can be configured to keep tabs on an entire organization with regards to its one purpose or vision. Inconsistencies are eliminated from the workflows because the data being collected is intended for specific purposes which are clearly communicated to everyone involved.

Disadvantages of Centralized Database Systems

  1. It can become unresponsive to the needs of the business

There are heavy workload requirements which become necessary when using a centralized database. Individuals and teams find that the time constraints placed on them may be unreasonable for the expectations asked. In time, if these constraints are not addressed as they should be, a centralized database creates unresponsive teams that are focused on specific tasks instead of collaboration. The teams can essentially rebel against the system to create their own silos for self-protection.

  2. There are lower levels of location-based adaptability

Using a centralized database means you are trading movement efficiencies for less flexibility at the local level. If there are changes which occur locally that affect the business, this data must be sent to the centralized database. There is no option to control the data locally. That means response times to local customers or the community may not be as efficient as they could be, as there may not be any information in place to deal with a necessary response.

  3. It can have a negative impact on local morale

When there is a centralized database being used, the responsibilities of local managers are often reduced. That occurs because the structure of the company may forbid local managers from hiring their own employees. It may force them to use data from the centralized system which they feel has a negative impact at the local level. Instead of being able to make decisions immediately, they are forced to wait for data to come from the centralized database. For those who wish to experience higher levels of responsibility with their management role, the centralized process can be highly demoralizing.

  4. Succession planning can be limited with a centralized database

Because the information is managed by a centralized database, there is little need to work on the development of new or upcoming managers. The only form of succession planning that is necessary with this setup involves bringing in an individual to replace a core analyst at some point. Should top-level managers experience a family issue, health event, or some other issue which interferes with their job, there may be no one with the necessary experience to take over the position, which reduces the effectiveness of the centralized database.

  5. It reduces the amount of legitimate feedback received

A centralized database may provide transparency. It may lead to greater levels of communication. Those are not always positive benefits. When anyone can offer an opinion or feedback on information they have received, they often feel a responsibility to send a response. Many employees may have general knowledge about certain policies or procedures, but not have access to the full picture. They waste time creating feedback which isn’t needed, which wastes time for everyone who reads that feedback. Over time, this can lead to lower levels of productivity and higher levels of frustration.

  6. It may increase costs

When a centralized system is in place, there is a reliance on the accuracy of the data being collected. Even one small miscalculation could have a grave impact on the centralized database. That may result in higher fees for rushed deliveries, incorrect orders that are labeled as being correct, and unnecessary changes to potential inventory controlled by the organization. The costs of fixing a mistake from a decentralized system tend to be lower than fixing the mistakes generated by centralized systems.

  7. There is a risk of loss

When there is a centralized database, everything is stored within that database. What happens to that information if the database should be lost for some reason? Because there are no other database locations, an organization loses access immediately. That could create a long-term outage which may affect the overall viability of the company. Even with cloud backup systems in place and other protections available, there is always a risk of complete loss present when using a centralized database.

These centralized database advantages and disadvantages must be considered at the local level. For some organizations, the centralized structure makes sense because it brings people and teams together with a common bond to work toward a specific mission. For others, the system may create too many data points, bogging down overall productivity.

Distributed Database Systems

A distributed database is a database that is not limited to one system; it is spread over different sites, i.e., on multiple computers or over a network of computers. A distributed database system is located on various sites that don’t share physical components. This may be required when a particular database needs to be accessed by various users globally. It must be managed so that, to the users, it looks like one single database.

A distributed database is a collection of multiple interconnected databases, which are spread physically across various locations that communicate via a computer network.

Goals of Distributed Database Systems

The concept of a distributed database was developed with the goal of improving:

(i) Reliability

In a distributed database system, if one system fails or stops working for some time, another system can complete the task.

(ii) Availability

In a distributed database system, availability can be maintained even if a server fails: another system is available to serve the client request.

(iii) Performance

Performance can be improved by distributing the database over different locations, so that data is available locally where it is needed and is easier to maintain.

Advantages of Distributed Databases

Following are the advantages of distributed databases over centralized databases.

(i) Modular Development

If the system needs to be expanded to new locations or new units, in centralized database systems the action requires substantial effort and disrupts existing functioning. In distributed databases, however, the work simply requires adding new computers and local data at the new site and connecting them to the distributed system, with no interruption to current functions.

(ii) More Reliable

In case of database failure, the entire centralized system comes to a halt. In distributed systems, however, when a component fails, the system continues to function, perhaps at reduced performance. Hence a DDBMS is more reliable.

(iii) Better Response

If data is distributed in an efficient manner, then user requests can be met from local data itself, thus providing faster response. On the other hand, in centralized systems, all queries have to pass through the central computer for processing, which increases the response time.

(iv) Lower Communication Cost

In distributed database systems, if data is located locally where it is mostly used, then the communication costs for data manipulation can be minimized. This is not feasible in centralized systems.

Types of Distributed Database Systems

The two types of distributed systems are as follows:

  1. Homogeneous distributed database system

In a homogeneous distributed database, all sites store data identically. The operating system, database management system, and the data structures used are the same at all sites. Hence, they're easy to manage.

Example: Consider three departments all using Oracle 9i as the DBMS. If a change is made in one department, it is propagated to the other departments as well.

  2. Heterogeneous Database

In a heterogeneous distributed database, different sites can use different schemas and software, which can lead to problems in query processing and transactions. A particular site might also be completely unaware of the other sites. Different computers may use different operating systems and different database applications, and they may even use different data models for the database. Hence, translations are required for different sites to communicate.

Example: Different DBMS software at different sites can be made accessible to one another using ODBC and JDBC.
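As a rough sketch of how such translation looks in practice, the Python snippet below issues the same query to two different DBMS products through the common ODBC interface. The DSN names, credentials, and the employees table are all hypothetical, and suitably configured ODBC drivers are assumed.

```python
# A minimal sketch, assuming two ODBC data sources are already configured;
# "OracleSite", "PostgresSite", the credentials, and the "employees" table
# are hypothetical names used only for illustration.
import pyodbc

oracle_conn = pyodbc.connect("DSN=OracleSite;UID=user;PWD=secret")
postgres_conn = pyodbc.connect("DSN=PostgresSite;UID=user;PWD=secret")

# The same logical query runs against both sites; each ODBC driver
# translates the call for its own DBMS.
for conn in (oracle_conn, postgres_conn):
    cursor = conn.cursor()
    cursor.execute("SELECT COUNT(*) FROM employees")
    print(cursor.fetchone()[0])
    conn.close()
```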

Distributed Data Storage

There are two ways in which data can be stored on different sites. These are:

  1. Replication

In this approach, the entire relation is stored redundantly at two or more sites. If the entire database is available at all sites, it is a fully redundant database. Hence, in replication, systems maintain copies of data.

This is advantageous as it increases the availability of data at different sites. Also, now query requests can be processed in parallel.

However, it has certain disadvantages as well. Data needs to be constantly updated: any change made at one site needs to be recorded at every site where that relation is stored, or else it may lead to inconsistency. This is a lot of overhead. Also, concurrency control becomes far more complex, as concurrent access now needs to be checked across a number of sites.
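To make the trade-off concrete, here is a toy in-memory sketch of full replication; the site names and record layout are invented. Note that every write must visit every site, while any site can answer a read from its local copy.

```python
# A toy model of full replication: each site holds a complete copy.
# Site names and the record format are illustrative only.
sites = {"mumbai": {}, "delhi": {}, "chennai": {}}

def replicated_write(key, value):
    """Apply the update at every site so all copies stay consistent."""
    for store in sites.values():
        store[key] = value  # a real DDBMS would need a commit protocol here

def local_read(site, key):
    """Any single site can serve the read from its own copy."""
    return sites[site].get(key)

replicated_write("emp:101", {"name": "Asha", "dept": "Sales"})
print(local_read("chennai", "emp:101"))  # answered without contacting other sites
```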

  2. Fragmentation

In this approach, the relations are fragmented (i.e., divided into smaller parts), and each fragment is stored at the site where it is required. It must be ensured that the fragments can be used to reconstruct the original relation (i.e., there is no loss of data).

Fragmentation is advantageous because it doesn't create copies of data, so consistency is not a problem.

Fragmentation of relations can be done in two ways:

  • Horizontal fragmentation – Splitting by rows – The relation is fragmented into groups of tuples so that each tuple is assigned to at least one fragment.
  • Vertical fragmentation – Splitting by columns – The schema of the relation is divided into smaller schemas. Each fragment must contain a common candidate key so as to ensure lossless join.
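The following toy Python sketch illustrates both fragmentation styles on a relation held as a list of dicts; the relation and its attributes are invented for the example.

```python
# A toy relation; emp_id serves as the candidate key.
employees = [
    {"emp_id": 1, "name": "Asha", "city": "Mumbai", "salary": 50000},
    {"emp_id": 2, "name": "Ravi", "city": "Delhi", "salary": 60000},
]

# Horizontal fragmentation: split by rows, here on the "city" attribute.
mumbai_rows = [t for t in employees if t["city"] == "Mumbai"]
delhi_rows = [t for t in employees if t["city"] == "Delhi"]
# Reconstruction is a union of the row fragments.
assert sorted(mumbai_rows + delhi_rows, key=lambda t: t["emp_id"]) == employees

# Vertical fragmentation: split by columns; every fragment keeps the
# candidate key so the relation can be rebuilt by a lossless join.
personal = [{"emp_id": t["emp_id"], "name": t["name"]} for t in employees]
payroll = [{"emp_id": t["emp_id"], "city": t["city"], "salary": t["salary"]} for t in employees]
# Lossless join on the shared key recovers the original tuples.
rebuilt = [{**p, **next(q for q in payroll if q["emp_id"] == p["emp_id"])} for p in personal]
assert sorted(rebuilt, key=lambda t: t["emp_id"]) == employees
```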

Valuation of Securities

Security valuation is important in deciding an investor's portfolio. All investment decisions should be based on a scientific analysis of the right price of a share. Hence, an understanding of the valuation of securities is essential. Investors should buy underpriced shares and sell overpriced shares. Share pricing is thus an important aspect of trading. Conceptually, four types of valuation models are discernible.

They are:

(i) Book value,

(ii) Liquidating value,

(iii) Intrinsic value,

(iv) Replacement value as compared to market price.

(i) Book Value:

Book value of a security is an accounting concept. The book value of an equity share is equal to the net worth of the firm divided by the number of equity shares, where the net worth is equal to equity capital plus free reserves. The market value may fluctuate around the book value but may be higher if the future prospects are good.

(ii) Liquidating Value (Breakdown Value):

If the assets are valued at their breakdown value in the market (net fixed assets plus current assets minus current liabilities, as if the company were liquidated) and this figure is divided by the number of shares, the resultant value is the liquidating value per share. This is also an accounting concept.

(iii) Intrinsic Value:

Market value of a security is the price at which the security is traded in the market, and it generally hovers around its intrinsic value. There are different schools of thought regarding the relationship of intrinsic value to the market price. Market prices are those which rule in the market, resulting from the forces of demand and supply. Intrinsic value is the true value of the share, which depends on its earning capacity and its true worth. According to the fundamentalist approach to security valuation, the value of the security must equal the discounted value of the future income stream. The investor buys the security when the market price is below this value.

Thus, for fundamentalists, earnings and dividends are the essential ingredients in determining the market value of a security. The discount rate used in such present value calculations is known as the required rate of return. Using this discount rate, all future earnings are discounted back to the present to determine the intrinsic value.

According to the technical school, the price of a security is determined by the market demand and supply and it has very little to do with intrinsic values. The price movements follow certain trends for varying periods of time. Changes in trend represent the shifts in demand and supply which are predictable. The present trends are the offshoot of the past and history repeats itself according to this school.

According to the efficient market hypothesis, in a fairly large security market where competitive conditions prevail, market prices are good proxies for intrinsic values. Security prices are determined after absorbing all the information available to market participants. A share is thus generally worth whatever it is selling for in the market.

Generally, the fundamental school is the basis for security valuation, and many models based on its tenets are in use.

(iv) Replacement Value:

When the company is liquidated and its assets are to be replaced by new ones at higher prices, the replacement value of a share will differ from the breakdown value. Some analysts compare this replacement value with the market price.

Factors Influencing Security Valuation:

Security price depends on a host of factors like earnings per share, prospects of expansion, future earnings potential, possible issue of bonus or rights shares, etc. Some demand for a particular stock may arise from the pleasure of power as a shareholder, or from prestige and control over management. Satisfaction and pleasure in the non-monetary sense cannot be measured in any practical, quantifiable way. Many psychological and emotional factors influence the demand for a share.

In money terms, the return to a security on which its value depends consists of two components:

(i) Regular dividends or interest, and

(ii) Capital gains or losses in the form of changes in the capital value of the asset.

If the risk is high, the return should also be high. Risk here refers to the uncertainty of receiving principal and interest or dividend, and to the variability of this return.

The above returns are in terms of money received over a period of years. But Re. 1 received today is not the same as Re. 1 received a year or two hence. Money has time value, which suggests that earlier receipts are more desirable and valuable than later receipts. One reason for this is that earlier receipts can be reinvested to generate further receipts. The principle operating here is compound interest.

Thus, if Vn is the terminal value at period n, P is the initial value, g is the rate of compounding or return, and n is the number of compounding periods, then Vn = P(1 + g)^n.

If we reverse the process, the present value (P) can be thought of as reversing the compounding of values. This is discounting of the future values to the present day, represented by the formula-

P = Vn / (1 + g)^n
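Both formulas translate directly into code; a minimal sketch, with the figures in the usage lines invented for illustration:

```python
# Compounding and discounting, as in the formulas above.
def terminal_value(p, g, n):
    """Vn = P(1 + g)^n: value of P compounded at rate g for n periods."""
    return p * (1 + g) ** n

def present_value(v_n, g, n):
    """P = Vn / (1 + g)^n: a future amount discounted back to today."""
    return v_n / (1 + g) ** n

print(round(terminal_value(100, 0.10, 2), 2))  # 121.0
print(round(present_value(121, 0.10, 2), 2))   # 100.0
```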

Graham’s Approach to Valuation of Equity:

In their book Security Analysis (1934), Benjamin Graham and David Dodd argued that future earnings power is the most important determinant of the value of a stock. The original approach to identifying an undervalued stock is to find the present value of forecasted dividends; if the current market price is lower, the stock is undervalued. Alternatively, the analyst could determine the discount rate that makes the present value of the forecasted dividends equal to the current market price of the stock. If that rate (the I.R.R., or discount rate) is more than the required rate for stocks of similar risk, the stock is underpriced.

Graham and Dodd had argued (in their original book) that each dollar of dividends is worth four times as much as one dollar of retained earnings, but subsequent studies of the data showed no justification for this. Graham and Rea devised a set of yes/no questions on rewards and risks for financial analysts to answer; on the basis of these ready-made questions, they sought to locate undervalued stocks to buy and overvalued stocks to sell.

Such ready-made formulas or questionnaires are now out of favour, because various empirical studies have shown that earnings models are as good as or better than dividend models, that a number of factors must be studied for common stock valuation, and that no unique formula or answer is justifiable.

Securities Valuation in India:

In India, the valuation of securities used to be done by the Controller of Capital Issues (CCI) for the purpose of fixing the premium on new issues of existing companies. The guidelines used by the CCI were applicable up to May 1992, when the CCI was abolished. Although the prevailing market price was taken into account, a more rational price used to be worked out by the CCI on certain criteria.

Thus, the CCI used the concept of Net Asset Value (NAV) and Profit-Earning Capacity Value (PECV) as the basis for fixing up the premium on shares. The NAV is calculated by dividing the net worth by the number of equity shares. The net worth includes equity capital plus free reserves and surplus less contingent liabilities.

Present Value of Preference Shares

Preference shares give a fixed rate of dividend but usually have no maturity date. They are generally perpetuities, though they sometimes do have maturity dates.

The value of a preference share as a perpetuity is calculated thus:

V = D/i

V = Value of Preference Share

D = Annual Dividend per Preference Share

i = Discount Rate on Preference Shares
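For instance, a preference share paying an annual dividend of Rs. 8, discounted at 10%, is worth Rs. 80. A one-line sketch, with the figures invented for illustration:

```python
# Perpetuity valuation V = D / i; the dividend and rate are made up.
def preference_share_value(d, i):
    return d / i

print(round(preference_share_value(8.0, 0.10), 2))  # 80.0, i.e. Rs. 80 per share
```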

Present Value of Equity Shares

Method # 1. Based on Balance Sheet:

i. Book Value:

It is the net worth of a company divided by the number of outstanding shares. Net worth is equal to paid-up equity capital plus reserves and surplus, minus losses.

ii. Liquidation Value:

Liquidation value differs from book value in that it uses the value of the assets at liquidation, which is often less than market value and sometimes less than book value. Liabilities are deducted from the liquidation value of the assets to determine the liquidation value of the business. Liquidation value can be used as a bare-bottom benchmark value of a business.

iii. Replacement Cost:

Replacement costs provide an alternative way of valuing a company’s assets. The replacement, or current, cost of an asset is the amount of money required to replace the asset by purchasing a similar asset with identical future service capabilities. In replacement cost, assets and liabilities are valued at their cost to replace.

Method # 2. Based on Dividends:

i. One Year Holding Period:

As per this model, the investor intends to purchase the share now, hold it for one year, and sell it at the end of the year. Thus, the investor receives one year's dividend as well as the share price at the end of year one.

To value a stock, we have to first find the present discounted value of the expected cash flows:

Po = (D1 + P1) / (1 + ke)

Where,

Po = the current price of the stock

D1 = the dividend paid at the end of year 1

ke = required return on equity investments (Discounting factor)

P1 = the price at the end of period one

Let ke = 12%, D1 = Rs. 0.16 and P1 = Rs. 60. Then:

Po = (0.16 + 60) / 1.12 = Rs. 53.71

If the stock is selling for Rs. 53.71 or less, the share should be purchased.
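A minimal sketch of the same calculation in Python, using the figures from the example:

```python
# One-year holding period model: Po = (D1 + P1) / (1 + ke).
def one_year_value(d1, p1, ke):
    return (d1 + p1) / (1 + ke)

print(round(one_year_value(0.16, 60.0, 0.12), 2))  # 53.71
```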

ii. Multiple Year Holding Period:

As per this model, an investor may hold shares for a number of years and sell them at the end of the holding period. Thus, he receives dividends for these periods as well as the market price of the share at the end:

Po = D1/(1 + ke) + D2/(1 + ke)^2 + … + Dn/(1 + ke)^n + Pn/(1 + ke)^n

Where,

Po = the current price of the stock

D1, D2, D3, …, Dn = annual dividends paid at the end of years 1, 2, 3, …, n

ke = required return on equity investments (Discounting factor)

Pn = the price at the end of period n

For example:

If an investor expects to get Rs. 3.5, Rs. 4 and Rs. 4.50 as dividends from a share during the next three years and hopes to sell it off at Rs. 75 at the end of the third year, and if the required rate of return is 15%, the present value of the share will be:

Po = 3.5/1.15 + 4/(1.15)^2 + (4.50 + 75)/(1.15)^3 = Rs. 58.34 (approx.)
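The same arithmetic expressed as a short Python sketch:

```python
# Multiple-year holding period model: discount each dividend and the
# terminal price back to the present at the required rate ke.
def multi_year_value(dividends, pn, ke):
    n = len(dividends)
    pv_dividends = sum(d / (1 + ke) ** t for t, d in enumerate(dividends, start=1))
    return pv_dividends + pn / (1 + ke) ** n

print(round(multi_year_value([3.5, 4.0, 4.5], 75.0, 0.15), 2))  # 58.34
```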

iii. Constant Growth Model (Gordon’s Share Valuation Model):

As per this model, dividends grow at a constant rate (g) into the indefinite future, and the discount rate (k) is greater than the growth rate:

Po = D1/(k - g) = Do(1 + g)/(k - g)

Where,

k = discount rate

g = growth rate

Do = current dividend

D1 = Dividend at end of year one

For example:

Alembic Company has declared a dividend of Rs. 2.5 per share for the current year. The company has been following a policy of enhancing its dividends by 10% every year and is expected to continue this policy in future. The investor's required rate of return is 15%. The value of the share will be:

V = 2.5(1.10)/(0.15 - 0.10) = 2.75/0.05 = Rs. 55
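A sketch of the Gordon model with the Alembic figures:

```python
# Gordon constant-growth model: V = Do(1 + g) / (k - g), valid for k > g.
def gordon_value(d0, g, k):
    assert k > g, "the model requires the discount rate to exceed the growth rate"
    return d0 * (1 + g) / (k - g)

print(round(gordon_value(2.5, 0.10, 0.15), 2))  # 55.0, i.e. Rs. 55 per share
```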

iv. Multiple Growth Model (Also called as the Two Stage Growth Model):

The constant growth model rests on the unrealistic assumption of constant growth; in practice, growth may occur at varying rates. In the multiple growth model, the future is divided into two growth segments: an initial extraordinary growth period of n years, followed by constant growth at rate g:

Po = D1/(1 + ke) + D2/(1 + ke)^2 + … + Dn/(1 + ke)^n + [Dn(1 + g)/(ke - g)]/(1 + ke)^n

Where,

Po = the current price of the stock

D1, D2, D3, …, Dn = annual dividends paid at the end of years 1, 2, 3, …, n

ke = required return on equity investments (Discounting factor)

g = constant growth rate of dividends at the start of the second stage

For example:

Hindalco paid a dividend of Rs. 1.75 per share during the current year. It is expected to pay a dividend of Rs. 2 per share during the next year. Investors forecast dividends of Rs. 3 and Rs. 3.5 per share respectively during the two subsequent years. After that, annual dividends are expected to grow at 10% per year into the indefinite future. The investor's required rate of return is 20%. The value of the share works out as:

Po = 2/1.20 + 3/(1.20)^2 + 3.5/(1.20)^3 + [3.5(1.10)/(0.20 - 0.10)]/(1.20)^3 ≈ Rs. 28.06
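A sketch that carries out the two-stage calculation:

```python
# Two-stage growth model: explicit dividends for the first n years,
# then constant growth at rate g valued as a Gordon terminal price.
def two_stage_value(dividends, g, ke):
    n = len(dividends)
    pv_dividends = sum(d / (1 + ke) ** t for t, d in enumerate(dividends, start=1))
    terminal_price = dividends[-1] * (1 + g) / (ke - g)  # price at end of year n
    return pv_dividends + terminal_price / (1 + ke) ** n

print(round(two_stage_value([2.0, 3.0, 3.5], 0.10, 0.20), 2))  # 28.06
```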

Method # 3. Other Approaches:

i. Price to Book Value Ratio:

The book value of a company is the value of the net assets expressed in the balance sheet. Net assets means total assets minus intangible assets and liabilities. This ratio gives the investor an idea of how much he is actually paying for the share.

ii. Earnings Multiplier Approach:

Under this approach, the value of equity share is estimated as follows:

Po = EPS × P/E ratio.

Where,

EPS = Earning Per share

P/E ratio = Price Earning Ratio

P/E ratio = Market price per share / earnings per share

iii. Price to Sales Ratio:

It is calculated by dividing a company's current stock price by its revenue per share for the most recent twelve months. This ratio reflects what the market is willing to pay per rupee of sales.

iv. Market Value Method:

This method is used only in the case of listed companies, since only they have a market value.

Market value of a company = No. of shares outstanding × market price per share
