Ind AS 105: Non-current Assets Held for Sale and Discontinued Operations

(A) Non-Current assets held for sale:

i) They are presented separately from other assets in the Balance Sheet;

ii) their classification will change; and

iii) their value will be recovered principally through a sale transaction rather than through continuing use in the operations of the entity.

(B) Results of Discontinuing Operations should be separately presented in the Statement of Profit and loss as it affects the ability of the entity to generate future cash flows.

The Ind AS requires:

  • Assets that meet the criteria to be classified as held for sale to be measured at the lower of carrying amount and fair value less costs to sell, and depreciation on such assets to cease.
  • Assets that meet the criteria to be classified as held for sale to be presented separately in the balance sheet.
  • The results of discontinued operations to be presented separately in the statement of profit and loss.

An entity shall classify a non-current asset (or disposal group) as held for sale if its carrying amount will be recovered principally through a sale transaction rather than through continuing use.

For this to be the case, the asset (or disposal group) must be available for immediate sale in its present condition, subject only to terms that are usual and customary for sales of such assets (or disposal groups), and its sale must be highly probable. Thus, an asset (or disposal group) cannot be classified as a non-current asset (or disposal group) held for sale if the entity intends to sell it in the distant future.

For the sale to be highly probable, the appropriate level of management must be committed to a plan to sell the asset (or disposal group), and an active programme to locate a buyer and complete the plan must have been initiated. Further, the asset (or disposal group) must be actively marketed for sale at a price that   is reasonable in   relation to   its current fair value.

In addition, the sale should be expected to qualify for recognition as a completed sale within one year from the date of classification, except as permitted by paragraph 9 of the Standard, and actions required to  complete the plan should indicate that it is unlikely that significant changes to the plan will be made or that the plan will be withdrawn.

A discontinued operation is a component of an entity that either has been disposed of or is classified as held for sale and:

  • Represents a separate major line of business or geographical area of operations;
  • Is part of a single co-ordinated plan to dispose of a separate major line of business or geographical area of operations; or
  • Is a subsidiary acquired exclusively with a view to resale.

A component of an entity comprises operations and cash flows that can be clearly distinguished, operationally and for financial reporting purposes, from the rest of the entity. In other words, a component of an entity will have been a cash-generating unit or a group of cash-generating units while being held for use.

An entity shall not classify as held for sale a non-current asset (or disposal group) that is to be abandoned.

Exception to the period of one year

  • There must be sufficient evidence that the entity remains committed to its plan to sell.
  • The delay must have been caused by events or circumstances beyond the control of the entity.

Measurement:

  • Depreciation and amortization cease from the moment the asset is classified as held for sale.
  • An entity should measure a non-current asset (or disposal group) classified as held for sale at the lower of its carrying amount and fair value less costs to sell.
  • Interest and other expenses attributable to the liabilities of a disposal group classified as held for sale shall continue to be recognised.
  • A non-current asset (or disposal group) classified as held for distribution is measured on the same basis as a non-current asset (or disposal group) classified as held for sale.
  • When the sale is expected to occur beyond one year, the entity should measure the costs to sell at their present value. Any increase in the present value of the costs to sell that arises from the passage of time shall be presented in profit or loss as a financing cost.
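The measurement rule above can be sketched numerically. This is a minimal illustration with made-up figures (the function names and amounts are not from the Standard):

```python
# Held-for-sale measurement: lower of carrying amount and
# fair value less costs to sell (illustrative figures only).

def held_for_sale_measure(carrying_amount, fair_value, costs_to_sell):
    """Return the held-for-sale carrying value and any write-down."""
    fv_less_costs = fair_value - costs_to_sell
    measure = min(carrying_amount, fv_less_costs)
    write_down = carrying_amount - measure  # impairment loss, if any
    return measure, write_down

# Carrying amount 1,000; fair value 950; costs to sell 30:
measure, write_down = held_for_sale_measure(1000, 950, 30)
# measure = 920, write_down (impairment) = 80

# Where the sale is expected beyond one year, costs to sell are
# discounted to present value; the unwinding of the discount is
# presented in profit or loss as a financing cost.
def pv_costs_to_sell(costs_to_sell, rate, years):
    return costs_to_sell / (1 + rate) ** years

pv = pv_costs_to_sell(30, 0.10, 2)  # about 24.79 at a 10% rate
```

The discount rate and time horizon here are assumptions chosen for illustration; in practice they follow the expected timing of the sale.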

Recognition Of Impairment Losses and Reversals:

  • An entity should recognise an impairment loss for any initial or subsequent write-down of the asset (or disposal group) to fair value less costs to sell.
  • An entity should recognise a gain for any subsequent increase in fair value less costs to sell of an asset, but not in excess of the cumulative impairment loss that has been recognised either in accordance with this Ind AS or previously in accordance with Ind AS 36, Impairment of Assets.
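The cap on reversal gains can be shown in a short sketch (illustrative figures, not from the Standard):

```python
def reversal_gain(increase_in_fv_less_costs, cumulative_impairment):
    """Gain for a subsequent increase in fair value less costs to sell,
    capped at the cumulative impairment loss previously recognised."""
    return min(increase_in_fv_less_costs, cumulative_impairment)

# Cumulative impairment previously recognised: 80.
# Fair value less costs to sell later rises by 100:
gain = reversal_gain(100, 80)  # only 80 may be recognised
# If the rise were only 50, the full 50 would be recognised:
smaller_gain = reversal_gain(50, 80)  # 50
```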

Changes to Plan of Sale

  1. If an entity has classified an asset (or disposal group) as held for sale, but the held-for-sale criteria are no longer met, the entity should cease to classify the asset (or disposal group) as held for sale.
  2. The entity shall measure a non-current asset that ceases to be classified as held for sale (or ceases to be included in a disposal group classified as held for sale) at the lower of: (a) its carrying amount before the asset (or disposal group) was classified as held for sale, adjusted for any depreciation, amortization or revaluations that would have been recognised had the asset (or disposal group) not been classified as held for sale; and (b) its recoverable amount at the date of the subsequent decision not to sell.
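The re-measurement on ceasing held-for-sale classification can be sketched as follows, using simplified straight-line catch-up depreciation and illustrative figures:

```python
def measure_on_reclassification(carrying_before, catchup_depreciation,
                                recoverable_amount):
    """Measurement when an asset ceases to be held for sale: the lower of
    (a) its carrying amount before classification, adjusted for the
        depreciation that would have been charged in the interim, and
    (b) its recoverable amount at the date of the decision not to sell."""
    adjusted_carrying = carrying_before - catchup_depreciation
    return min(adjusted_carrying, recoverable_amount)

# Carrying amount before classification 1,000; depreciation of 100
# would have been charged while held for sale; recoverable amount 880:
amount = measure_on_reclassification(1000, 100, 880)
# lower of 900 and 880 -> carried at 880
```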

Classification of Accounting Theory

At present, a single universally accepted accounting theory does not exist in accounting. Instead, different theories have been proposed and continue to be proposed in the accounting literature.

(a) “Accounting Structure” Theory:

‘Accounting structure’ theory, known by different names such as classical theory, descriptive theory or traditional theory, attempts to explain current accounting practices and to predict how accountants would react to certain situations or how they would report specific events.

This theory relates to the structure of the data collection process (accounting) and financial reporting. Thus, this theory is directly connected with accounting practices, i.e., what does exist or what accountants do.

The principal contributors to the accounting structure theory are identified chronologically as follows:

  • William A. Paton, Accounting Theory with Special Reference to Corporate Enterprise (1922).
  • Henry Rand Hatfield, Accounting: Its Principles and Problems (1927).
  • Henry W. Sweeney, Stabilized Accounting (1936).
  • Stephen Gilman, Accounting Concepts of Profit (1939).
  • W. A. Paton and A. C. Littleton, An Introduction to Corporate Accounting Standards (1940).
  • A. C. Littleton, Structure of Accounting Theory (1953).
  • Maurice Moonitz, The Basic Postulates of Accounting (1961).
  • Robert R. Sterling and Richard E. Flaherty, “The Role of Liquidity in Exchange Valuation,” Accounting Review (July 1971).
  • Robert R. Sterling, John O. Tollefson, and Richard E. Flaherty, “Exchange Valuation: An Empirical Test,” Accounting Review (Oct. 1972).
  • Yuji Ijiri, Theory of Accounting Measurement (1973).

This theory, basically concerned with observing the mechanical tasks which accountants traditionally perform, is based on the assumption that the objective of financial statements is associated with the stewardship concept of the management role and the necessity of providing the owners of businesses with information on the manner in which their assets (resources) have been managed.

In this view, company directors occupy a position of responsibility and trust in regard to shareholders, and the discharge of these obligations requires the publication of annual financial reports to shareholders. Ijiri explains traditional accounting practice; however, he does place emphasis on the historical cost system.

Sterling advises “to observe accountants’ actions and rationalise these actions by subsuming them under generalized principles.” Theories explaining traditional accounting practice are desirable to obtain greater insight into current accounting practices, permit a more precise evaluation of traditional theory and an evaluation of existing practices that do not correspond to traditional theory.

Such theories relating to the structure of accounting can be tested for internal logical consistency, or they can be tested to see whether or not they actually can predict what accountants do.

Limitations:

(1) The ‘accounting structure’ theory concentrates on accounting practices and the behaviour of practising accountants. The accounting practice begins with observable occurrences (transactions), translates them into symbolic form (money values) and makes them inputs (e.g., sales, costs) into the formal accounting system where they are manipulated into outputs (financial statements).

Accounting practices followed in this way may not reflect the real business situation and real world phenomena. The traditional theory is not concerned with judging the usefulness of the output of accounting practice, but concentrates upon judging the means of manipulation of input into output.

(2) Inconsistencies in traditional theory have given rise to alternative accepted principles and procedures which give significantly divergent reported results. Accrual accounting results in allocations which provide a variety of alternative accounting methods for each major event e.g., LIFO and FIFO valuations of stock and different accountants may prefer different methods depending upon how they are affected. Moreover, the traditional approach is inconsistent with theories developed in related disciplines. For example, the historical cost concept of valuation is externally inconsistent with current value concepts.

Finally, good theory should provide for research to assist advances in knowledge. The conventional approach tends to inhibit change, and by concentrating upon generally accepted accounting principles makes the relationship between theory and practice a circular one.

(b) “Interpretational” Theory:

Truly speaking, ‘accounting structure’ and ‘interpretational’ theories are part of the classical accounting theory (model). The principal writers under ‘accounting structure’ such as Hatfield, Littleton, Paton and Littleton, Sterling and Ijiri are mainly positivist, inductive writers, concerned with traditional accounting practice in terms of historical cost system, with some deviations such as the lower of cost or market.

Accounting practices under accounting structure theory are the result of recording business events as they take place. Such practices involve little application of judgement and little regard for consequences. Interpretational theory attempts to give some meaning to accounting practice.

The theory based on “accounting structure” alone, although logically formulated, does not provide meaningful interpretation of accounting practices or analysis of accounting activities.

Interpretational theory emphasises on giving interpretations and meaning as accounting practices are followed. This theory provides a suitable basis for evaluating accounting practices, resolving accounting issues and making accounting propositions.

The principal writers in interpretational theory are the following:

  • John B. Canning, The Economics of Accountancy (1929).
  • Sidney S. Alexander, Income Measurement in a Dynamic Economy (1950).
  • Edgar O. Edwards and Philip W. Bell, The Theory and Measurement of Business Income (1961).
  • Robert T. Sprouse and Maurice Moonitz, A Tentative Set of Broad Accounting Principles for Business Enterprises (1962).

The above writers in interpretational theory are more analysts and explicators than advocates and preachers. They analyse and assess what accountants do and seek to do; they undertake to explain these phenomena to accountants and help them understand the implications of using accounting concepts in the real business situation. For example, Sprouse and Moonitz suggest that asset valuations should be made in terms of their future services.

In “accounting structure” theory, accounting concepts are un-interpreted and do not reflect any meaning except actual data resulting from following specific accounting procedures. Asset valuations, for example, are the result of following a specific method of inventory valuation and depreciation.

Similarly, specific rules are followed for the measurement of revenues and expenses. Interpretational theory gives meaningful interpretations to these concepts and rules and evaluates alternative accounting procedures in terms of these interpretations and meanings. For example, it can be said that FIFO is the most appropriate method if the objective is to measure the current value of inventories.

In this case, selection of FIFO in interpretational theory is made with a view to suggest specific result and interpretation. It is argued that empirical enquiry should be made to determine whether information users attach the same interpretations and meanings which are intended by producers of information.

Items of information vary as to degree of interpretation; some items by nature reflect a higher degree of interpretation and some items are subject to many interpretations. For example, the item cash in the balance sheet is fairly well understood by users to mean what preparers intend it to mean.

On the contrary, the items like deferred expenses and goodwill may not reflect any specific interpretation. The role of interpretational theories is to build a correspondence between the interpretations of producers and users as to accounting information.

This theory attempts to find ways to improve the meaning and interpretations of accounting information in terms of experiences about human behaviour and information processing capacity.

‘Accounting structure’ and interpretational theories both are known as classical accounting models. The writers (mentioned above) under both the theories are, in every sense, reformers. Interpretational theorists differ from ‘accounting structure’ theorists more in degree than in kind; the former are motivated less by missionary zeal than by a desire to analyse, criticize, and suggest, and are primarily deductivists.

Many of the prominent interpretational theorists advocate current cost or values. It is said that interpretational theorists may have observed the behaviour of investors and other economic decision makers and concluded with a validated hypothesis that such decisions-makers seek current value, not historical cost, information.

In spite of the difference in emphasis of ‘traditional’ and ‘interpretational’ theorists, broadly, both are concerned with designing financial reports that communicate relevant information to users of accounting information.

(c) “Decision-Usefulness” Theory:

The decision-usefulness theory emphasises the relevance of the information communicated to decision making and on the individual and group behaviour caused by the communication of information.

Accounting is assumed to be action-oriented: its purpose is to influence action, that is, behaviour, directly through the informational content of the message conveyed and indirectly through the behaviour of preparers of accounting reports.

The focus is on the relevance of information being communicated to decision-makers and the behaviour of different individuals or groups as a result of the presentation of accounting information. The most important users of accounting reports presented to those outside the firm are generally considered to include investors, creditors, customers, and government authorities.

However, decision usefulness can also take into consideration the effect of external reports on the decisions of management and the feedback effect on the actions of accountants and auditors. Since accounting is considered to be a behavioural process, this theory applies behavioural science to accounting.

Due to this, decision-usefulness theory is sometimes also referred to as behavioural theory. In the broader perspective, decision-usefulness studies analyse the behaviour of users of information. A behavioural theory attempts to measure and evaluate the economic, psychological and sociological effects of alternative accounting procedures and modes of financial reporting.

In adopting the decision usefulness theory or approach, two major aspects or questions must be addressed.

First, who are the users of financial statements? Obviously, there are many users. It is helpful to categorize them into broad groups, such as investors, lenders, managers, employees, customers, governments, regulatory authorities, suppliers etc. These groups are called constituencies of accounting.

Second, what are the decision models or problems of financial statement users? By understanding these decision models, preparers will be in a better position to meet the information needs of the various constituencies. Financial statements can then be prepared with these information needs in mind; in this way financial statements will lead to improved decision making and be more useful.

Role of Accounting theory

Accounting theory may also be used to explain existing practices to obtain a better understanding of them. But the most important goal of accounting theory should be to provide a coherent set of logical principles that form the general frame of reference for the evaluation and development of sound accounting practices.

Accounting theory is that branch of accounting which consists of the systematic statement of principles and methodology. However, theory cannot be divorced from practice. The theory underlies practices, explains and attempts to predict them. There is not and cannot be any basic contradiction between theory and facts.

A theory is an explanation. However, not every explanation is a theory in the scientific meaning of the word. The objective of accounting theory is to explain and predict accounting practice. Explanation provides reasons for observed practice. For example, an accounting theory should explain why certain firms use the LIFO method of inventory valuation rather than the FIFO method.

Prediction of accounting practices means that the theory can also predict unobserved accounting phenomena. Unobserved phenomena are not necessarily future phenomena; they include phenomena that have occurred but on which systematic evidence has not been collected.

It is significant to observe that accounting theory may be based on empirical evidence and practices, or it may be formulated using hypothetical and speculative interpretations.

(1) Accounting theory has a great amount of influence on accounting and reporting practices and thus serves the informational requirements of the external users.

In fact, accounting theory provides a framework for:

(i) Evaluating current financial accounting practice and

(ii) Developing new practice.

(2) Accounting theory literature is useful to accounting policy makers who are interested in making accounting information useful. Research findings, empirical evidence and investigations can be used and incorporated by policy makers in formulating accounting policies. Theories are helpful as they apprise policy makers of the underlying issues and clarify the trade-offs implicit in various theory approaches.

External audit requirements

An external auditor performs an audit, in accordance with specific laws or rules, of the financial statements of a company, government entity, other legal entity, or organization, and is independent of the entity being audited. Users of these entities’ financial information, such as investors, government agencies, and the general public, rely on the external auditor to present an unbiased and independent audit report.

The manner of appointment, the qualifications, and the format of reporting by an external auditor are defined by statute, which varies according to jurisdiction. External auditors must be members of one of the recognised professional accountancy bodies. External auditors normally address their reports to the shareholders of a corporation. In the United States, certified public accountants are the only authorized non-governmental external auditors who may perform audits and attestations on an entity’s financial statements and provide reports on such audits for public review. In the UK, Canada and other Commonwealth nations Chartered Accountants and Certified General Accountants have served in that role.

For public companies listed on stock exchanges in the United States, the Sarbanes-Oxley Act (SOX) has imposed stringent requirements on external auditors in their evaluation of internal controls and financial reporting. In many countries external auditors of nationalized commercial entities are appointed by an independent government body such as the Comptroller and Auditor General. Securities and Exchange Commissions may also impose specific requirements and roles on external auditors, including strict rules to establish independence.

The objectives of an external audit are to determine:

  • Whether the client’s accounting records have been prepared in accordance with the applicable accounting framework.
  • The accuracy and completeness of the client’s accounting records.
  • Whether the client’s financial statements present fairly its results and financial position.

Difference from internal auditor

Internal auditors who are members of a professional organization would be subject to the same code of ethics and professional code of conduct as applicable to external auditors. They differ, however, primarily in their relationship to the entities they audit. Internal auditors, though generally independent of the activities they audit, are part of the organization they audit, and report to management. Typically, internal auditors are employees of the entity, though in some cases the function may be outsourced. The internal auditor’s primary responsibility is appraising an entity’s risk management strategy and practices, management (including IT) control frameworks and governance processes. They are also responsible for the internal control procedures of an organization and the prevention of fraud.

If an external auditor detects fraud, it is their responsibility to bring it to the management’s attention and consider withdrawing from the engagement if management does not take appropriate actions. Normally, external auditors review the entity’s information technology control procedures when assessing its overall internal controls. They must also investigate any material issues raised by inquiries from professional or regulatory authorities, such as the local taxing authority.

Business continuity planning

Business continuity may be defined as “the capability of an organization to continue the delivery of products or services at pre-defined acceptable levels following a disruptive incident”, and business continuity planning (or business continuity and resiliency planning) is the process of creating systems of prevention and recovery to deal with potential threats to a company. In addition to prevention, the goal is to enable ongoing operations before and during execution of disaster recovery. Business continuity is the intended outcome of proper execution of both business continuity planning and disaster recovery.

Business continuity planning is the process involved in creating a system of prevention and recovery from potential threats to a company. The plan ensures that personnel and assets are protected and are able to function quickly in the event of a disaster.

Several business continuity standards have been published by various standards bodies to assist in check listing ongoing planning tasks.

An organization’s resistance to failure is “the ability to withstand changes in its environment and still function”. Often called resilience, it is a capability that enables an organization either to endure environmental changes without having to permanently adapt, or to adapt to a new way of working that better suits the new environmental conditions.

Key features of an effective business continuity plan:

  • Organization: Objects that are related to the structure, skills, communications and responsibilities of its employees.
  • Strategy: Objects that are related to the strategies used by the business to complete day-to day activities while ensuring continuous operations.
  • Applications and data: Objects that are related to the software necessary to enable business operations, as well as the method to provide high availability that is used to implement that software.
  • Technology: Objects that are related to the systems, network and industry-specific technology necessary to enable continuous operations and backups for applications and data.
  • Processes: Objects that are related to the critical business process necessary to run the business, as well as the IT processes used to ensure smooth operations.
  • Facilities: Objects that are related to providing a disaster recovery site if the primary site is destroyed.

Planners must have information about:

  • Equipment
  • Supplies and suppliers
  • Locations, including other offices and backup/work area recovery (WAR) sites
  • Documents and documentation, including which have off-site backup copies:
      • Business documents
      • Procedure documentation

Tiers of preparedness

SHARE’s seven tiers of disaster recovery:

Tier 0: No off-site data; Businesses with a Tier 0 Disaster Recovery solution have no Disaster Recovery Plan. There is no saved information, no documentation, no backup hardware, and no contingency plan. Typical recovery time: The length of recovery time in this instance is unpredictable. In fact, it may not be possible to recover at all.

Tier 1: Data backup with no Hot Site; Businesses that use Tier 1 Disaster Recovery solutions back up their data at an off-site facility. Depending on how often backups are made, they are prepared to accept several days to weeks of data loss, but their backups are secure off-site. However, this Tier lacks the systems on which to restore data. This approach is known as the Pickup Truck Access Method (PTAM).

Tier 2: Data backup with Hot Site; Tier 2 Disaster Recovery solutions make regular backups on tape. This is combined with an off-site facility and infrastructure (known as a hot site) in which to restore systems from those tapes in the event of a disaster. This tier solution will still result in the need to recreate several hours’ to days’ worth of data, but the recovery time is less unpredictable. Examples include: PTAM with Hot Site available, IBM Tivoli Storage Manager.

Tier 3: Electronic vaulting; Tier 3 solutions utilize components of Tier 2. Additionally, some mission-critical data is electronically vaulted. This electronically vaulted data is typically more current than that which is shipped via PTAM. As a result there is less data recreation or loss after a disaster occurs.

Tier 4: Point-in-time copies; Tier 4 solutions are used by businesses that require both greater data currency and faster recovery than users of lower tiers. Rather than relying largely on shipping tape, as is common in the lower tiers, Tier 4 solutions begin to incorporate more disk-based solutions. Several hours of data loss is still possible, but it is easier to make such point-in-time (PIT) copies with greater frequency than data can be replicated through tape-based solutions.

Tier 5: Transaction integrity; Tier 5 solutions are used by businesses with a requirement for consistency of data between production and recovery data centers. There is little to no data loss in such solutions; however, the presence of this functionality is entirely dependent on the application in use.

Tier 6: Zero or little data loss; Tier 6 Disaster Recovery solutions maintain the highest levels of data currency. They are used by businesses with little or no tolerance for data loss and who need to restore data to applications rapidly. These solutions have no dependence on the applications to provide data consistency.

Tier 7: Highly automated, business-integrated solution; Tier 7 solutions include all the major components of a Tier 6 solution with the additional integration of automation. This allows a Tier 7 solution to ensure consistency of data above that granted by Tier 6 solutions. Additionally, recovery of the applications is automated, allowing for restoration of systems and applications much faster and more reliably than would be possible through manual Disaster Recovery procedures.

Developing a Business Continuity Plan

  • Business Impact Analysis: Here, the business will identify functions and related resources that are time-sensitive.
  • Recovery: In this portion, the business must identify and implement steps to recover critical business functions.
  • Organization: A continuity team must be created. This team will devise a plan to manage the disruption.
  • Training: The continuity team must be trained and tested. Members of the team should also complete exercises that go over the plan and strategies.

Governance, Risk & Compliance

Governance, risk management and compliance (GRC) is the term covering an organization’s approach across these three practices: governance, risk management, and compliance. The first scholarly research on GRC was published in 2007 where GRC was formally defined as “the integrated collection of capabilities that enable an organization to reliably achieve objectives, address uncertainty and act with integrity.” The research referred to common “keep the company on track” activities conducted in departments such as internal audit, compliance, risk, legal, finance, IT, HR as well as the lines of business, executive suite and the board itself.

GRC

Governance describes the overall management approach through which senior executives direct and control the entire organization, using a combination of management information and hierarchical management control structures. Governance activities ensure that critical management information reaching the executive team is sufficiently complete, accurate and timely to enable appropriate management decision making, and provide the control mechanisms to ensure that strategies, directions and instructions from management are carried out systematically and effectively.

Obligational awareness refers to the ability of the organisation to make itself aware of all of its mandatory and voluntary obligations, namely relevant laws, regulatory requirements, industry codes and organizational standards, as well as standards of good governance, generally accepted best practices, ethics and community expectations. These obligations may be financial, strategic or operational where operational includes such diverse areas as property safety, product safety, food safety, workplace health and safety, asset maintenance, etc.

Risk management is the set of processes through which management identifies, analyzes, and, where necessary, responds appropriately to risks that might adversely affect realization of the organization’s business objectives. The response to risks typically depends on their perceived gravity, and involves controlling, avoiding, accepting or transferring them to a third party. Organizations routinely manage a wide range of risks (e.g. technological risks, commercial/financial risks, information security risks, etc.).

Compliance means conforming with stated requirements. At an organizational level, it is achieved through management processes which identify the applicable requirements (defined for example in laws, regulations, contracts, strategies and policies), assess the state of compliance, assess the risks and potential costs of non-compliance against the projected expenses to achieve compliance, and hence prioritize, fund and initiate any corrective actions deemed necessary. Compliance administration refers to the administrative exercise of keeping all the compliance documents up to date, maintaining the currency of the risk controls and producing the compliance reports.

Benefits of GRC

  • More optimal IT investments
  • Improved decision-making
  • Elimination of silos
  • Reduced fragmentation among divisions and departments

The Capability Model is made up of four components:

LEARN about the organization context, culture and key stakeholders to inform objectives, strategy and actions.

ALIGN strategy with objectives, and actions with strategy, by using effective decision-making that addresses values, opportunities, threats and requirements.

PERFORM actions that promote and reward things that are desirable, prevent and remediate things that are undesirable, and detect when something happens as soon as possible.

REVIEW the design and operating effectiveness of the strategy and actions, as well as the ongoing appropriateness of objectives to improve the organization.

These components outline an iterative continuous improvement process to achieve principled performance and are further decomposed into elements which are then supported by practices, actions and controls. The actions and controls are classified into three types, from which organizations can select a mix depending on their context:

  • Proactive
  • Detective
  • Responsive
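One way to use this classification is to check that each objective is covered by a mix of the three control types rather than by detective controls alone. A small sketch, where the objectives and control names are hypothetical:

```python
from collections import defaultdict

# Hypothetical control register: (objective, control type, control name).
controls = [
    ("data-privacy", "proactive",  "access-control policy"),
    ("data-privacy", "detective",  "audit-log review"),
    ("data-privacy", "responsive", "breach-response plan"),
    ("financial-reporting", "detective", "monthly reconciliation"),
]

# Collect which control types cover each objective.
coverage = defaultdict(set)
for objective, ctype, _name in controls:
    coverage[objective].add(ctype)

# Flag objectives missing any of the three types.
for objective, types in sorted(coverage.items()):
    missing = {"proactive", "detective", "responsive"} - types
    if missing:
        print(f"{objective}: missing {sorted(missing)}")
```

Here the check would flag `financial-reporting` as lacking proactive and responsive controls, illustrating how the typology supports gap analysis.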

Best practice analysis

A best practice is a method or technique that has been generally accepted as superior to any alternatives because it produces results that are superior to those achieved by other means or because it has become a standard way of doing things, e.g., a standard way of complying with legal or ethical requirements.

Best practices are a set of guidelines, ethics, or ideas that represent the most efficient or prudent course of action in a given business situation.

Best practices may be established by authorities, such as regulators, self-regulatory organizations (SROs), or other governing bodies, or they may be internally decreed by a company’s management team.

Best practices are used to maintain quality as an alternative to mandatory legislated standards and can be based on self-assessment or benchmarking. Best practice is a feature of accredited management standards such as ISO 9000 and ISO 14001.

Some consulting firms specialize in the area of best practice and offer ready-made templates to standardize business process documentation. Sometimes a best practice is not applicable or is inappropriate for a particular organization’s needs. A key strategic talent required when applying best practice to organizations is the ability to balance the unique qualities of an organization with the practices that it has in common with others.

Good operating practice is a strategic management term. More specific uses of the term include good agricultural practices, good manufacturing practice, good laboratory practice, good clinical practice and good distribution practice.

Best practices serve as a general framework for a variety of situations. For instance, in businesses that produce physical products, best practices that highlight efficient ways to complete tasks might be given to employees. Best practices lists may also outline safety procedures in order to minimize employee injuries.

For corporate accountants, the generally accepted accounting principles (GAAP) represent best practices. GAAP is a common set of accounting standards which aim to improve the clarity, consistency, and comparability of the communication of financial information.

GAAP facilitates the cross-comparison of financial information across different companies within the same sector. This benefits investors and the companies they invest in by promoting transparency.

Investment managers may follow best practices when handling a client’s money by prudently investing in a well-diversified portfolio and adhering to a client’s risk tolerances, time horizons, and retirement goals.

Business Process Improvement

Business process improvement (BPI) is an approach used to identify and evaluate inefficiencies within the organization. It redesigns existing business tasks to improve their effectiveness, enhance the workflows involved, and optimize performance.

Business processes are commonly grouped into three categories:

  • Operational: the most common tasks, repeated every day. Examples: opening accounts, reporting, manufacturing, logistics.
  • Management: processes focused on human resource development, budgeting and corporate governance.
  • Supporting: all other tasks not classified into the previous categories, like recruiting, accounting, tech support, and others.

Organizations undertake BPI for a number of reasons, including the need to:

  • Reduce the time required to get work done
  • Eliminate waste & friction in processes
  • Ensure better compliance with rules and regulations

BPI is a structured initiative that typically works in the following way:

  • Identify existing processes within your organization.
  • Analyze processes and identify areas of potential improvement.
  • Run various simulations about any changes you can apply to these processes and their effect on the business.
  • Focus on redesigning and reorganizing processes.
  • Assess and reassess the people behind those processes.
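A redesign step can often be evaluated with simple before/after arithmetic on cycle times. A minimal sketch, where the process, its steps and all durations are invented for illustration:

```python
# Hypothetical process: compare total cycle time before and after a
# redesign that removes a redundant approval step (figures invented).
current = {"intake": 10, "review": 30, "approval": 45, "fulfilment": 20}  # minutes
redesigned = {step: mins for step, mins in current.items() if step != "approval"}

before = sum(current.values())        # total cycle time before redesign
after = sum(redesigned.values())      # total cycle time after redesign
saving = 100 * (before - after) / before

print(f"cycle time {before} -> {after} min ({saving:.0f}% saved)")
```

Real BPI simulations would also model queueing, rework and resource contention, but the same compare-variants logic applies.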

Capacity management and analysis

Capacity management refers to the act of ensuring a business maximizes its potential activities and production output at all times, under all conditions. The capacity of a business measures how much companies can achieve, produce, or sell within a given time period.

Capacity management’s goal is to ensure that information technology resources are sufficient to meet upcoming business requirements cost-effectively. One common interpretation of capacity management is described in the ITIL framework. ITIL version 3 views capacity management as comprising three sub-processes: business capacity management, service capacity management, and component capacity management.

Since capacity can change due to changing conditions or external influences, including seasonal demand, industry changes, and unexpected macroeconomic events, companies must remain nimble enough to constantly meet expectations in a cost-effective manner. For example, raw material resources may need to be adjusted, depending on demand and the business’s current on-hand inventory.

Implementing capacity management may entail working overtime, outsourcing business operations, purchasing additional equipment, and leasing or selling commercial property.

Companies that poorly execute capacity management may experience diminished revenues due to unfulfilled orders, customer attrition, and decreased market share. As such, a company that rolls out an innovative new product with an aggressive marketing campaign must commensurately plan for a sudden spike in demand. The inability to replenish a retail partner’s inventory in a timely manner is bad for business.

Businesses thus face inherent challenges in their attempts to produce at capacity while minimizing production costs. For instance, a company may lack the requisite time and personnel needed to conduct adequate quality control inspections on its products or services. Furthermore, machinery might break down due to overuse and employees may suffer stress, fatigue, and diminished morale if pushed too hard.

Capacity management also means calculating the proportion of spatial capacity that is actually being used over a certain time period. Consider a company operating at maximum capacity that houses 500 employees across three floors of an office building. If that company downsizes by reducing the number of employees to 300, it will then be operating at 60% capacity (300 / 500 = 60%). But given that 40% of its office space is left unused, the firm’s per-unit occupancy cost is higher than before.

Consequently, the company might decide to allocate its labor resources to only two floors and cease leasing the unused floor in a proactive effort to reduce expenditures on rent, insurance, and utility costs associated with the empty space.
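The office-space example above can be worked through numerically; the rent figure below is an illustrative assumption, only the 300/500 headcounts come from the text:

```python
capacity = 500            # seats across three floors
headcount = 300           # employees after downsizing
annual_rent = 1_500_000   # assumed annual occupancy cost

utilization = headcount / capacity          # share of space in use
cost_per_head = annual_rent / headcount     # occupancy cost per employee

print(f"utilization: {utilization:.0%}")                    # 60%
print(f"occupancy cost per head: {cost_per_head:,.0f}")     # 5,000

# At full capacity the same rent would spread over 500 people:
print(f"cost per head at capacity: {annual_rent / capacity:,.0f}")  # 3,000
```

The gap between 5,000 and 3,000 per head is exactly the per-unit cost penalty of the unused 40% of space, which motivates giving up the empty floor.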

Capacity management is concerned with:

  • Monitoring the performance and throughput or load on a server, server farm, or property.
  • Performance analysis of measurement data, including analysis of the impact of new releases on capacity.
  • Performance tuning of activities to ensure the most efficient use of existing infrastructure.
  • Understanding the demands on the service and future plans for workload growth (or shrinkage).
  • Influences on demand for computing resources.
  • Capacity planning of storage, computer hardware, software and connection infrastructure resources required over some future period of time.

Factors affecting network performance

Not all networks are the same. As data is broken into component parts (often known as frames, packets, or segments) for transmission, several factors can affect their delivery.

  • Delay: It can take a long time for a packet to be delivered across intervening networks. In reliable protocols where a receiver acknowledges delivery of each chunk of data, it is possible to measure this as round-trip time.
  • Packet loss: In some cases, intermediate devices in a network will lose packets. This may be due to errors, to overloading of the intermediate network, or to the intentional discarding of traffic in order to enforce a particular service level.
  • Retransmission: When packets are lost in a reliable network, they are retransmitted. This incurs two delays: First, the delay from re-sending the data; and second, the delay resulting from waiting until the data is received in the correct order before forwarding it up the protocol stack.
  • Throughput: The amount of traffic a network can carry is measured as throughput, usually in terms such as kilobits per second. Throughput is analogous to the number of lanes on a highway, whereas latency is analogous to its speed limit.

Capacity Limitations

It is important to understand your capacity limitations so that you can identify areas of improvement and develop a capacity plan that is just right for your organization.

Many factors contribute and detract from the available capacity. These include the quality and quantity of labor, machine availability, waste levels, government regulations, required machine maintenance, and other external factors.

During the COVID-19 pandemic, physical distancing requirements reduced the total available capacity for many manufacturers, leading to decreased output. Some of these companies chose to add overtime capacity, while others chose to outsource some of their operations or even added automation to increase the output of their production lines.

Businesses that thrived during the pandemic had one thing in common: agility. These companies were able to quickly identify the effects of lost capacity and make the right choices to meet their business goals efficiently.

Capacity Analysis

The first step to take when you are constantly operating under constrained capacity is to identify the bottleneck.

A capacity bottleneck is a process or operation that has limited capacity and reduces the capacity of the entire production plant.

Bottlenecks cause delays in production, too much work-in-process items, and can be costly to the company. Identifying capacity bottlenecks can help identify the real cause of the problem and develop a plan to resolve it.
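For a serial production line, identifying the bottleneck reduces to finding the stage with the lowest capacity, since that stage caps the throughput of the whole line. A minimal sketch with invented stage names and rates:

```python
# Hypothetical serial line: capacity of each stage in units/hour.
stages = {"cutting": 120, "welding": 45, "painting": 80, "assembly": 60}

# The bottleneck is the stage with the lowest capacity...
bottleneck = min(stages, key=stages.get)

# ...and it caps the throughput of the entire line.
line_throughput = stages[bottleneck]

print(f"bottleneck: {bottleneck} ({line_throughput} units/hour)")
```

Here welding at 45 units/hour limits the plant, so buying a faster cutting machine would add cost without adding a single unit of output; improvement effort belongs at the welding stage.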

There are many ways to increase resource capacity within your facility:

  • Purchase another machine (best for inexpensive resources, if possible).
  • Perform regular maintenance on machines to increase their efficiency.
  • Hire another employee.
  • Re-allocate existing capacity to increase the capacity of the bottleneck operation.
  • Invest in employee training.
  • Optimize your production schedule to reduce sequence-dependent setups.

Theory of constraints

The theory of constraints (TOC) is a management paradigm that views any manageable system as being limited in achieving more of its goals by a very small number of constraints. There is always at least one constraint, and TOC uses a focusing process to identify the constraint and restructure the rest of the organization around it. TOC adopts the common idiom “a chain is no stronger than its weakest link”. That means that organizations and processes are vulnerable because the weakest person or part can always damage or break them, or at least adversely affect the outcome.

The Theory of Constraints provides a powerful set of tools for helping to achieve that goal, including:

  • The Five Focusing Steps: A methodology for identifying and eliminating constraints
  • The Thinking Processes: Tools for analyzing and resolving problems
  • Throughput Accounting: A method for measuring performance and guiding management decisions
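Throughput accounting rests on three standard measures: throughput T = sales − totally variable costs, net profit NP = T − operating expense, and return on investment ROI = NP / investment. A worked example with invented figures:

```python
# Illustrative figures (invented) for a throughput-accounting calculation.
sales = 1_000_000
totally_variable_costs = 400_000   # e.g. raw materials
operating_expense = 450_000        # everything else spent to run the system
investment = 750_000               # money tied up in the system

T = sales - totally_variable_costs   # throughput
NP = T - operating_expense           # net profit
ROI = NP / investment                # return on investment

print(f"T={T:,}  NP={NP:,}  ROI={ROI:.0%}")
```

Decisions are then judged by their effect on these three numbers: an action is attractive if it raises T, or lowers operating expense or investment, rather than by conventional per-unit cost allocations.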

The five focusing steps

Theory of constraints is based on the premise that the rate of goal achievement by a goal-oriented system (i.e., the system’s throughput) is limited by at least one constraint.

The argument by reductio ad absurdum is as follows: If there was nothing preventing a system from achieving higher throughput (i.e., more goal units in a unit of time), its throughput would be infinite which is impossible in a real-life system.

Only by increasing flow through the constraint can overall throughput be increased.

Assuming the goal of a system has been articulated and its measurements defined, the steps are:

  • Identify the system’s constraints.
  • Decide how to exploit the system’s constraints.
  • Subordinate everything else to the above decisions.
  • Elevate the system’s constraints.
  • Warning! If in the previous steps a constraint has been broken, go back to step 1, but do not allow inertia to cause a system constraint.
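The five focusing steps form an improvement loop: find the constraint, elevate it, then go back and find the new one. A deliberately naive sketch over an invented serial line, where "elevate" is simplified to adding capacity in fixed increments:

```python
# Hypothetical serial line: capacity of each stage in units/hour.
stages = {"prep": 50, "machining": 30, "finishing": 40}
target = 45   # desired line throughput (units/hour)

while min(stages.values()) < target:
    constraint = min(stages, key=stages.get)   # step 1: identify the constraint
    stages[constraint] += 10                   # step 4: elevate it (simplified)
    # Steps 2-3 (exploit, then subordinate everything else) would come
    # before spending on new capacity; step 5 is this loop itself:
    # re-identify the constraint and do not let inertia set in.

print(stages, "->", min(stages.values()), "units/hour")
```

Note how the constraint moves as the loop runs: machining is elevated first, after which finishing becomes the constraint, which is exactly why step 5 warns against inertia.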

Constraints

A constraint is anything that prevents the system from achieving its goal. There are many ways that constraints can show up, but a core principle within TOC is that there are not tens or hundreds of constraints. There is at least one, but at most only a few in any given system. Constraints can be internal or external to the system. An internal constraint is in evidence when the market demands more from the system than it can deliver. If this is the case, then the focus of the organization should be on discovering that constraint and following the five focusing steps to open it up (and potentially remove it). An external constraint exists when the system can produce more than the market will bear. If this is the case, then the organization should focus on mechanisms to create more demand for its products or services.

Types of (internal) constraints

  • People: Lack of skilled people limits the system. Mental models held by people can cause behaviour that becomes a constraint.
  • Equipment: The way equipment is currently used limits the ability of the system to produce more salable goods/services.
  • Policy: A written or unwritten policy prevents the system from making more.

Plant types

There are four primary types of plants in the TOC lexicon. Draw the flow of material from the bottom of a page to the top, and you get the four types. They specify the general flow of materials through a system, and also provide some hints about where to look for typical problems. This type of analysis is known as VATI analysis as it uses the bottom-up shapes of the letters V, A, T, and I to describe the types of plants. The four types can be combined in many ways in larger facilities, e.g. “an A plant feeding a V plant”.

  • V-plant: The general flow of material is one-to-many, such as a plant that takes one raw material and can make many final products. Classic examples are meat rendering plants or a steel manufacturer. The primary problem in V-plants is “robbing,” where one operation (A) immediately after a diverging point “steals” materials meant for the other operation (B). Once the material has been processed by A, it cannot come back and be run through B without significant rework.
  • A-plant: The general flow of material is many-to-one, such as in a plant where many sub-assemblies converge for a final assembly. The primary problem in A-plants is in synchronizing the converging lines so that each supplies the final assembly point at the right time.
  • T-plant: The general flow is that of an I-plant (or has multiple lines), which then splits into many assemblies (many-to-many). Most manufactured parts are used in multiple assemblies and nearly all assemblies use multiple parts. Customized devices, such as computers, are good examples. T-plants suffer from both synchronization problems of A-plants (parts aren’t all available for an assembly) and the robbing problems of V-plants (one assembly steals parts that could have been used in another).
  • I-plant: Material flows in a sequence, such as in an assembly line. The primary work is done in a straight sequence of events (one-to-one). The constraint is the slowest operation.

Applications

The focusing steps, this process of ongoing improvement, have been applied to manufacturing, project management, and supply chain/distribution, generating specific solutions. Other tools (mainly the “thinking processes”) have also led to TOC applications in the fields of marketing and sales, and finance.

A successful Theory of Constraints implementation will have the following benefits:

  • Fast Improvement: a result of focusing all attention on one critical area; the system constraint.
  • Increased Profit: the primary goal of TOC for most companies.
  • Improved Capacity: optimizing the constraint enables more product to be manufactured.
  • Reduced Inventory: eliminating bottlenecks means there will be less work-in-process.
  • Reduced Lead Times: optimizing the constraint results in smoother and faster product flow.