Types of reports, Footnotes and Bibliography

Report writing is a formal style of writing elaborately on a topic. The tone of a report is always formal. An important aspect to focus on is the target audience: for example, report writing about a school event, report writing about a business case, etc.

A report is a document that presents information in an organized format for a specific audience and purpose. Although summaries of reports may be delivered orally, complete reports are almost always in the form of written documents.

Reports are written with much analysis. The essential purpose of report writing is to inform the reader about a topic, minus one’s opinion on the topic: it is simply a portrayal of the facts as they are. Even when inferences are given, solid analysis, charts, tables and data are provided. Usually, the person who has asked for the report specifies whether or not they would like your own take on the topic.

Types of Reports

Short and Long Reports:

These kinds of reports are quite clear, as the name suggests. A two-page report, sometimes referred to as a memorandum, is short, and a thirty-page report is clearly long. But what makes a clear division between short and long reports? There is no strict rule, but longer reports are generally written in a more formal manner.

Functional Reports:

This classification includes accounting reports, marketing reports, financial reports, and a variety of other reports that take their designation from the ultimate use of the report. Almost all reports fall into one of these categories, and a single report may be included in several classifications.

Although authorities have not agreed on a universal report classification, these report categories are in common use and provide a nomenclature for the study (and use) of reports. Reports are also classified on the basis of their format. As you read the classification structure described below, bear in mind that it overlaps with the classification pattern described above.

  • Preprinted Form:

This format is used basically for “fill in the blank” reports. Most are relatively short (five or fewer pages) and deal with routine, mainly numerical, information. Use this format when it is requested by the person authorizing the report.

  • Letter:

Common for reports of five or fewer pages that are directed to outsiders. These reports include all the normal parts of a letter, but they may also have headings, footnotes, tables, and figures. Personal pronouns are used in this type of report.

  • Memo:

Common for short (fewer than ten pages) informal reports distributed within an organization. The memo format of “Date,” “To,” “From,” and “Subject” is used. Like longer reports, they often have internal headings and sometimes have visual aids. Memos exceeding ten pages are sometimes referred to as memo reports to distinguish them from shorter ones.

  • Manuscript:

Common for reports that run from a few pages to several hundred pages and require a formal approach. As their length increases, reports in manuscript format require more elements before and after the text of the report. Now that we have surveyed the different types of reports and become familiar with the nomenclature, let us move on to the actual process of writing the report.

Periodic Reports:

Periodic reports are issued on regularly scheduled dates. They are generally upward directed and serve management control. Preprinted forms and computer-generated data contribute to uniformity of periodic reports.

Internal or External Reports:

Internal reports travel within the organization. External reports, such as annual reports of companies, are prepared for distribution outside the organization.

Lateral or Vertical Reports:

This classification refers to the direction a report travels. Reports that move upward or downward in the hierarchy are referred to as vertical reports; such reports contribute to management control. Lateral reports, on the other hand, assist in coordination within the organization. A report traveling between units on the same organizational level (say, the production and finance departments) is lateral.

Proposal Report:

The proposal is a variation of problem-solving reports. A proposal is a document prepared to describe how one organization can meet the needs of another. Most governmental agencies advertise their needs by issuing “requests for proposal,” or RFPs. The RFP specifies a need, and potential suppliers prepare proposal reports telling how they can meet that need.

Analytical or Informational Reports:

Informational reports (annual reports, monthly financial reports, and reports on personnel absenteeism) carry objective information from one area of an organization to another. Analytical reports (scientific research, feasibility reports, and real-estate appraisals) present attempts to solve problems.

Informal or Formal Reports:

Formal reports are carefully structured; they stress objectivity and organization, contain much detail, and are written in a style that tends to eliminate such elements as personal pronouns. Informal reports are usually short messages with natural, casual use of language. The internal memorandum can generally be described as an informal report.

Footnotes

Footnotes are notes placed at the bottom of a page. They cite references or comment on a designated part of the text above it. For example, say you want to add an interesting comment to a sentence you have written, but the comment is not directly related to the argument of your paragraph. In this case, you could add the symbol for a footnote.
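For instance, a made-up illustration of how the marker and its note pair up on a page (the sentence and note are invented):

The committee approved the proposal unanimously.¹

¹ Two members were absent from the vote, so “unanimously” refers to the members present.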

Importance of research paper footnotes

  • Footnotes indicate the authenticity, originality and relevance of the research data.
  • Footnotes give the reader an insight into the research undertaken by the writer and enable them to refer to the cited sources for more information.
  • Research paper footnotes are important and helpful in supporting a particular claim made in the text of a paper.
  • Footnotes also illustrate to the tutor the extent of the research carried out by the writer.
  • It is through the footnote citations that a tutor gets to assess the knowledge, skills and research abilities of a student.
  • Footnotes have the same relevance as a research paper bibliography page. Both are vital parts of any research paper, as they help protect the writer from being charged with plagiarism.

Bibliography

A bibliography is a list of works (such as books and articles) written on a particular subject or by a particular author. Adjective: bibliographic.

Also known as a list of works cited, a bibliography may appear at the end of a book, report, online presentation, or research paper.

A bibliography is a list of all of the sources you have used (whether referenced or not) in the process of researching your work. In general, a bibliography should include:

  • The authors’ names
  • The titles of the works
  • The names and locations of the companies that published your copies of the sources
  • The dates your copies were published
  • The page numbers of your sources (if they are part of multi-source volumes)
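For illustration, a hypothetical entry containing these elements might read (the author, title, publisher, date and pages below are invented):

Smith, John. The Art of Report Writing. New Delhi: Example Press, 2010. pp. 45–62.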

Analysis of Data: Meaning, Purpose and Types

Data analysis is the systematic approach of refining, converting, and shaping data to uncover valuable insights that facilitate informed business decision-making. The primary aim of data analysis is to extract pertinent information from the data and utilize it as a basis for making well-informed decisions.

Data analysis is a process of inspecting, cleansing, transforming, and modeling data with the goal of discovering useful information, informing conclusions, and supporting decision-making. Data analysis has multiple facets and approaches, encompassing diverse techniques under a variety of names, and is used in different business, science, and social science domains. In today’s business world, data analysis plays a role in making decisions more scientific and helping businesses operate more effectively.

Whether your business is experiencing stagnation or growth, it is essential to reflect on past decisions and learn from any mistakes made. By acknowledging these missteps, you can create a new, improved plan that avoids repeating those errors.

Even if your business is currently growing, it is crucial to maintain a forward-looking perspective to drive further expansion. Regularly analyzing your business data and processes can provide valuable insights for future development.

In both scenarios, the key lies in understanding your business’s strengths and weaknesses, identifying opportunities for improvement, and implementing strategic changes. Continuous analysis and adaptation are fundamental to sustaining growth and ensuring long-term success in today’s dynamic business landscape.

Techniques and Methods

Data analysis techniques and methods play a crucial role in understanding business trends and making informed decisions. Below are the different types of data analysis techniques and their applications:

Text Analysis (Data Mining):

This technique involves discovering patterns in large data sets using databases or data mining tools. It transforms raw data into valuable business information, enabling strategic decision-making using Business Intelligence tools.

Statistical Analysis:

This analysis answers the question “What happened?” by using past data in the form of dashboards. It includes data collection, analysis, interpretation, presentation, and modeling. Statistical Analysis can be categorized into Descriptive Analysis and Inferential Analysis.

  • Descriptive Analysis: Examines complete or summarized numerical data, showing measures such as the mean and standard deviation for continuous data, and percentages and frequencies for categorical data.

  • Inferential Analysis: Analyzes samples drawn from the complete data; different samples can lead to different conclusions (see the sketch below).
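A minimal sketch of the distinction, using only the Python standard library; the sales figures and region labels are invented for illustration:

```python
# Descriptive vs. inferential analysis on a tiny invented data set.
import random
import statistics

sales = [230.0, 245.5, 210.0, 260.25, 251.0, 238.5]              # continuous data
regions = ["North", "South", "North", "East", "South", "North"]  # categorical data

# Descriptive analysis: summarize the complete data.
print("mean:", statistics.mean(sales))
print("std deviation:", statistics.stdev(sales))
for region in sorted(set(regions)):
    count = regions.count(region)
    print(region, count, f"{100 * count / len(regions):.1f}%")

# Inferential analysis: estimate the mean from random samples;
# different samples can lead to different conclusions.
for _ in range(3):
    sample = random.sample(sales, 3)
    print("sample mean:", round(statistics.mean(sample), 2))
```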

Diagnostic Analysis:

This analysis aims to identify the causes behind the insights found in Statistical Analysis. It helps in understanding data behavior patterns and can be useful in solving new problems with similar patterns.

Predictive Analysis:

Predictive Analysis answers the question “What is likely to happen?” by using past data to make predictions about future outcomes. It involves forecasting and relies on detailed information and analysis to improve accuracy.

Prescriptive Analysis:

This type of analysis combines insights from previous analyses to determine the best course of action for current problems or decisions. It goes beyond predictive and descriptive analysis to improve overall data performance and decision-making.

By employing these various data analysis techniques, businesses can gain valuable insights from their data and use them to make informed decisions, optimize processes, and drive growth. Each technique serves a specific purpose and complements others in providing a comprehensive understanding of the data and its implications.

Data analysis is a big subject and can include some of these steps:

  • Defining Objectives: Start by outlining clearly defined objectives. The clearer the objectives are, the better the results you can get out of the data.
  • Posing Questions: Figure out the questions you would like answered by the data. For example, do red sports cars get into accidents more often than others? Figure out which data analysis tools will get the best result for your question.
  • Data Collection: Collect data that is useful to answer the questions. In this example, data might be collected from a variety of sources like DMV or police accident reports, insurance claims and hospitalization details.
  • Data Scrubbing: Raw data may be collected in several different formats, with lots of junk values and clutter. The data is cleaned and converted so that data analysis tools can import it. It’s not a glamorous step but it’s very important.
  • Data Analysis: Import this new clean data into the data analysis tools. These tools allow you to explore the data, find patterns, and answer what-if questions. This is the payoff; this is where you find results!
  • Drawing Conclusions and Making Predictions: Draw conclusions from your data. These conclusions may be summarized in a report, a visual, or both, so that readers can grasp the results quickly (a toy end-to-end sketch of these steps follows below).
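To make the steps concrete, here is a toy end-to-end sketch in Python built around the red-sports-car question above; all records and field names are invented for illustration:

```python
# Objective/question: do red sports cars appear in accident reports more often?

raw_records = [                      # "collected" data, with junk and clutter
    {"color": "Red",   "type": "sports", "accidents": "3"},
    {"color": " red ", "type": "sports", "accidents": "2"},
    {"color": "Blue",  "type": "sedan",  "accidents": "1"},
    {"color": "BLUE",  "type": "sedan",  "accidents": None},   # junk value
]

# Data scrubbing: normalize text fields and drop records with missing values.
clean = [
    {"color": r["color"].strip().lower(),
     "type": r["type"],
     "accidents": int(r["accidents"])}
    for r in raw_records
    if r["accidents"] is not None
]

# Data analysis: total accidents per color.
totals: dict[str, int] = {}
for r in clean:
    totals[r["color"]] = totals.get(r["color"], 0) + r["accidents"]

# Drawing conclusions: summarize the result for the reader.
for color, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{color}: {total} accidents")
```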

Coding: Meaning and essentials

Coding is the process of identifying and classifying each answer with a numerical score or other character symbol. The numerical score or symbol is called a code, and serves as a rule for interpreting, classifying, and recording data. Identifying responses with codes is necessary if data is to be processed by computer.

Coded data is often stored electronically in the form of a data matrix: a rectangular arrangement of the data into rows (representing cases) and columns (representing variables). The data matrix is organized into fields, records, and files:

Field: A collection of characters that represents a single type of data.

Record: A collection of related fields, i.e., fields related to the same case (or respondent).

File: A collection of related records, i.e. records related to the same sample.
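A minimal sketch of these ideas in Python; the codes, variables and respondents are invented for illustration:

```python
# Coded survey data organized as a data matrix:
# rows are records (cases), columns are fields (variables).

# Coding rules: map each answer to a numerical code.
GENDER_CODES = {"male": 1, "female": 2}
YES_NO_CODES = {"no": 0, "yes": 1}

FIELDS = ["respondent_id", "gender", "owns_car"]   # one field per variable

# Each record is a collection of related fields for one respondent.
records = [
    [1, GENDER_CODES["female"], YES_NO_CODES["yes"]],
    [2, GENDER_CODES["male"],   YES_NO_CODES["no"]],
    [3, GENDER_CODES["female"], YES_NO_CODES["yes"]],
]

# The file is the collection of related records for the whole sample.
for record in records:
    print(dict(zip(FIELDS, record)))
```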

Tabular Representation of Data

Presentation of data is of utmost importance nowadays; after all, what is pleasing to the eye never fails to grab our attention. Presentation of data refers to exhibiting or putting up data in an attractive and useful manner such that it can be easily interpreted.

Tabular Representation

A table facilitates representation of even large amounts of data in an attractive, easy to read and organized manner. The data is organized in rows and columns. This is one of the most widely used forms of presentation of data since data tables are easy to construct and read.

Components of Data Tables

  • Table Number: Each table should have a specific table number for ease of access and locating. This number can be readily mentioned anywhere which serves as a reference and leads us directly to the data mentioned in that particular table.
  • Title: A table must contain a title that clearly tells the readers about the data it contains, time period of study, place of study and the nature of classification of data.
  • Headnotes: A headnote further aids in the purpose of a title and displays more information about the table. Generally, headnotes present the units of data in brackets at the end of a table title.
  • Stubs: These are the titles of the rows in a table. Thus a stub displays information about the data contained in a particular row.
  • Caption: A caption is the title of a column in the data table. In fact, it is the counterpart of a stub and indicates the information contained in a column.
  • Body or field: The body of a table is the content of a table in its entirety. Each item in a body is known as a ‘cell’.
  • Footnotes: Footnotes are rarely used. In effect, they supplement the title of a table if required.
  • Source: When using data obtained from a secondary source, this source has to be mentioned below the footnote (the mock table below shows how these components fit together).
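Here is a small mock table; all names and figures in it are invented for illustration only:

Table 1: Number of Students by Stream, College X, 2020
(figures in hundreds)

Stream     Boys   Girls
Arts        4      5
Science     6      3

Source: Invented data, for illustration only.

Here “Table 1” is the table number, the line in brackets is the headnote, “Stream,” “Boys” and “Girls” are the captions, “Arts” and “Science” are the stubs, and the numbers form the body.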

Construction of Data Tables

There are many ways to construct a good table. However, some basic ideas are:

  • The title should be in accordance with the objective of study: The title of a table should provide a quick insight into the table.
  • Comparison: If a need might arise to compare any two rows or columns, these should be kept close to each other.
  • Alternative location of stubs: If the rows in a data table are lengthy, then the stubs can be placed on the right-hand side of the table.
  • Headings: Headings should be written in a singular form. For example, ‘good’ must be used instead of ‘goods’.
  • Footnote: A footnote should be given only if needed.
  • Size of columns: Size of columns must be uniform and symmetrical.
  • Use of abbreviations: Headings and sub-headings should be free of abbreviations.
  • Units: There should be a clear specification of units above the columns.

The Advantages of Tabular Representation

  • Ease of representation: A large amount of data can easily be accommodated in a data table. Evidently, it is the simplest form of data presentation.
  • Ease of analysis: Data tables are frequently used for statistical analysis like calculation of central tendency, dispersion etc.
  • Helps in comparison: In a data table, the rows and columns which are required to be compared can be placed next to each other. This facilitates comparison, as it becomes easy to compare each value.
  • Economical: Construction of a data table is fairly easy and presents the data in a manner which is really easy on the eyes of a reader. Moreover, it saves time as well as space.

Processing of Data: Editing (Field and Office Editing)

Data editing is defined as the process involving the review and adjustment of collected survey data. Data editing helps define guidelines that will reduce potential bias and ensure consistent estimates, leading to a clear analysis of the data set by correcting inconsistent data using the methods described later in this article. The purpose is to control the quality of the collected data. Data editing can be performed manually, with the assistance of a computer, or with a combination of both.

Editing is the process of checking and adjusting responses in completed questionnaires for omissions, legibility, and consistency, and readying them for coding and storage.

Purpose of Editing

Editing serves several purposes:

  • Consistency between and among responses.
  • Completeness in responses, to reduce the effects of item non-response.
  • Better utilization of questions answered out of order.
  • Facilitating the coding process.

Basic Principles of Editing

  1. Checking the number of schedules/questionnaires.
  2. Completeness (all questions filled in).
  3. Legibility.
  4. Avoiding inconsistencies in answers.
  5. Maintaining a degree of uniformity.
  6. Eliminating irrelevant responses (a small validation sketch follows below).
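A minimal sketch of automated checks for two of these principles, completeness and consistency; the field names and rules are invented for illustration:

```python
# Editing checks applied to completed questionnaires.

REQUIRED_FIELDS = ["age", "employed", "weekly_hours"]

def edit_checks(response: dict) -> list[str]:
    """Return a list of editing problems found in one questionnaire."""
    problems = []
    # Completeness: every required question must be answered.
    for field in REQUIRED_FIELDS:
        if response.get(field) in (None, ""):
            problems.append(f"missing answer: {field}")
    # Consistency: an unemployed respondent should not report work hours.
    if response.get("employed") == "no" and response.get("weekly_hours", 0) > 0:
        problems.append("inconsistent: unemployed but weekly_hours > 0")
    return problems

responses = [
    {"age": 34, "employed": "yes", "weekly_hours": 40},
    {"age": None, "employed": "no", "weekly_hours": 20},  # two problems
]
for i, r in enumerate(responses, start=1):
    print(f"questionnaire {i}:", edit_checks(r) or "ok")
```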

Types of Editing

  1. Field Editing

Field editing is the preliminary editing of data by a field supervisor on the same day as the interview. Its purpose is to catch technical omissions, check the legibility of handwriting, and clarify responses that are logically or conceptually inconsistent.

When gaps are present from interviews, a call-back should be made rather than guessing what the respondent “would have probably said.”

A second important task of the supervisor is to re-interview a few respondents, at least on some pre-selected questions, as a validity check.

  2. Office Editing

Office editing is editing performed by a central (in-house) office staff, under which all questionnaires undergo thorough editing; it is a rigorous job, often done more rigorously than field editing.

Interactive editing

The term interactive editing is commonly used for modern computer-assisted manual editing. Most interactive data editing tools applied at National Statistical Institutes (NSIs) allow one to check the specified edits during or after data entry, and if necessary, to correct erroneous data immediately. Several approaches can be followed to correct erroneous data:

  • Re-contact the respondent
  • Compare the respondent’s data to their data from the previous year
  • Compare the respondent’s data to data from similar respondents
  • Use the subject matter knowledge of the human editor

Selective editing

Selective editing is an umbrella term for several methods of identifying influential errors and outliers. Selective editing techniques aim to apply interactive editing to a well-chosen subset of the records, such that the limited time and resources available for interactive editing are allocated to those records where they have the most effect on the quality of the final estimates of published figures. In selective editing, data is split into two streams (a small sketch follows the list below):

  • The critical stream
  • The non-critical stream
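A sketch of one way to split the streams: score each record’s potential influence on the final estimate and route only high-scoring records to interactive editing. The score formula, threshold, and records are invented for illustration:

```python
records = [
    {"id": 1, "reported_turnover": 1_000_000, "expected_turnover": 100_000},
    {"id": 2, "reported_turnover": 52_000,    "expected_turnover": 50_000},
    {"id": 3, "reported_turnover": 9_500,     "expected_turnover": 10_000},
]

def influence_score(rec: dict) -> float:
    # Large deviations from the expected value suggest an influential error.
    return abs(rec["reported_turnover"] - rec["expected_turnover"]) / rec["expected_turnover"]

THRESHOLD = 0.5
critical = [r for r in records if influence_score(r) >= THRESHOLD]      # edit interactively
non_critical = [r for r in records if influence_score(r) < THRESHOLD]   # accept or auto-edit

print("critical stream:", [r["id"] for r in critical])
print("non-critical stream:", [r["id"] for r in non_critical])
```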

Significance of Processing of Data

Data processing is the conversion of data into a usable and desired form. This conversion, or “processing,” is carried out using a predefined sequence of operations, either manually or automatically. Most processing is done using computers and other data processing devices, and is thus done automatically; this is referred to as automatic data processing. The output, or “processed” data, can be obtained in various forms: examples include images, graphs, tables, vector files, audio, charts or any other desired format. The form obtained depends on the software or method used. Data centers are a key component, as they enable the processing, storage, access, sharing and analysis of data.

The importance of data processing lies in increased productivity and profits and in better, more accurate and reliable decisions. Cost reduction and ease of storage, distribution and report making, followed by better analysis and presentation, are other advantages. The need to process data is now widely realized and reflected in every field of work. Whether the work is done in a business atmosphere or for educational research purposes, data management systems are used everywhere. Data processing is a multidimensional process involved in almost every field of human life. Generally speaking, the term “data processing” is used where innumerable data files have to be collected from different sources.

Methods of Data Processing

There are a number of methods and types of data processing. Based on the data processing system and the requirements of the project, suitable data processing methods can be used. Generally, organizations employ computer systems to carry out a series of operations on the data in order to present, interpret, or obtain information. The process includes activities like data entry, summary, calculation, storage, etc. A useful and informative output is presented in various appropriate forms such as diagrams, reports, graphics, etc. Data processing is mainly important in business and scientific operations. Business data is repeatedly processed and usually needs large volumes of output; scientific data requires numerous computations and usually needs fast-generated output. Three methods of data processing are presented below:

Manual Data Processing

Data is processed manually, without using any machine or tool, to get the required results. In manual data processing, all the calculations and logical operations are performed manually on the data. Similarly, data is transferred manually from one place to another. This method of data processing is very slow, and errors may also occur in the output. Data is still processed manually in many small business firms as well as in government offices and institutions. In an educational institute, for example, mark sheets, fee receipts, and other financial calculations (or transactions) are performed by hand.

This method is avoided as far as possible because it is error-prone, labour-intensive and very time-consuming. This type of data processing formed the very primitive stage, when technology was either not available or not affordable. With the advancement of technology, dependency on manual methods has drastically decreased. Manual processing is also expensive and requires a large workforce, depending on the volume of data to be processed. An example is the sale of commodities in a shop.

Mechanical Data Processing

In this method, data is processed by using different devices like typewriters, mechanical printers or other mechanical devices. This method of data processing is faster and more accurate than manual data processing, but it still forms the early stages of data processing. With the invention and evolution of more complex machines with better computing power, this type of processing also began to fade away. Examination boards and printing presses used mechanical data processing devices frequently. Any device which facilitates data processing can be considered under this category. The output from this method is still very limited.

Electronic Data Processing

This is the modern technique of processing data. The data is processed through a computer: data and a set of instructions are given to the computer as input, and the computer automatically processes the data according to the given set of instructions. The computer is therefore also known as an electronic data processing machine. Electronic data processing is the fastest and best available method, with the highest reliability and accuracy, and the manpower required is minimal. Processing can be done through various programs and predefined sets of rules. Processing large amounts of data with high accuracy becomes possible, which makes it the best among the available types of data processing. For example, in a computerized education environment, students’ results are prepared through a computer; in banks, customers’ accounts are maintained (or processed) through computers, etc.

Applications of Data Processing

  • Data Analysis: In a science or engineering field, the terms data processing and information systems are considered too broad, and the more specialized term data analysis is typically used. Data analysis makes use of specialized and highly accurate algorithms and statistical calculations that are less often observed in the typical general business environment.
  • Commercial Data Processing: Commercial data processing involves a large volume of input data, relatively few computational operations, and a large volume of output. For example, an insurance company needs to keep records on tens or hundreds of thousands of policies, print and mail bills, and receive and post payments.
  • Almost all fields: It is impossible to think of any area which is untouched by data processing or its use. Be it agriculture, manufacturing, the service industry, the meteorological department, urban planning, transportation systems, banking or educational institutions, it is required everywhere, with varying levels of complexity.
  • Real World Applications: With the implementation of proper security algorithms and protocols, it can be ensured that the input and processed information is safe and stored securely, without unauthorized access or changes. With properly processed data, researchers can write scholarly materials and use them for educational purposes; the same applies to the evaluation of economic and similar areas and factors. The healthcare industry can retrieve information quickly and even save lives. Records of illness details and treatment techniques can make finding solutions less time-consuming and help reduce the suffering of patients.

Types of Data Processing

There are a number of methods and techniques which can be adopted for the processing of data, depending upon the requirements, time availability, and the software and hardware capability of the technology being used. The main types of data processing are described below.

Batch Processing

This is one of the most widely used types of data processing, also known as serial/sequential, stacked/queued or offline processing. The fundamental idea of this type of processing is that different jobs of different users are processed in the order received. Once the stacking of jobs is complete, they are sent for processing while maintaining the same order. Batch processing is thus a method where the information to be organized is sorted into groups to allow for efficient and sequential processing, and processing a large volume of data together helps reduce the processing cost, making data processing economical. Examples include examination, payroll and billing systems.
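A toy sketch of the idea: jobs are stacked in the order received and then processed sequentially in that same order. The job names and payloads are invented for illustration:

```python
from collections import deque

job_queue = deque()                      # the "stack" of queued jobs

def submit(job_id: str, payload: dict) -> None:
    job_queue.append((job_id, payload))  # jobs keep their arrival order

def run_batch() -> None:
    while job_queue:
        job_id, payload = job_queue.popleft()   # first in, first processed
        print(f"processing {job_id}: {payload}")

submit("payroll-march", {"employees": 120})
submit("billing-cycle-7", {"invoices": 3400})
run_batch()   # processes payroll-march, then billing-cycle-7
```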

Real time processing

As the name suggests, this method is used for carrying out real-time processing. It is required where the results must be displayed immediately or in the lowest time possible: the data fed to the software is used almost instantaneously for processing, and the technique can respond almost immediately to various signals to acquire and process information. This type of processing typically requires an internet connection, and data is stored/used online. No lag is expected or acceptable, as the receiving and processing of transactions are carried out simultaneously. This method is costlier than batch processing, since the hardware and software capabilities are better, and it involves high maintenance and upfront costs attributable to very advanced technology and computing power. The time saved is maximal, as the output is seen in real time. Examples include banking transactions and ticket booking for flights, trains, movies, rental agencies, etc.

Online Processing

This processing method is a part of the automatic processing method, and is at times known as direct or random-access processing. Under this method, a job received by the system is processed at the same time as it is received; it can be considered similar to, and is often confused with, real-time processing. This system features random and rapid input of transactions and user-defined/demanded direct access to databases/content when needed. Online processing utilizes internet connections and equipment directly attached to a computer, which allows data to be stored in one place and used in an altogether different place. Cloud computing can be considered an example which uses this type of processing. It is used mainly for information recording and research.

Distributed Processing

This method is commonly utilized by remote workstations connected to one big central workstation or server; ATMs are good examples of this data processing method. All the end machines run on fixed software located at a particular place and make use of exactly the same information and sets of instructions.

Multiprocessing

This is perhaps the most widely used type of data processing. It is used almost everywhere and forms the basis of all computing devices relying on processors. Multiprocessing makes use of more than one CPU: the task, or set of operations, is divided between the available CPUs simultaneously, thus increasing efficiency and throughput. The jobs to be performed are broken down and sent to different CPUs working in parallel within the mainframe. The result and benefit of this type of processing is a reduction in the time required and an increase in output. Moreover, since the CPUs work independently, failure of one CPU does not halt the complete process, as the other CPUs continue to work. Examples include the processing of data and instructions in computers, laptops, mobile phones, etc.
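A minimal sketch of the idea using Python’s standard multiprocessing module: a job is broken into chunks that are divided between CPUs working in parallel. The workload is invented for illustration:

```python
from multiprocessing import Pool

def process_chunk(numbers: list[int]) -> int:
    return sum(n * n for n in numbers)   # some CPU-bound work on one chunk

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]   # split the job four ways
    with Pool(processes=4) as pool:
        partial_sums = pool.map(process_chunk, chunks)  # run in parallel
    print(sum(partial_sums))                  # combine the partial results
```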

Time sharing

Time-based use of the CPU is the core of this data processing type. A single CPU is used by multiple users; all users share the same CPU, but the time allocated to each user might differ. Processing takes place at different intervals for different users, as per the allocated time. Since multiple users can use it, this type is also referred to as a multi-access system. This is done by providing each user a terminal linked to the main CPU, and the time available is calculated by dividing the CPU time between all the available users as scheduled.

Dichotomous and Multiple Choice Questions in Surveys

Dichotomous

The dichotomous question is a question that can have two possible answers. Dichotomous questions are usually used in a survey that asks for Yes/No, True/False, Fair/Unfair or Agree/Disagree answers. They are used for a clear distinction of qualities, experiences, or respondents’ opinions.

If you want information only about product users, you may want to ask this type of question to “opt-out” those who haven’t bought your products or services. It is important that you ask this type of question if there are only two possible answers. Avoid using a dichotomous question to inquire about feelings and emotions as it is a neutral area where people would prefer to answer “maybe,” or “occasionally”.

Dichotomous questions (Yes/No) may seem simple, but they have a few problems, both on the part of the survey respondent and in terms of analysis. Yes/No questions often force customers to choose between options that may not be that simple, and may lead to a customer deciding on an option that doesn’t truly capture their feelings.

The benefits of dichotomous questions are that they are short and easy to answer, which simplifies the survey experience, and that they ease both the giving of responses and the analysis of the data (see the sketch below).
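A sketch of why the analysis is easy: tallying Yes/No answers reduces to counting. The responses are invented for illustration:

```python
responses = ["Yes", "No", "Yes", "Yes", "No", "Yes"]

yes_count = responses.count("Yes")
no_count = len(responses) - yes_count
print(f"Yes: {yes_count} ({100 * yes_count / len(responses):.0f}%)")
print(f"No:  {no_count} ({100 * no_count / len(responses):.0f}%)")
```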

Multiple Choice Questions

Survey questions can use either a closed-ended or open-ended format to collect answers from individuals. And you can use them to gather feedback from a host of different audiences, including your customers, colleagues, prospects, friends, and family.

Multiple choice questions are the most popular survey question type. They allow your respondents to select one or more options from a list of answers that you define. They’re intuitive, easy to use in different ways, help produce easy-to-analyze data, and provide mutually exclusive choices. Because the answer options are fixed, your respondents have an easier survey-taking experience.

Perhaps, most important, you’ll get structured survey responses that produce clean data for analysis.

The most basic variation is the single-answer multiple choice question. Single answer questions use a radio button (circle buttons representing options in a list) format to allow respondents to click only one answer. They work well for binary questions, questions with ratings, or nominal scales.

Advantages of Multiple Choice Questions

  • They are less complicated and less time consuming:

Imagine the pain a respondent goes through while having to type in answers when they can simply answer the questions at the click of a button. Here is where multiple choice lessens the complications.

Often the survey creator wants to ask straightforward questions of the respondent; the best practice is to provide the choices instead of having respondents come up with answers, which in turn saves their valuable time.

  • Responses get a specific structure and are easy to analyze:

Surveys are often developed with respondents in mind: how will they answer the questions? This is where multiple choice gives a specific structure to responses, and it therefore becomes the best choice.

Let’s say at your workplace you receive a survey asking about the best restaurant to host the Christmas party. Honestly speaking, giving specific options isn’t going to hurt; rather, as a surveyor, you are sure that the answer will be one of the options given to the respondents.

It will be easier for the surveyor to analyze the data, as it will be free from errors (since respondents won’t be typing in answers), and the surveyor will at least know that a random restaurant will not be chosen.

  • Helps respondents comprehend how they should answer:

One of the positives of multiple choice options is that they help respondents understand how they should answer. In this manner, the surveyor can choose how general or specific the responses need to be.

At all times, the surveyor needs to be careful in the choice of questions in order to receive responses that are easy to analyze.

  • They look good on handheld devices:

It is estimated that 1 out of 5 people take surveys on handheld devices like mobile phones or tablets. Considering the fact that there is no mouse or keyboard to use, multiple choice questions make it easier for the respondent to choose as there is no scrolling involved.

Disguised and Undisguised Observation Research

Disguised Observation is a technique employed, often in product testing, where a respondent or groups of respondents are unaware that they are being observed.

Participant observation is characterized as either undisguised or disguised. In undisguised observation, the observed individuals know that the observer is present for the purpose of collecting information about their behavior. This technique is often used to understand the culture and behavior of groups or individuals. In contrast, in disguised observation, the observed individuals do not know that they are being observed. This technique is often used when researchers believe that the individuals under observation may change their behavior as a result of knowing that they are being recorded.

For a great example of disguised research, see the Rosenhan experiment, in which several researchers sought admission to twelve different mental hospitals to observe patient-staff interactions and patient diagnosis and release procedures. There are several benefits to doing participant observation. Firstly, participant research allows researchers to observe behaviors and situations that are not usually open to scientific observation. Furthermore, participant research allows the observer to have the same experiences as the people under study, which may provide important insights and understandings of individuals or groups.

However, there are also several drawbacks to doing participant observation. Firstly, participant observers may sometimes lose their objectivity as a result of participating in the study. This usually happens when observers begin to identify with the individuals under study, and this threat generally increases as the degree of observer participation increases. Secondly, participant observers may unduly influence the individuals whose behavior they are recording.

This effect is not easily assessed; however, it is generally more prominent when the group being observed is small, or if the activities of the participant observer are prominent. Lastly, disguised observation raises some ethical issues regarding obtaining information without respondents’ knowledge.

For example, the observations collected by an observer participating in an internet chat room discussing how racists advocate racial violence may be seen as incriminating evidence collected without the respondents’ knowledge. The dilemma here is of course that if informed consent were obtained from participants, respondents would likely choose not to cooperate.

Experimental: Field, Laboratory

Field

Field experiments randomly assign subjects (or other sampling units) to either treatment or control groups in order to test claims of causal relationships. Random assignment helps establish the comparability of the treatment and control groups, so that any differences between them that emerge after the treatment has been administered plausibly reflect the influence of the treatment rather than pre-existing differences between the groups. The distinguishing characteristics of field experiments are that they are conducted in real-world settings and often unobtrusively. This is in contrast to laboratory experiments, which enforce scientific control by testing a hypothesis in the artificial and highly controlled setting of a laboratory. Field experiments also differ contextually from naturally occurring experiments and quasi-experiments. While naturally occurring experiments rely on an external force (e.g. a government, a nonprofit, etc.) controlling the randomization of treatment assignment and implementation, field experiments require researchers to retain control over randomization and implementation. Quasi-experiments occur when treatments are administered as-if randomly (e.g. U.S. Congressional districts where candidates win with slim margins, weather patterns, natural disasters, etc.).
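A minimal sketch of random assignment in Python: shuffle the subjects and split them into treatment and control groups. The subject identifiers are invented for illustration:

```python
import random

subjects = [f"household-{i}" for i in range(1, 21)]
random.shuffle(subjects)            # non-deterministic assignment order

midpoint = len(subjects) // 2
treatment = subjects[:midpoint]     # receives the intervention
control = subjects[midpoint:]       # does not

print("treatment:", treatment)
print("control:  ", control)
```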

Field experiments encompass a broad array of experimental designs, each with varying degrees of generality. Some criteria of generality (e.g. authenticity of treatments, participants, contexts, and outcome measures) refer to the contextual similarities between the subjects in the experimental sample and the rest of the population. They are increasingly used in the social sciences to study the effects of policy-related interventions in domains such as health, education, crime, social welfare, and politics.

Characteristics

Under random assignment, outcomes of field experiments are reflective of the real-world because subjects are assigned to groups based on non-deterministic probabilities. Two other core assumptions underlie the ability of the researcher to collect unbiased potential outcomes: excludability and non-interference. The excludability assumption provides that the only relevant causal agent is through the receipt of the treatment. Asymmetries in assignment, administration or measurement of treatment and control groups violate this assumption.

Limitations

There are limitations of, and arguments against, using field experiments in place of other research designs (e.g. lab experiments, survey experiments, observational studies, etc.). Given that field experiments necessarily take place in a specific geographic and political setting, there is a concern about extrapolating the outcomes to formulate a general theory regarding the population of interest. However, researchers have begun to find strategies to effectively generalize causal effects outside of the sample by comparing the environments of the treated and external populations, accessing information from larger sample sizes, and accounting and modeling for treatment effect heterogeneity within the sample. Others have used covariate blocking techniques to generalize from field experiment populations to external populations.

Noncompliance issues affecting field experiments (both one-sided and two-sided noncompliance) can occur when subjects who are assigned to a certain group never receive their assigned intervention. Other problems to data collection include attrition (where subjects who are treated do not provide outcome data) which, under certain conditions, will bias the collected data. These problems can lead to imprecise data analysis; however, researchers who use field experiments can use statistical methods in calculating useful information even when these difficulties occur.

Using field experiments can also lead to concerns over interference between subjects. When a treated subject or group affects the outcomes of the nontreated group (through conditions like displacement, communication, contagion etc.), nontreated groups might not have an outcome that is the true untreated outcome. A subset of interference is the spillover effect, which occurs when the treatment of treated groups has an effect on neighboring untreated groups.

Laboratory

A laboratory experiment is an experiment conducted under highly controlled conditions (not necessarily a laboratory), where accurate measurements are possible.

The researcher decides where the experiment will take place, at what time, with which participants, in what circumstances and using a standardized procedure. Participants are randomly allocated to each independent variable group. Examples include Milgram’s experiment on obedience and Loftus and Palmer’s car crash study.

  • Strength: It is easier to replicate (i.e. copy) a laboratory experiment. This is because a standardized procedure is used.
  • Strength: They allow for precise control of extraneous and independent variables. This allows a cause and effect relationship to be established.
  • Limitation: The artificiality of the setting may produce unnatural behavior that does not reflect real life, i.e. low ecological validity. This means it would not be possible to generalize the findings to a real life setting.
  • Limitation: Demand characteristics or experimenter effects may bias the results and become confounding variables.

Mechanical observations

Human observation is self-explanatory, using human observers to collect data in the study. Mechanical observation involves using various types of machines to collect the data, which is then interpreted by researchers. With continuing improvements in technology, there are many “mechanical” ways of capturing data in observation studies, however, these new “gadgets” tend to be extremely expensive. The most commonly used and least expensive means of mechanically gathering data in an observation study is a video camera. A video camera offers a much more precise means of collecting data than what can simply be recorded by a human observer.

A number of imaginative methods of mechanical observation, and devices for making such observations, have been developed. One of the most widely known devices of this type is the Audimeter, a device used by the A. C. Nielsen Company to record when radio and television sets are turned on and the stations to which they are tuned. The newest generation of this system uses the Storage Instantaneous Audimeter, which automatically stores data on the television stations tuned in to in electronic memory. Nielsen has a central computer that dials these memories on the telephone twice a day and collects the information from them.

a) Voice pitch meters: measure emotional reactions.

b) Electronic checkout scanners: record purchase behavior.

c) Eye-tracking analysis: tracks where subjects look while they watch an advertisement.

Scaling Techniques: Likert Scale, Semantic Differential Scale

Likert Scale

The Likert scale is a five (or seven) point scale which is used to allow the individual to express how much they agree or disagree with a particular statement.

A Likert scale assumes that the strength/intensity of an attitude is linear, i.e. on a continuum from strongly agree to strongly disagree, and makes the assumption that attitudes can be measured.

1 = Strongly Disagree, 2 = Disagree, 3 = Undecided, 4 = Agree, 5 = Strongly Agree

Likert Scales have the advantage that they do not expect a simple yes / no answer from the respondent, but rather allow for degrees of opinion, and even no opinion at all.

Therefore, quantitative data is obtained, which means that the data can be analyzed with relative ease.

Offering anonymity on self-administered questionnaires should further reduce social pressure, and thus may likewise reduce social desirability bias.
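A sketch of how Likert responses coded on the 1–5 scale above can be summarized; the responses are invented for illustration:

```python
import statistics

CODES = {"Strongly Disagree": 1, "Disagree": 2, "Undecided": 3,
         "Agree": 4, "Strongly Agree": 5}

responses = ["Agree", "Strongly Agree", "Undecided", "Agree", "Disagree"]
scores = [CODES[r] for r in responses]

print("mean score:", statistics.mean(scores))      # 3.6 on this invented data
print("median score:", statistics.median(scores))  # 4
```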

Semantic Differential Scale

A semantic differential scale is a survey or questionnaire rating scale that asks people to rate a product, company, brand, or any ‘entity’ within the frame of a multi-point rating option. The answer options are opposite adjectives at each end, for example love-hate, satisfied-unsatisfied, and likely to return-unlikely to return, with intermediate options in between.

Advantages of semantic differential

  • The semantic differential has outdone other scales, like the Likert scale, in validity, reliability, and authenticity.
  • It has an advantage in terms of language too: there are two polar adjectives for the factor to be measured and a scale connecting both these poles.
  • It is more advantageous than a Likert scale, where the researcher declares a statement and expects respondents to either agree or disagree with it.
  • Respondents can express their opinions about the matter at hand more accurately and completely, thanks to the polar options provided in the semantic differential.
  • In other question types, like the Likert scale, respondents have to indicate their level of agreement or disagreement with the mentioned topic. The semantic differential scale offers extremely opposite adjectives on each end of the range, so respondents can explain their feedback precisely, which researchers use to make accurate judgments from the survey.

Types

  1. Slider rating scale: Questions that feature a graphical slider give the respondent a more interactive way to answer the semantic differential scale question.
  2. Non-slider rating scale: The non-slider question uses typical radio buttons for a more traditional survey look and feel that respondents are more used to answering.
  3. Open-ended questions: These questions give the users ample freedom to express their emotions about your organization, products, or services.
  4. Ordering: The ordering questions offer the scope to rate the parameters that the respondents feel are best or worst according to their personal experiences.
  5. Satisfaction rating: The easiest and most eye-catching semantic differential scale questions are the satisfaction rating questions.