Report Writing Principles

Proper Format:

An ideal report is one that is prepared as per a commonly used format. One must comply with contemporary practices; a completely new format should not be used.

Proper Language:

The researcher must use suitable language, selected according to the target readers.

Preciseness:

A research report must not be unnecessarily lengthy. It must contain only the necessary parts, each with adequate description.

Objectivity:

A report must be free from personal bias, i.e., free from one’s personal likes and dislikes. It must be prepared to serve impersonal needs. The facts must be stated boldly, even when they reveal a bitter truth. It must suit the objectives and meet the expectations of the relevant audience/readers.

Cost Consideration:

It must be prepared within the budgeted amount and should not result in excessive costs.

Selectiveness:

It is important to exclude matter that is already known to all. Only necessary contents should be included, to save time, costs, and energy. However, care should be taken that vital points are not missed.

Attractive:

A report must be attractive in all important respects, such as size, colour, and paper quality. Similarly, it should make liberal use of charts, diagrams, figures, illustrations, pictures, and multiple colours.

Reliability:

A research report must be reliable. A manager should be able to trust it and be confident in making decisions on the basis of it.

Simplicity:

A report must be simple to understand. Unnecessary technical terms and jargon should be avoided.

Clarity:

Report must reveal the facts clearly. Contents and conclusions drawn must be free from ambiguities. In short, outcomes must convey clear-cut implications.

Accuracy:

A research report must be prepared as carefully as possible. It must be free from spelling mistakes and grammatical errors.

Comprehensiveness:

A report must be complete. It must include all the necessary contents. In short, it must contain enough detail to convey its meaning.

Importance of Referencing and Writing Style

Referencing allows you to acknowledge the contribution of other writers and researchers in your work. Any university assignments that draw on the ideas, words or research of other writers must contain citations.

Referencing, in a general sense, means giving credit to someone for the use of his or her ideas or thoughts in a research activity. It also helps establish which ideas and thoughts in the work are original. Failure to reference is treated as disrespect to the original author or writer and is regarded as serious misconduct in academic research writing. Students often make the mistake of not providing proper references at the end of their research projects, essays, or other pieces of work, which may lead to the rejection of the written work.

Referencing is also a way to give credit to the writers from whom you have borrowed words and ideas. By citing the work of a particular scholar, you acknowledge and respect the intellectual property rights of that researcher. As a student or academic, you can draw on any of the millions of ideas, insights and arguments published by other writers, many of whom have spent years researching and writing. All you need to do is acknowledge their contribution to your assignment.

Referencing is a way to provide evidence to support the assertions and claims in your own assignments. By citing experts in your field, you are showing your marker that you are aware of the field in which you are operating. Your citations map the space of your discipline and allow you to navigate your way through your chosen field of study, in the same way that sailors steer by the stars.

References should always be accurate, allowing your readers to trace the sources of information you have used. The best way to make sure you reference accurately is to keep a record of all the sources you used when reading and researching for an assignment.

Referencing correctly:

  • Shows your understanding of the topic.
  • Helps you to avoid plagiarism by making it clear which ideas are your own and which are someone else’s.
  • Allows others to identify the sources you have used.
  • Gives supporting evidence for your ideas, arguments and opinions.

Applicability of Non-parametric Tests

Nonparametric tests are methods of statistical analysis that do not require the data to satisfy distributional assumptions (in particular, the assumption that the data are normally distributed). For this reason, they are sometimes referred to as distribution-free tests. Nonparametric tests serve as an alternative to parametric tests such as the t-test or ANOVA, which can be employed only if the underlying data satisfy certain criteria and assumptions.

Nonparametric statistics is the branch of statistics that is not based solely on parametrized families of probability distributions (common examples of parameters are the mean and variance). Nonparametric statistics is based on either being distribution-free or having a specified distribution but with the distribution’s parameters unspecified. Nonparametric statistics includes both descriptive statistics and statistical inference. Nonparametric tests are often used when the assumptions of parametric tests are violated.

Applications and purpose

Non-parametric methods are widely used for studying populations that take on a ranked order (such as movie reviews receiving one to four stars). The use of non-parametric methods may be necessary when data have a ranking but no clear numerical interpretation, such as when assessing preferences. In terms of levels of measurement, non-parametric methods result in ordinal data.

As non-parametric methods make fewer assumptions, their applicability is much wider than the corresponding parametric methods. In particular, they may be applied in situations where less is known about the application in question. Also, due to the reliance on fewer assumptions, non-parametric methods are more robust.

Another justification for the use of non-parametric methods is simplicity. In certain cases, even when the use of parametric methods is justified, non-parametric methods may be easier to use. Due both to this simplicity and to their greater robustness, non-parametric methods are seen by some statisticians as leaving less room for improper use and misunderstanding.

The wider applicability and increased robustness of non-parametric tests comes at a cost: in cases where a parametric test would be appropriate, non-parametric tests have less power. In other words, a larger sample size can be required to draw conclusions with the same degree of confidence.

Methods

Non-parametric (or distribution-free) inferential statistical methods are mathematical procedures for statistical hypothesis testing which, unlike parametric statistics, make no assumptions about the probability distributions of the variables being assessed. The most frequently used tests include:

  • Analysis of similarities: Tests the statistical significance of differences between two or more groups, based on ranked dissimilarities
  • Anderson–Darling test: Tests whether a sample is drawn from a given distribution
  • Statistical bootstrap methods: Estimates the accuracy/sampling distribution of a statistic
  • Cochran’s Q: Tests whether k treatments in randomized block designs with 0/1 outcomes have identical effects
  • Cohen’s kappa: Measures inter-rater agreement for categorical items
  • Friedman two-way analysis of variance by ranks: Tests whether k treatments in randomized block designs have identical effects
  • Kaplan–Meier: Estimates the survival function from lifetime data, modeling censoring
  • Kendall’s tau: Measures statistical dependence between two variables
  • Kendall’s W: A measure between 0 and 1 of inter-rater agreement
  • Kolmogorov–Smirnov test: Tests whether a sample is drawn from a given distribution, or whether two samples are drawn from the same distribution
  • Kruskal–Wallis one-way analysis of variance by ranks: Tests whether > 2 independent samples are drawn from the same distribution
  • Kuiper’s test: Tests whether a sample is drawn from a given distribution, sensitive to cyclic variations such as day of the week
  • Logrank test: Compares survival distributions of two right-skewed, censored samples
  • Mann–Whitney U or Wilcoxon rank sum test: Tests whether two samples are drawn from the same distribution, as compared to a given alternative hypothesis.
  • McNemar’s test: Tests whether, in 2 × 2 contingency tables with a dichotomous trait and matched pairs of subjects, row and column marginal frequencies are equal
  • Median test: Tests whether two samples are drawn from distributions with equal medians
  • Pitman’s permutation test: A statistical significance test that yields exact p values by examining all possible rearrangements of labels
  • Rank products: Detects differentially expressed genes in replicated microarray experiments
  • Siegel–Tukey test: Tests for differences in scale between two groups
  • Sign test: Tests whether matched pair samples are drawn from distributions with equal medians
  • Spearman’s rank correlation coefficient: Measures statistical dependence between two variables using a monotonic function
  • Squared ranks test: Tests equality of variances in two or more samples
  • Tukey–Duckworth test: Tests equality of two distributions by using ranks
  • Wald–Wolfowitz runs test: Tests whether the elements of a sequence are mutually independent/random
  • Wilcoxon signed-rank test: Tests whether matched pair samples are drawn from populations with different mean ranks
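Several of the rank- and sign-based tests above reduce to elementary counting. As an illustrative sketch (not a library implementation), the sign test from the list can be computed with nothing more than exact binomial probabilities; the paired before/after scores below are hypothetical:

```python
from math import comb

def sign_test_p(before, after):
    """Exact two-sided sign test for matched pairs.

    Counts the pairs where 'after' exceeds 'before' (ties are dropped)
    and computes the two-sided p-value under H0: P(+) = 0.5, using the
    exact Binomial(n, 0.5) null distribution.
    """
    diffs = [b - a for a, b in zip(before, after) if b != a]
    n = len(diffs)
    plus = sum(1 for d in diffs if d > 0)
    k = min(plus, n - plus)
    # one tail: P(X <= k) under Binomial(n, 0.5); double it for two sides
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

before = [72, 68, 75, 70, 74, 69, 71, 73]
after  = [75, 70, 78, 74, 76, 72, 74, 77]
print(sign_test_p(before, after))  # all 8 differences positive: p = 2/256 = 0.0078125
```

Because the null distribution here is an exact Binomial(n, 0.5), no assumption about the distribution of the underlying measurements is needed, which is precisely what makes the test nonparametric.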

Reasons to Use Nonparametric Tests

The population sample size is too small

The sample size is an important consideration in selecting the appropriate statistical method. If the sample size is reasonably large, the applicable parametric test can be used. However, if the sample size is too small, you may not be able to validate the distributional assumptions of the data, and applying a nonparametric test is the only suitable option.

The underlying data do not meet the assumptions about the population sample

Generally, the application of parametric tests requires various assumptions to be satisfied: for example, that the data follow a normal distribution and that the population variance is homogeneous. However, some data samples may show skewed distributions.

Skewness makes parametric tests less powerful because the mean is no longer the best measure of central tendency; it is strongly affected by extreme values. Nonparametric tests, by contrast, work well with skewed distributions and with distributions that are better represented by the median.
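The effect of skew on the mean is easy to demonstrate. The figures below are hypothetical; a single extreme value drags the mean well away from the bulk of the data, while the median is barely moved:

```python
from statistics import mean, median

# hypothetical right-skewed sample, e.g. household incomes in thousands
incomes = [22, 25, 26, 28, 30, 31, 33, 35, 38, 250]

print(mean(incomes))    # 51.8  - pulled upward by the single extreme value
print(median(incomes))  # 30.5  - stays near the bulk of the data
```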

The analyzed data is ordinal or nominal

Unlike parametric tests that can work only with continuous data, nonparametric tests can be applied to other data types such as ordinal or nominal data. For such types of variables, the nonparametric tests are the only appropriate solution.

Data and its types in research

Data can be defined as a systematic record of a particular quantity: the different values of that quantity represented together in a set. It is a collection of facts and figures to be used for a specific purpose such as a survey or analysis. When arranged in an organized form, data can be called information. The source of the data (primary or secondary) is also an important factor.

Quantitative Data: These can be measured, not simply observed. They can be represented numerically and calculations can be performed on them. For example, data on the number of students playing different sports in your class gives an estimate of how many of the total students play which sport. This information is numerical and can be classified as quantitative.

Qualitative Data: These represent characteristics or attributes. They depict descriptions that may be observed but cannot be computed or calculated. For example, data on attributes such as intelligence, honesty, wisdom, cleanliness, and creativity, collected using the students of your class as a sample, would be classified as qualitative. Such data are more exploratory than conclusive in nature.

Primary Data

It is the data collected by the investigator himself or herself for a specific purpose.

Primary data is original and unique data collected directly by the researcher from a source according to his or her requirements.

Data gathered by finding out first-hand the attitudes of a community towards health services, ascertaining the health needs of a community, evaluating a social program, determining the job satisfaction of the employees of an organization, and ascertaining the quality of service provided by a worker are examples of primary data.

Secondary Data

Data collected by someone else for some other purpose (but being utilized by the investigator for another purpose) is secondary data.

Secondary data refers to the data which has already been collected for a certain purpose and documented somewhere else.

Gathering information with the use of census data to obtain information on the age-sex structure of a population, the use of hospital records to find out the morbidity and mortality patterns of a community, the use of an organization’s records to ascertain its activities, and the collection of data from sources such as articles, journals, magazines, books and periodicals to obtain historical and other types of information, are examples of secondary data.

Discrete Data: These are data that can take only certain specific values rather than a range of values. For example, data on the blood groups of a certain population or on their genders is termed discrete data. A usual way to represent it is by using bar charts.

Continuous Data: These are data that can take any value within a certain range, bounded by a highest and a lowest value. The difference between the highest and lowest values is called the range of the data. For example, a person’s age can take decimal values, as can the heights and weights of the students of your school; these are classified as continuous data. Continuous data can be tabulated in what is called a frequency distribution and can be represented graphically using histograms.
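A frequency distribution groups continuous values into class intervals and counts how many observations fall into each. As a minimal sketch, the heights below are hypothetical and the 5 cm class width is an arbitrary choice:

```python
from collections import Counter

# hypothetical heights (cm); continuous data grouped into class intervals
heights = [150.2, 152.8, 155.1, 155.9, 158.4, 160.0, 161.7,
           163.3, 164.8, 167.5, 168.1, 171.9, 174.4]

width = 5  # class width in cm
# map each height to the lower bound of its class interval, then count
bins = Counter(int(h // width) * width for h in heights)

for lower in sorted(bins):
    print(f"{lower}-{lower + width} cm: {bins[lower]}")
```

Each printed row is one class of the frequency distribution; drawn as adjacent bars, these counts would form the histogram described above.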

Confidence interval, Level of Significance

In statistics, a confidence interval (CI) is a type of estimate computed from the observed data. This gives a range of values for an unknown parameter (for example, a population mean). The interval has an associated confidence level that gives the probability with which an estimated interval will contain the true value of the parameter. The confidence level is chosen by the investigator. For a given estimation in a given sample, using a higher confidence level generates a wider (i.e., less precise) confidence interval. In general terms, a confidence interval for an unknown parameter is based on the sampling distribution of a corresponding estimator.

A confidence interval, in statistics, refers to the probability that a population parameter will fall between a set of values for a certain proportion of times.

This means that the confidence level represents the theoretical long-run frequency (i.e., the proportion) of confidence intervals that contain the true value of the unknown population parameter. In other words, 90% of confidence intervals computed at the 90% confidence level contain the parameter, 95% of confidence intervals computed at the 95% confidence level contain the parameter, 99% of confidence intervals computed at the 99% confidence level contain the parameter, etc.

The confidence level is designated before examining the data. Most commonly, a 95% confidence level is used. However, other confidence levels, such as 90% or 99%, are sometimes used.

For example, a confidence interval can be used to describe how reliable survey results are. In a poll of election voting intentions, the result might be that 40% of respondents intend to vote for a certain party. A 99% confidence interval for the proportion in the whole population having the same intention on the survey might be 30% to 50%. From the same data one may calculate a 90% confidence interval, which in this case might be 37% to 43%. A major factor determining the length of a confidence interval is the size of the sample used in the estimation procedure, for example, the number of people taking part in a survey.

Factors affecting the width of the confidence interval include the size of the sample, the confidence level, and the variability in the sample. A larger sample will tend to produce a better estimate of the population parameter, when all other factors are equal. A higher confidence level will tend to produce a broader confidence interval.
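These factors can be seen in a small calculation. The sketch below uses the standard normal-approximation interval for a proportion; the poll figures are hypothetical, and 1.96 and 2.576 are the usual 95% and 99% critical values of the standard normal distribution:

```python
from math import sqrt

def proportion_ci(p_hat, n, z):
    """Normal-approximation confidence interval for a proportion:
    p_hat +/- z * sqrt(p_hat * (1 - p_hat) / n)."""
    se = sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

# hypothetical poll: 40% of 100 respondents favour a certain party
lo95, hi95 = proportion_ci(0.40, 100, 1.96)   # 95% confidence level
lo99, hi99 = proportion_ci(0.40, 100, 2.576)  # 99% confidence level

print(round(lo95, 3), round(hi95, 3))  # roughly 0.304 to 0.496
print(round(lo99, 3), round(hi99, 3))  # wider: roughly 0.274 to 0.526
```

Raising the confidence level widens the interval, while increasing n shrinks the standard error and narrows it, exactly as described above.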

Various interpretations of a confidence interval can be given (taking the 90% confidence interval as an example in the following).

The confidence interval can be expressed in terms of samples (or repeated samples): “Were this procedure to be repeated on numerous samples, the fraction of calculated confidence intervals (which would differ for each sample) that encompass the true population parameter would tend toward 90%.”

The confidence interval can be expressed in terms of a single sample: “There is a 90% probability that the calculated confidence interval from some future experiment encompasses the true value of the population parameter.” Note this is a probability statement about the confidence interval, not the population parameter. This considers the probability associated with a confidence interval from a pre-experiment point of view, in the same context in which arguments for the random allocation of treatments to study items are made. Here the experimenter sets out the way in which they intend to calculate a confidence interval and to know, before they do the actual experiment, that the interval they will end up calculating has a particular chance of covering the true but unknown value. This is very similar to the “repeated sample” interpretation above, except that it avoids relying on considering hypothetical repeats of a sampling procedure that may not be repeatable in any meaningful sense. See Neyman construction.

The explanation of a confidence interval can amount to something like: “The confidence interval represents values for the population parameter for which the difference between the parameter and the observed estimate is not statistically significant at the 10% level”. This interpretation is common in scientific articles that use confidence intervals to validate their experiments, although overreliance on confidence intervals can cause problems as well.

The biggest misconception regarding confidence intervals is that they represent the percentage of data from a given sample that falls between the upper and lower bounds. In other words, it would be incorrect to assume that a 99% confidence interval means that 99% of the data in a random sample fall between these bounds. What it actually means is that one can be 99% certain that the range will contain the population mean.

Level of Significance

In statistical hypothesis testing, a result has statistical significance when it is very unlikely to have occurred given the null hypothesis. More precisely, a study’s defined significance level, denoted by alpha, is the probability of the study rejecting the null hypothesis, given that the null hypothesis is true; and the p-value of a result is the probability of obtaining a result at least as extreme, given that the null hypothesis is true. The result is statistically significant, by the standards of the study, when the p-value is less than or equal to alpha. The significance level for a study is chosen before data collection, and is typically set to 5% or much lower, depending on the field of study.

In any experiment or observation that involves drawing a sample from a population, there is always the possibility that an observed effect would have occurred due to sampling error alone. But if the p-value of an observed effect is less than (or equal to) the significance level, an investigator may conclude that the effect reflects the characteristics of the whole population, thereby rejecting the null hypothesis.

This technique for testing the statistical significance of results was developed in the early 20th century. The term significance does not imply importance here, and the term statistical significance is not the same as research, theoretical, or practical significance. For example, the term clinical significance refers to the practical importance of a treatment effect.

Statistical significance plays a pivotal role in statistical hypothesis testing. It is used to determine whether the null hypothesis should be rejected or retained. The null hypothesis is the default assumption that nothing happened or changed. For the null hypothesis to be rejected, an observed result has to be statistically significant, i.e. the observed p-value is less than the pre-specified significance level alpha.

To determine whether a result is statistically significant, a researcher calculates a p-value, which is the probability of observing an effect of the same magnitude or more extreme given that the null hypothesis is true. The null hypothesis is rejected if the p-value is less than (or equal to) a predetermined level, alpha. Alpha is also called the significance level, and is the probability of rejecting the null hypothesis given that it is true (a type I error). It is usually set at or below 5%.

For example, when alpha is set to 5%, the conditional probability of a type I error, given that the null hypothesis is true, is 5%, and a statistically significant result is one where the observed p-value is less than (or equal to) 5%.  When drawing data from a sample, this means that the rejection region comprises 5% of the sampling distribution. These 5% can be allocated to one side of the sampling distribution, as in a one-tailed test, or partitioned to both sides of the distribution, as in a two-tailed test, with each tail (or rejection region) containing 2.5% of the distribution.

The use of a one-tailed test is dependent on whether the research question or alternative hypothesis specifies a direction such as whether a group of objects is heavier or the performance of students on an assessment is better. A two-tailed test may still be used but it will be less powerful than a one-tailed test, because the rejection region for a one-tailed test is concentrated on one end of the null distribution and is twice the size (5% vs. 2.5%) of each rejection region for a two-tailed test. As a result, the null hypothesis can be rejected with a less extreme result if a one-tailed test was used. The one-tailed test is only more powerful than a two-tailed test if the specified direction of the alternative hypothesis is correct. If it is wrong, however, then the one-tailed test has no power.
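The difference between the two procedures shows up in a small exact calculation. The sketch below uses a symmetric Binomial(n, 0.5) null distribution, as in a sign test; the counts are hypothetical:

```python
from math import comb

def binom_tail(n, k):
    """P(X >= k) for X ~ Binomial(n, 0.5): the exact upper-tail probability."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

# hypothetical study: 12 of 16 matched pairs improved
n, successes = 16, 12

p_one_tailed = binom_tail(n, successes)  # direction specified in advance
p_two_tailed = 2 * p_one_tailed          # symmetric null, so double the tail

print(p_one_tailed)  # about 0.0384: significant at the 5% level
print(p_two_tailed)  # about 0.0768: not significant at the 5% level
```

Here the same data reject the null hypothesis at alpha = 5% under a one-tailed test but not under a two-tailed test, which is why the direction must be specified before examining the data.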

Formulation of Hypothesis

Meaning of Hypothesis

A hypothesis is a tentative, testable statement that predicts a relationship between two or more variables. It is formulated on the basis of existing theory, observation, and a review of the literature. A hypothesis gives direction to research by specifying what the researcher expects to find. It serves as a basis for data collection, analysis, and interpretation, and helps in drawing meaningful conclusions from the study.

Meaning of Formulation of Hypothesis

Formulation of hypothesis refers to the process of developing a clear, precise, and testable statement based on the research problem. It involves transforming assumptions and expectations into scientifically testable propositions. Proper formulation helps in narrowing the scope of research and defining the relationship between variables, ensuring clarity and focus throughout the study.

Need for Formulation of Hypothesis

  • Provides Clear Direction to Research

Formulation of a hypothesis provides a clear direction to the research process. It specifies what the researcher intends to study and what outcomes are expected. By defining the relationship between variables, a hypothesis narrows the scope of investigation and prevents unnecessary exploration. This clarity helps the researcher remain focused on the research problem and ensures that all activities are aligned with the study objectives.

  • Helps in Defining Research Objectives

A hypothesis assists in clearly defining research objectives. It translates the research problem into specific, measurable propositions that guide the formulation of objectives. With a well-formulated hypothesis, objectives become precise and achievable. This ensures logical consistency between the problem statement, objectives, and research design, thereby strengthening the overall structure and coherence of the research study.

  • Facilitates Selection of Research Design

The formulation of a hypothesis plays an important role in selecting an appropriate research design. It indicates whether the study should be exploratory, descriptive, or causal. Based on the hypothesis, the researcher can choose suitable methods, tools, and techniques for data collection and analysis. This ensures that the research methodology is relevant and scientifically sound.

  • Guides Data Collection Process

A hypothesis provides guidance for collecting relevant data. It helps the researcher identify what data is required, from whom it should be collected, and how it should be measured. By focusing only on variables mentioned in the hypothesis, unnecessary data collection is avoided. This targeted approach improves efficiency and enhances the accuracy and relevance of the collected data.

  • Supports Statistical Analysis

Formulation of a hypothesis is essential for statistical testing and analysis. Hypotheses provide the basis for applying statistical tools and techniques to test relationships between variables. Null and alternative hypotheses allow the researcher to objectively analyze data and draw valid conclusions. Without a hypothesis, statistical analysis lacks purpose and direction, reducing the scientific value of research.

  • Enhances Objectivity in Research

A hypothesis helps maintain objectivity by reducing researcher bias. Since hypotheses are formulated before data collection, they prevent manipulation of results to suit personal expectations. The researcher relies on empirical evidence to accept or reject the hypothesis. This ensures fairness, transparency, and scientific integrity throughout the research process.

  • Links Theory with Observation

One important need for hypothesis formulation is to connect theoretical concepts with real-world observations. Hypotheses are derived from existing theories and tested through empirical data. This linkage helps in validating theories or modifying them based on findings. Thus, hypothesis formulation plays a crucial role in theory development and advancement of knowledge.

  • Helps in Drawing Meaningful Conclusions

A hypothesis provides a framework for interpreting research findings. It helps the researcher evaluate results in a logical and systematic manner. By testing hypotheses, conclusions become evidence-based and reliable. This need ensures that research outcomes are meaningful, relevant, and useful for academic, practical, or policy-related purposes.

Types of Hypothesis

1. Null Hypothesis (H₀)

The null hypothesis states that there is no relationship, difference, or effect between the variables under study. It assumes that any observed change is due to chance or random factors. In statistical testing, the null hypothesis is the basis for analysis and is either rejected or retained. It helps in maintaining objectivity and provides a standard for comparison in research.

2. Alternative Hypothesis (H₁ or Ha)

The alternative hypothesis is the opposite of the null hypothesis. It states that a relationship, difference, or effect does exist between variables. This hypothesis is accepted when the null hypothesis is rejected based on empirical evidence. It reflects the actual expectation of the researcher and provides direction for the study’s conclusions.

3. Simple Hypothesis

A simple hypothesis involves only one independent variable and one dependent variable. It predicts a direct relationship between these two variables. Due to its limited scope, it is easy to test and interpret. Simple hypotheses are commonly used in basic research studies where the focus is on a single cause-and-effect relationship.

4. Complex Hypothesis

A complex hypothesis involves two or more independent variables, dependent variables, or both. It predicts relationships among multiple variables simultaneously. Such hypotheses are used in advanced research where phenomena are influenced by several factors. Although complex, they provide a more comprehensive understanding of real-life situations.

5. Directional Hypothesis

A directional hypothesis clearly specifies the direction of the relationship between variables. It indicates whether the effect will be positive or negative. For example, it may state that an increase in one variable leads to an increase or decrease in another. Directional hypotheses are based on prior knowledge or strong theoretical support.

6. Non-Directional Hypothesis

A non-directional hypothesis states that a relationship or difference exists between variables but does not specify the direction. It is used when there is insufficient prior evidence to predict the nature of the relationship. This type of hypothesis allows the researcher to remain open to multiple possible outcomes.

7. Statistical Hypothesis

A statistical hypothesis is expressed in statistical terms and is tested using statistical techniques. It includes null and alternative hypotheses formulated in terms of population parameters. Statistical hypotheses provide a quantitative basis for decision-making and are essential for hypothesis testing in empirical research.

8. Research Hypothesis

A research hypothesis is a tentative statement framed in conceptual terms, indicating the expected relationship between variables. It is often converted into statistical hypotheses for testing. Research hypotheses guide the entire study by linking theory with empirical investigation and helping in drawing meaningful conclusions.

Steps in Formulation of Hypothesis

Step 1. Identification of the Research Problem

The first step in formulating a hypothesis is identifying the research problem clearly. A well-defined problem highlights the issue to be studied and provides the foundation for hypothesis development. Understanding the problem helps the researcher focus on specific aspects of the study and avoid vague assumptions. Clear problem identification ensures that the hypothesis is relevant, meaningful, and directly related to the research objective.

Step 2. Review of Relevant Literature

Review of literature is a crucial step in hypothesis formulation. Existing theories, research studies, and findings are examined to understand established relationships between variables. Literature review helps in identifying research gaps and theoretical frameworks. It ensures that the hypothesis is grounded in existing knowledge and avoids duplication of earlier research, thereby enhancing the originality and relevance of the study.

Step 3. Identification of Variables

At this stage, the researcher identifies the key variables involved in the study. These include independent variables, dependent variables, and sometimes control variables. Understanding variables helps in determining what factors are expected to influence outcomes. Clear identification of variables ensures that the hypothesis is specific, testable, and measurable, which is essential for effective data collection and analysis.

Step 4. Establishing Relationship Between Variables

Once variables are identified, the researcher establishes a logical or theoretical relationship between them. This relationship may indicate cause and effect, association, or difference. Logical reasoning, theory, and prior evidence are used to determine how variables interact. This step helps in framing a hypothesis that explains or predicts the nature of the relationship to be tested.

Step 5. Formulation of a Tentative Statement

After establishing relationships, a tentative statement is framed in the form of a hypothesis. This statement predicts the expected outcome or relationship between variables. It should be clear, simple, and specific. The hypothesis may be stated in null or alternative form, depending on the research requirement. This step transforms assumptions into a testable proposition.

Step 6. Ensuring Testability and Clarity

The formulated hypothesis is then examined to ensure that it is testable and clearly stated. It should be capable of empirical verification through observation, experimentation, or statistical analysis. Ambiguous or abstract statements are revised. This step ensures that the hypothesis can be practically tested using available research methods and data.
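As a hedged illustration of what empirical testability means in practice, the sketch below uses invented test scores for two groups and computes a Welch's t-statistic with Python's standard library; a large absolute value of t counts as evidence against the null hypothesis of equal group means. All numbers are fabricated for illustration.

```python
import statistics

# Invented scores for two groups (illustrative data only)
group_a = [72, 75, 78, 80, 74, 77]  # e.g., taught with method A
group_b = [68, 70, 73, 69, 71, 72]  # e.g., taught with method B

# H0 (null hypothesis): the two groups have equal mean scores.
# H1 (alternative): the mean scores differ.
mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)

# Welch's t-statistic: the mean difference divided by its standard error
t = (mean_a - mean_b) / (var_a / len(group_a) + var_b / len(group_b)) ** 0.5
print(round(t, 2))  # a |t| this large would typically lead to rejecting H0
```

With SciPy available, `scipy.stats.ttest_ind(group_a, group_b, equal_var=False)` would give the same statistic together with a p-value.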

Step 7. Consistency with Research Objectives

The hypothesis must be consistent with the research objectives and problem statement. This step involves checking whether the hypothesis aligns with the overall purpose of the study. Consistency ensures logical flow between the problem, objectives, hypothesis, and methodology. A mismatch can lead to confusion and weak research outcomes.

Step 8. Finalization of Hypothesis

The final step is refining and finalizing the hypothesis in a precise and formal manner. It is stated in clear language, free from bias and ambiguity. Once finalized, the hypothesis guides data collection, analysis, and interpretation. A well-formulated hypothesis strengthens the scientific nature and credibility of the research study.

Characteristics of a Good Hypothesis

  • Clear and Precise

A good hypothesis must be clearly and precisely stated. It should convey its meaning without ambiguity so that the researcher and readers understand exactly what is being tested. Clear wording helps in proper interpretation and avoids confusion during data collection and analysis. Precision ensures that the hypothesis focuses only on the specific variables and relationships under study.

  • Testable and Verifiable

A good hypothesis should be capable of being tested through empirical observation, experimentation, or statistical analysis. It must allow verification using available research methods and data. Hypotheses that cannot be tested scientifically lack practical value. Testability ensures that the hypothesis can be accepted or rejected based on objective evidence.

  • Based on Existing Knowledge

A sound hypothesis is grounded in existing theories, concepts, or previous research findings. It is not based on mere assumptions or guesses. Drawing from established knowledge ensures logical consistency and increases the likelihood of meaningful results. This characteristic also helps in linking the hypothesis with the theoretical framework of the study.

  • States Relationship Between Variables

A good hypothesis clearly states the relationship between two or more variables. It specifies how one variable is expected to influence or relate to another. Clear identification of independent and dependent variables makes the hypothesis focused and measurable. This characteristic is essential for designing research methods and conducting analysis.

  • Simple and Specific

Simplicity and specificity are important characteristics of a good hypothesis. It should be expressed in simple language and focus on a limited number of variables. Overly complex hypotheses are difficult to test and interpret. Specific hypotheses provide clear guidance for research and reduce the chances of misinterpretation.

  • Consistent with Research Objectives

A good hypothesis must align with the research objectives and problem statement. Consistency ensures logical flow throughout the research process. If the hypothesis does not match the objectives, the study may lose focus and coherence. This characteristic helps in maintaining unity between problem identification, hypothesis formulation, and data analysis.

  • Objective and Free from Bias

Objectivity is a key characteristic of a good hypothesis. It should not reflect the researcher’s personal beliefs or expectations. The hypothesis must be framed in a neutral manner, allowing unbiased testing. Objectivity ensures scientific integrity and increases the credibility and reliability of research findings.

  • Limited in Scope

A good hypothesis has a limited and well-defined scope. It should not be too broad or vague, as this makes testing difficult. A limited scope ensures feasibility and allows in-depth analysis. This characteristic helps in managing time, resources, and data effectively during the research process.

  • Logical and Consistent

Logical reasoning is essential in hypothesis formulation. A good hypothesis should follow logically from the research problem and existing theory. It must be internally consistent and free from contradictions. Logical hypotheses enhance clarity, support systematic investigation, and contribute to meaningful conclusions.

Plagiarism in research

Plagiarism means presenting someone else’s work as your own. In academic writing, plagiarizing involves using words, ideas, or information from a source without including a proper citation.

Plagiarism is the representation of another author’s language, thoughts, ideas, or expressions as one’s own original work. In educational contexts, there are differing definitions of plagiarism depending on the institution. Plagiarism is considered a violation of academic integrity and a breach of journalistic ethics. It is subject to sanctions such as penalties, suspension, expulsion from school or work, substantial fines and even incarceration. Recently, cases of “extreme plagiarism” have been identified in academia. The modern concept of plagiarism as immoral and originality as an ideal emerged in Europe in the 18th century, particularly with the Romantic movement.

Generally, plagiarism is not in itself a crime, but, like counterfeiting or fraud, it can be punished in a court for prejudices caused by copyright infringement, violation of moral rights, or torts. In academia and industry, it is a serious ethical offense. Plagiarism and copyright infringement overlap to a considerable extent, but they are not equivalent concepts, and many types of plagiarism do not constitute copyright infringement, which is defined by copyright law and may be adjudicated by courts.

Plagiarism can have serious consequences for students and researchers, even when it’s done accidentally. To avoid plagiarism, it’s important to keep track of your sources and cite them correctly.

Within academia, plagiarism by students, professors, or researchers is considered academic dishonesty or academic fraud, and offenders are subject to academic censure, up to and including expulsion. Some institutions use plagiarism detection software to uncover potential plagiarism and to deter students from plagiarizing. However, plagiarism detection software does not always yield accurate results and there are loopholes in these systems. Some universities address the issue of academic integrity by providing students with thorough orientations, required writing courses, and clearly articulated honor codes. Indeed, there is a virtually uniform understanding among college students that plagiarism is wrong. Nevertheless, each year students are brought before their institutions’ disciplinary boards on charges that they have misused sources in their schoolwork. However, the practice of plagiarizing by use of sufficient word substitutions to elude detection software, known as rogeting, has rapidly evolved as students and unethical academics seek to stay ahead of detection software.

An extreme form of plagiarism, known as “contract cheating”, involves students paying someone else, such as an essay mill, to do their work for them.

Academia

No universally adopted definition of academic plagiarism exists. However, this section provides several definitions to exemplify the most common characteristics of academic plagiarism. It has been defined as “the use of ideas, concepts, words, or structures without appropriately acknowledging the source to benefit in a setting where originality is expected.”

This is an abridged version of Teddi Fishman’s definition of plagiarism, which proposed five elements characteristic of plagiarism. According to Fishman, plagiarism occurs when someone:

  • Uses words, ideas, or work products;
  • Attributable to another identifiable person or source;
  • Without attributing the work to the source from which it was obtained;
  • In a situation in which there is a legitimate expectation of original authorship;
  • In order to obtain some benefit, credit, or gain which need not be monetary.

Types:

Accidental Plagiarism

Accidental plagiarism occurs when a person neglects to cite their sources, or misquotes their sources, or unintentionally paraphrases a source by using similar words, groups of words, and/or sentence structure without attribution. Students must learn how to cite their sources and to take careful and accurate notes when doing research. Lack of intent does not absolve the student of responsibility for plagiarism. Cases of accidental plagiarism are taken as seriously as any other plagiarism and are subject to the same range of consequences as other types of plagiarism.

Self Plagiarism

Self-plagiarism occurs when a student submits his or her own previous work, or mixes parts of previous works, without permission from all professors involved. For example, it would be unacceptable to incorporate part of a term paper you wrote in high school into a paper assigned in a college course. Self-plagiarism also applies to submitting the same piece of work for assignments in different classes without previous permission from both professors.

Direct Plagiarism

Direct plagiarism is the word-for-word transcription of a section of someone else’s work, without attribution and without quotation marks. The deliberate plagiarism of someone else’s work is unethical, academically dishonest, and grounds for disciplinary actions, including expulsion.

Mosaic Plagiarism

Mosaic Plagiarism occurs when a student borrows phrases from a source without using quotation marks, or finds synonyms for the author’s language while keeping to the same general structure and meaning of the original. Sometimes called “patch writing,” this kind of paraphrasing, whether intentional or not, is academically dishonest and punishable even if you footnote your source.

Avoiding plagiarism

  • When you want to express an idea or information from a source, paraphrase or summarize it entirely in your own words.
  • When you want to include an exact phrase, sentence or passage from a source, use a quotation.
  • Always cite the source when you quote, paraphrase, or summarize.

Measures to overcome Plagiarism

Paraphrase your content

Do not copy–paste the text verbatim from the reference paper. Instead, restate the idea in your own words.

Use Quotations

Use quotes to indicate that the text has been taken from another paper. The quotes should be exactly the way they appear in the paper you take them from.

Cite your Sources

Identify what does and does not need to be cited.

Any words or ideas that are not your own but taken from another paper need to be cited.

Cite Your Own Material

If you are using content from your previous paper, you must cite yourself. Using material you have published before without citation is called self-plagiarism.

Maintain records of the sources you refer to

Use multiple references for the background information/literature survey. For example, rather than referencing a review, the individual papers should be referred to and cited.

Types of Variables in relation to research design

Independent and Dependent Variables

In general, experiments purposefully change one variable: the independent variable. A variable that changes in direct response to the independent variable is the dependent variable. Say there is an experiment to test whether changing the position of an ice cube affects its ability to melt. The change in the ice cube’s position is the independent variable; the result of whether the ice cube melts or not is the dependent variable.

Intervening variables

An intervening variable, sometimes called a mediator variable, is a theoretical variable the researcher uses to explain the cause or connection between other study variables, usually the dependent and independent ones. Intervening variables are inferred associations rather than direct observations. For example, if wealth is the independent variable and a long life span is the dependent variable, the researcher might hypothesize that access to quality healthcare is the intervening variable that links wealth and life span.

Moderating variables

A moderating or moderator variable changes the relationship between the dependent and independent variables by strengthening or weakening the independent variable’s effect. For example, in a study looking at the relationship between economic status (independent variable) and how frequently people get physical exams from a doctor (dependent variable), age is a moderating variable: that relationship might be weaker in younger individuals and stronger in older individuals.

Constant or Controllable Variable

Sometimes certain characteristics of the objects under scrutiny are deliberately left unchanged. These are known as constant or controlled variables. In the ice cube experiment, one constant or controllable variable could be the size and shape of the cube. By keeping the ice cubes’ sizes and shapes the same, it’s easier to measure the differences between the cubes as they melt after shifting their positions, as they all started out as the same size.

Extraneous variables

Extraneous variables are factors that affect the dependent variable but that the researcher did not originally consider when designing the experiment. These unwanted variables can unintentionally change a study’s results or how a researcher interprets those results. Take, for example, a study assessing whether private tutoring or online courses are more effective at improving students’ Spanish test scores. Extraneous variables that might unintentionally influence the outcome include parental support, prior knowledge of a foreign language or socioeconomic status.

Qualitative variables

Qualitative, or categorical, variables are non-numerical values or groupings. Examples might include eye or hair color. Researchers can further categorize qualitative variables into three types:

  • Binary: Variables with only two categories, such as male or female, red or blue.
  • Nominal: Variables you can organize in more than two categories that do not follow a particular order. Take, for example, housing types: Single-family home, condominium, tiny home.
  • Ordinal: Variables you can organize in more than two categories that follow a particular order. Take, for example, level of satisfaction: Unsatisfied, neutral, satisfied.
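As a small illustrative sketch (category names taken from the examples above), the distinction among the three types matters in practice because only ordinal data supports order-based operations:

```python
# Binary: exactly two categories -- naturally modelled as a boolean
passed_exam = True

# Nominal: more than two categories with no inherent order -- a set fits well
housing_types = {"single-family home", "condominium", "tiny home"}

# Ordinal: ordered categories, encodable as integer ranks
satisfaction_order = ["unsatisfied", "neutral", "satisfied"]
rank = {level: i for i, level in enumerate(satisfaction_order)}

# Order-based operations (sorting, comparison) are meaningful only for ordinal data
responses = ["satisfied", "neutral", "satisfied", "unsatisfied"]
print(sorted(responses, key=rank.get))
```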

Quantitative variables

Quantitative variables are any data sets that involve numbers or amounts. Examples might include height, distance or number of items. Researchers can further categorize quantitative variables into two types:

  • Discrete: Any numerical variables you can realistically count, such as the coins in your wallet or the money in your savings account.
  • Continuous: Numerical variables that you could never finish counting, such as time.

Composite variables

A composite variable is two or more variables combined to make a more complex variable. Overall health is an example of a composite variable if you use other variables, such as weight, blood pressure and chronic pain, to determine overall health in your experiment.

Confounding variables

A confounding variable is one you did not account for that can disguise another variable’s effects. Confounding variables can invalidate your experiment results by making them biased or suggesting a relationship between variables exists when it does not. For example, if you are studying the relationship between exercise level (independent variable) and body mass index (dependent variable) but do not consider age’s effect on these factors, it becomes a confounding variable that changes your results.
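To make the age example concrete, here is a minimal sketch with invented numbers showing how an unaccounted-for confounder can inflate a naive comparison, and how stratifying on it changes the picture (all values are fabricated for illustration):

```python
# Fabricated records (illustrative only): (age, weekly exercise hours, BMI)
data = [
    (25, 2, 22.0), (25, 4, 21.0), (30, 3, 22.5),
    (60, 1, 28.0), (60, 3, 26.5), (65, 2, 28.5),
]

def gap(rows):
    """Mean BMI of low-exercise subjects minus mean BMI of high-exercise subjects."""
    low = [bmi for _, ex, bmi in rows if ex <= 2]
    high = [bmi for _, ex, bmi in rows if ex > 2]
    return sum(low) / len(low) - sum(high) / len(high)

# Pooled comparison ignores age, so age's effect on BMI leaks into the result
pooled_gap = gap(data)

# Stratifying by age holds the confounder roughly constant within each stratum
young_gap = gap([r for r in data if r[0] < 40])
old_gap = gap([r for r in data if r[0] >= 40])

# The pooled gap exceeds both within-stratum gaps: part of it is really age's effect
print(round(pooled_gap, 2), round(young_gap, 2), round(old_gap, 2))
```

In a real study one would control for age by design (matching, randomization) or statistically (e.g., multiple regression), rather than by this toy stratification.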

Sources of problem formulation

Identification of a research problem refers to the awareness of a prevalent social problem, a social phenomenon, or a concept that is worth studying and requires investigation to be understood. The researcher identifies such a research problem through observation, knowledge, wisdom, and skills.

The identification of a problem to study can be challenging, not because there’s a lack of issues that could be investigated, but due to the challenge of formulating an academically relevant and researchable problem which is unique and does not simply duplicate the work of others. To facilitate how you might select a problem from which to build a research study, consider these sources of inspiration:

Relevant Literature

The selection of a research problem can be derived from a thorough review of pertinent research associated with your overall area of interest. This may reveal where gaps exist in understanding a topic or where an issue has been understudied. Research may be conducted to:

1) Fill such gaps in knowledge.

2) Evaluate if the methodologies employed in prior studies can be adapted to solve other problems.

3) Determine if a similar study could be conducted in a different subject area or applied in a different context or to different study sample [i.e., different setting or different group of people].

Also, authors frequently conclude their studies by noting implications for further research; read the conclusions of pertinent studies, because statements about further research can be a valuable source for identifying new problems to investigate. The fact that an established researcher has identified a topic as worthy of further exploration is itself evidence that it is worth pursuing.

Personal Experience

Don’t undervalue your everyday experiences or encounters as worthwhile problems for investigation. Think critically about your own experiences and/or frustrations with an issue facing society, your community, your neighborhood, your family, or your personal life. This can be derived, for example, from deliberate observations of certain relationships for which there is no clear explanation or witnessing an event that appears harmful to a person or group or that is out of the ordinary.

Interviewing Practitioners

The identification of research problems about particular topics can arise from formal interviews or informal discussions with practitioners who provide insight into new directions for future research and how to make research findings more relevant to practice. Discussions with experts in the field, such as teachers, social workers, health care providers, lawyers, and business leaders, offer the chance to identify practical, “real world” problems that may be understudied or ignored within academic circles. This approach also provides some practical knowledge which may help in the process of designing and conducting your study.

Interdisciplinary Perspectives

Identifying a problem that forms the basis for a research study can come from academic movements and scholarship originating in disciplines outside of your primary area of study. This can be an intellectually stimulating exercise. A review of pertinent literature should include examining research from related disciplines that can reveal new avenues of exploration and analysis. An interdisciplinary approach to selecting a research problem offers an opportunity to construct a more comprehensive understanding of a very complex issue than any single discipline could provide.

Deductions from Theory

This relates to deductions made from social philosophy or generalizations embodied in life and in society that the researcher is familiar with. These deductions from human behavior are then placed within an empirical frame of reference through research. From a theory, the researcher can formulate a research problem or hypothesis stating the expected findings in certain empirical situations. The researcher asks the question: “What relationship between variables will be observed if the theory aptly summarizes the state of affairs?” One can then design and carry out a systematic investigation to assess whether empirical data confirm or reject the hypothesis, and hence, the theory.

Research Method vs Research Methodology

Research is a systematic process of investigating a subject in depth in order to gain knowledge and build theories from it. Anything that helps you gain knowledge about science, culture, or society can be termed research. There are mainly three types of research: scientific, artistic, and historical. All the theories and facts that emerge from these kinds of research are arrived at through different methods and steps.

Research Method

Research methods are the techniques or tools used for conducting research, irrespective of whether the research belongs to the physical sciences, the social sciences, or any other discipline.

Research methods may be put into the following three groups:

  • The first group includes methods dealing with collection and description of data.
  • The second group consists of techniques used for establishing a statistical relationship between variables.
  • The third group deals with methods used to evaluate the reliability, validity, and accuracy of the results discerned by the data.

Research Methodology

Research methodology is a way to study the various steps that are generally adopted by a researcher in studying his research problems systematically, along with the logic, assumptions, justification, and rationale behind them. It is the process that governs the research methods: it analyses the direction in which the methods are moving and guides them along the right path to fulfil the objective of the study. Methodology is a broader, two-dimensional plan of which research methods are only one part; it encompasses many processes beyond the methods themselves. A sound methodology leads to a successful study.

The researcher takes an overview of various steps that are chosen by him in understanding the problem at hand, along with the logic behind the methods employed by the researcher during study. It also clarifies the reason for using a particular method or technique, and not others, so that the results obtained can be assessed either by the researcher himself or any other party.

A researcher’s methodology aims at answering such questions as:

  • How has the research problem been defined?
  • Why was this particular group of people interviewed and not the other groups?
  • How many individuals provided the answers on which the researcher’s conclusions were based?
  • In what way and why has the research hypothesis been formulated?
  • Why were these particular techniques used to analyze data?
  • What level of evidence was used to determine whether or not to reject the stated hypothesis?
| Research Method | Research Methodology |
| --- | --- |
| It is the process of conducting surveys, collecting data, and performing analysis. | It is the process of studying research methods. |
| The aim is to find the result of the research. | The aim is to ensure that the research procedure stays on the correct path to a successful outcome. |
| It is mostly applicable at the final stages of research. | It is applicable from the initial stages of research. |
| It consists of surveys, data collection, and analysis. | It consists of a systematic strategy. |
| It is a part of research methodology. | It is the overall plan and procedure for successful research. |
| It concerns the behaviour and instruments used in the selection and construction of research techniques. | It is the science of understanding how research is performed methodically. |
| It involves carrying out experiments, tests, surveys, and so on. | It involves studying the different techniques that can be used to perform experiments, tests, surveys, etc. |
| It comprises different investigation techniques. | It is the entire strategy towards the achievement of the objective. |
| Its purpose is to discover a solution to the research problem. | Its purpose is to apply correct procedures so as to determine solutions. |
