
What is Reliability and Validity in Research

Definitions of reliability and validity

Reliability

In research, reliability is the degree to which the results of a study are consistent and repeatable. Researchers repeat a study in different settings to assess its reliability, and theories are developed from research inferences only when the findings prove to be highly reliable.

In experiments, reliability can be established by repeating the experiment several times. In the social sciences, the researcher relies on careful reasoning to achieve more reliable results; however, reliability in data collection is harder to achieve there, because human behavior is difficult to reproduce even in similar situations. Several external factors influence human behavior, so it is important to account for the effect of any factor other than the independent variable.


Types of reliability

  • Test-retest reliability

Test-retest reliability measures the stability of a measure over time. In the social sciences, a test is administered more than once, at different points in time, to check its reliability. In the natural sciences, the researcher conducts an experiment more than once to ascertain its reliability. Because the results of the tests, and the inferences drawn from them, have to be applied in natural settings, they should be reliable. This method of testing reliability is time-consuming, since the researcher has to wait before re-administering the test.
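In quantitative studies, test-retest reliability is typically quantified as the correlation between scores from the two administrations. A minimal Python sketch, using made-up scores for five participants tested on two occasions (the data are purely illustrative):

```python
def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores of five participants, tested twice two weeks apart
time1 = [12, 15, 11, 18, 14]
time2 = [13, 14, 12, 17, 15]

r = pearson_r(time1, time2)  # values near 1.0 suggest a stable measure
```

A coefficient close to 1.0 indicates that participants' relative standing barely changed between administrations, which is what a reliable measure should produce.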

  • Parallel forms reliability

Parallel forms reliability measures the reliability of a test by administering it in two different forms. Both forms measure the same variables under study, but the format of the measure differs. The researcher must be able to formulate two different tests that measure the same variables; the difficulty lies in formulating two tests that are similar in nature and measurement level. The researcher may also find it difficult to administer them to two similar populations. In the social sciences, using parallel forms of the same test is difficult, and a high degree of subjectivity is involved.

  • Inter-rater reliability

Inter-rater reliability is checked by having the same measure scored by more than one rater or judge. The researcher asks more than one person to rate the test and compares their ratings. This type of reliability check is especially useful for subjective measures, where agreement among several raters best indicates the reliability of the test.
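A common statistic for this is Cohen's kappa, which measures agreement between two raters beyond what chance alone would produce. A minimal sketch in Python, with hypothetical pass/fail ratings from two judges (the ratings are invented for illustration):

```python
def cohens_kappa(r1, r2):
    """Cohen's kappa: agreement between two raters corrected for chance."""
    n = len(r1)
    categories = set(r1) | set(r2)
    # proportion of items on which the raters actually agree
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    # agreement expected by chance, from each rater's category frequencies
    expected = sum(
        (r1.count(c) / n) * (r2.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

# Hypothetical ratings given by two judges to six essays
rater1 = ["pass", "pass", "fail", "pass", "fail", "pass"]
rater2 = ["pass", "fail", "fail", "pass", "fail", "pass"]

kappa = cohens_kappa(rater1, rater2)
```

Kappa of 1.0 means perfect agreement, 0 means agreement no better than chance; values above roughly 0.6 are conventionally read as substantial agreement.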

  • Internal consistency reliability

Kumar, R. (2000a), in Research Methodology, states that the idea behind internal consistency reliability is that items measuring the same phenomenon should produce similar results.[1] The items that test an attitude or behavior are divided in half, each half is scored separately, and the two sets of scores are correlated. For example, suppose the researcher has developed a questionnaire to test people's attitudes towards a state program. The researcher might divide the questions in half, administer the two halves separately, and correlate the resulting scores to estimate internal consistency reliability.
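The split-half procedure described above can be computed directly: sum each respondent's odd-numbered and even-numbered items, correlate the two half-scores, and apply the Spearman-Brown correction to estimate the reliability of the full-length test. A minimal sketch, using invented Likert-scale responses:

```python
def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def split_half_reliability(item_scores):
    """Correlate odd- vs even-item half scores, then apply the
    Spearman-Brown correction for the full test length."""
    odd = [sum(row[0::2]) for row in item_scores]
    even = [sum(row[1::2]) for row in item_scores]
    r = pearson_r(odd, even)
    return 2 * r / (1 + r)  # Spearman-Brown prophecy formula

# Hypothetical 1-5 Likert answers: five respondents, four items each
responses = [
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
]

reliability = split_half_reliability(responses)  # high values = consistent items
```

The Spearman-Brown step matters because each half contains only half the items, and shorter tests are inherently less reliable; the correction projects the half-test correlation up to the full test length.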

Validity

Validity refers to the accuracy of the research as a whole and of each step independently. It is the highest aim every researcher wants to achieve: when we measure what we intended to measure, we reach a conclusion that is valid and verifiable.

According to Kerlinger, 'the commonest definition of validity is epitomized by the question: Are we measuring what we think we are measuring?' The first step towards achieving validity is to develop research objectives that genuinely target the research questions you have formulated. Kumar, R. (2000b), in his book Research Methodology, states that to establish validity, the researcher should use both logic and statistical evidence.[2]

The concept of validity was formulated by Kelley (1927), who stated that a test is valid if it measures what it claims to measure.[3]

  • Internal validity

Internal validity refers to the accuracy of the data collection tools, procedures, and techniques. In other words, it is the soundness of the internal design, which the researcher must control: an inaccurate research design and weak techniques cannot yield accurate and valid results.

McLeod, S. A. (2007) states that internal validity refers to whether the effects observed in a study are due to the manipulation of the independent variable and not some other factor; in other words, there is a causal relationship between the independent and dependent variables.[4] In experimental designs the researcher needs to measure the variables very carefully, because extraneous variables can affect the outcomes and undermine the research.

  • External validity

External validity refers to the extent to which the outcomes of the research can be generalized to the population; the findings should be applicable to a larger audience. Some experiments conducted in a laboratory cannot be generalized to natural settings, so conducting experiments in natural settings can help improve the external validity of the research. External validity concerns the reach and scope of the research: you cannot limit your conclusions to the laboratory where you conducted the experiments.

Examples

There are several examples that illustrate internal and external validity. For instance, a laboratory memory experiment that carefully controls extraneous variables may have strong internal validity, but if its participants are all university students, its findings may not generalize to the wider population, limiting its external validity.

Types of validity

  • Face validity

Face validity, as the name suggests, concerns the face value of the research or of the measures used in it. It is not a rigorous way to establish validity; it is a surface-level check accessible to laypeople. Face validity can be assessed by the people taking the test, since they can judge whether the measure appears appropriate, and the researcher can also ask experts in the field to examine the measure and its validity.

  • Construct validity

To check construct validity, a panel of experts is convened. They check whether the construct measures what it is supposed to measure: the measure should capture the intended construct and not some external factor.

  • Criterion-related validity

Sometimes an instrument is developed against a particular criterion. Criterion-related validity is judged by comparing the instrument's results with a later assessment: if the future assessment confirms them, the test devised to measure the behavior was valid and can be used again.

Note:

Reliability and validity are the two most important characteristics of research. Every piece of research, whether in the social sciences, the physical sciences, literature, or art, should demonstrate both reliability and validity.

References

  1. Kumar, R. (2000a). Research Methodology. London: Sage Publications, p. 140.
  2. Kumar, R. (2000b). Research Methodology. London: Sage Publications, pp. 137-138.
  3. Kelley, T. L. (1927). Interpretation of Educational Measurements. New York: Macmillan.
  4. McLeod, S. A. (2007). What is validity? Retrieved from http://www.simplypsychology.org/validity.html
