Reliability

Reliability refers to the consistency of a measure. A test is considered reliable if we get the same result repeatedly.

For example, if a test is designed to measure a specific trait, then each time the test is administered to a subject, the results should be approximately the same.

Unfortunately, it is impossible to calculate reliability exactly, but there are several different ways to estimate it. The types of reliability that can be estimated are:

Test-Retest Reliability

Inter-rater Reliability

Parallel-Forms Reliability

Internal Consistency Reliability

To gauge test-retest reliability, the test is administered twice at two different points in time. This kind of reliability is used to assess the consistency of a test over a period of time. Test-retest reliability assumes that there will be no change in the quality or construct that is being measured.
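In practice, test-retest reliability is often estimated as the correlation between the two administrations. The sketch below, in Python with NumPy, computes that correlation for invented scores from eight subjects; the data and variable names are purely illustrative.

```python
import numpy as np

# Hypothetical scores for the same eight subjects tested twice.
time1 = np.array([12, 15, 11, 18, 14, 16, 13, 17])
time2 = np.array([13, 14, 11, 17, 15, 16, 12, 18])

# The Pearson correlation between the two administrations serves as the
# test-retest reliability estimate (closer to 1 = more stable over time).
r = np.corrcoef(time1, time2)[0, 1]
print(f"Test-retest reliability estimate: {r:.2f}")
```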

Inter-rater reliability is assessed by having two or more independent raters score the test, then comparing the scores to determine the consistency of the raters’ estimates.
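One common statistic for two raters assigning categorical scores is Cohen's kappa, which corrects raw agreement for the agreement expected by chance. The sketch below implements it directly in NumPy; the ratings and the cohens_kappa helper are invented for illustration.

```python
import numpy as np

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters assigning categorical ratings."""
    rater1, rater2 = np.asarray(rater1), np.asarray(rater2)
    categories = np.union1d(rater1, rater2)
    # Observed agreement: proportion of items both raters scored identically.
    p_o = np.mean(rater1 == rater2)
    # Chance agreement: product of each rater's marginal proportions, summed
    # over categories.
    p_e = sum(np.mean(rater1 == c) * np.mean(rater2 == c) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical pass/fail ratings from two independent raters.
rater_a = ["pass", "pass", "fail", "pass", "fail", "pass"]
rater_b = ["pass", "fail", "fail", "pass", "fail", "pass"]
print(f"Cohen's kappa: {cohens_kappa(rater_a, rater_b):.2f}")
```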

Parallel-forms reliability is estimated by comparing two equivalent forms of a test created from the same content. The two forms are then administered to the same subjects at the same time and their scores are compared.
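Computationally this mirrors the test-retest case: the reliability estimate is the correlation between scores on the two forms. A minimal sketch, again with invented data:

```python
import numpy as np

# Hypothetical total scores for the same six subjects on two parallel
# forms built from the same pool of content.
form_a = np.array([22, 30, 25, 28, 19, 27])
form_b = np.array([24, 29, 26, 27, 20, 28])

# The correlation between the two forms estimates parallel-forms reliability.
r = np.corrcoef(form_a, form_b)[0, 1]
print(f"Parallel-forms reliability estimate: {r:.2f}")
```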

Internal consistency reliability is used to judge the consistency of results across items on the same test; i.e., test items that measure the same construct are compared in order to determine the test's internal consistency.
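The most widely reported index of internal consistency is Cronbach's alpha, which compares the variance of the individual items to the variance of the total score. Below is a minimal NumPy sketch; the score matrix and the cronbachs_alpha helper are hypothetical.

```python
import numpy as np

def cronbachs_alpha(items):
    """Cronbach's alpha for a (subjects x items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    # alpha = k/(k-1) * (1 - sum of item variances / total-score variance)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses of 5 subjects to 4 items measuring one construct.
scores = [[3, 4, 3, 4],
          [2, 2, 3, 2],
          [4, 5, 4, 5],
          [3, 3, 3, 4],
          [1, 2, 2, 1]]
print(f"Cronbach's alpha: {cronbachs_alpha(scores):.2f}")
```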
