What is the Kuder-Richardson method of reliability?

Kuder-Richardson Formula 20, or KR-20, is a measure of reliability for a test with binary items (i.e., answers that are scored right or wrong). Reliability refers to how consistent the results from the test are, or how well the test is actually measuring what you want it to measure.

How do you calculate the reliability of the KR21 index?

The formula for KR21 for scale score X is K/(K-1) * (1 - U*(K-U)/(K*V)), where K is the number of items, U is the mean of X, and V is the variance of X.
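
As a minimal sketch (not from the source), KR21 can be computed directly from a vector of total scores; the function name kr21 and the example scores below are assumptions made for illustration:

```python
import numpy as np

def kr21(scores, k):
    """KR-21 reliability from total scores on a k-item test.

    scores: total score X for each test taker.
    k: number of items on the test.
    Uses the population ("biased") variance of X, matching the formula above.
    """
    x = np.asarray(scores, dtype=float)
    u = x.mean()       # U: mean of X
    v = x.var()        # V: population variance of X (ddof=0)
    return (k / (k - 1)) * (1 - u * (k - u) / (k * v))

# Hypothetical example: total scores of six test takers on a 10-item test
print(kr21([7, 8, 5, 9, 6, 4], k=10))
```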

What is KR-20 used for?

The Kuder and Richardson Formula 20 (KR20) is used to estimate the reliability of binary measurements, that is, to see whether the items within a test produce the same binary (right/wrong) results across a population of test takers.
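
As an illustration (not taken from the source), here is a minimal sketch of the KR20 computation from a persons-by-items matrix of 0/1 (wrong/right) responses; the function name kr20 and the response data are hypothetical:

```python
import numpy as np

def kr20(item_matrix):
    """KR-20 from a persons-by-items matrix of 0/1 (wrong/right) scores."""
    x = np.asarray(item_matrix, dtype=float)
    k = x.shape[1]                   # number of items
    p = x.mean(axis=0)               # proportion correct for each item
    q = 1 - p                        # proportion incorrect for each item
    var_total = x.sum(axis=1).var()  # population variance of total scores
    return (k / (k - 1)) * (1 - (p * q).sum() / var_total)

# Hypothetical example: six test takers, five binary items
responses = [[1, 1, 1, 1, 1],
             [1, 1, 1, 0, 1],
             [0, 0, 1, 0, 0],
             [0, 0, 0, 0, 1],
             [1, 1, 1, 1, 0],
             [0, 0, 0, 0, 0]]
print(kr20(responses))  # about 0.83 for this data
```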

What is the difference between KR-20 and KR-21?

KR-21 is a simplified version of KR-20 that can be used when the difficulty of all items on the test is known to be equal. Like KR-20, it was introduced in Kuder and Richardson’s 1937 paper, where it appeared as the twenty-first formula discussed. As in KR-20, K is the number of items.

What does a low KR-20 mean?

Usually, a KR20 value of 0.8 is considered the minimal acceptable value. A figure below 0.8 could indicate that the exam was not reliable. The KR20 value is influenced by item difficulty, the spread of scores, and the length of the examination.

What does a negative KR-20 mean?

KR-20 scores range from 0 to 1 (although it is possible to obtain a negative score); 0 indicates no reliability and 1 represents perfect test reliability. A negative value usually signals a problem, such as items that are negatively related to one another or scoring errors, and the test should be reviewed rather than interpreted. A KR-20 score above 0.70 is generally considered to represent a reasonable level of internal consistency reliability.

What is a scorer reliability?

Scorer reliability refers to the consistency with which different people who score the same test agree. For a test with a definite answer key, scorer reliability is of negligible concern. When the subject responds with his own words, handwriting, and organization of subject matter, however,…

How is Scorer reliability established?

Score reliability is defined as the consistency and stability of scores obtained from a specific test for a particular group of people (Thompson, 2003). A second type, intrarater reliability, is established when a rater completes the same assessment on two or more occasions.

How reliable is KR21?

When KR21 is used to estimate a test’s reliability, the user should note that it provides a lower-bound estimate of the test’s internal consistency reliability, particularly when the range of item difficulties is large.

When should Cronbach’s Alpha be used instead of KR-20?

If you have a test with more than two answer possibilities (or opportunities for partial credit), use Cronbach’s Alpha instead. The scores for KR-20 range from 0 to 1, where 0 is no reliability and 1 is perfect reliability. The closer the score is to 1, the more reliable the test.
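
For items with more than two score levels, here is a minimal sketch of Cronbach’s Alpha (the standard coefficient-alpha formula, not anything specific to this source); the function name cronbach_alpha and the example data are hypothetical:

```python
import numpy as np

def cronbach_alpha(item_matrix):
    """Cronbach's Alpha for a persons-by-items matrix of item scores.

    Items may be polytomous (e.g. partial credit); with purely 0/1 items
    this gives the same value as KR-20.
    """
    x = np.asarray(item_matrix, dtype=float)
    k = x.shape[1]                    # number of items
    item_vars = x.var(axis=0)         # population variance of each item
    total_var = x.sum(axis=1).var()   # population variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: four test takers, three items scored 0-2
scores = [[2, 1, 2],
          [1, 0, 1],
          [2, 2, 2],
          [0, 1, 0]]
print(cronbach_alpha(scores))
```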

What is the difference between Aggregate and KR21?

The KR21 formula uses the population (“biased”) estimate of the scale score variance, whereas Aggregate computes the sample (“unbiased”) estimate. Also, Aggregate will save the standard deviation of a variable to the new data set, but does not have a function to save the variance.
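
The practical difference is only in the variance denominator (n versus n - 1). The snippet below simply illustrates the two variance estimates with NumPy; it does not reproduce the Aggregate procedure itself, and the scores are hypothetical:

```python
import numpy as np

x = np.array([7, 8, 5, 9, 6, 4], dtype=float)  # hypothetical scale scores

pop_var = x.var(ddof=0)     # population ("biased") variance: divides by n
sample_var = x.var(ddof=1)  # sample ("unbiased") variance: divides by n - 1

print(pop_var, sample_var)  # the sample estimate is larger by a factor of n/(n - 1)
```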

What is an acceptable KR-20 score?

The scores for KR-20 range from 0 to 1, where 0 is no reliability and 1 is perfect reliability. The closer the score is to 1, the more reliable the test. Just what constitutes an “acceptable” KR-20 score depends on the type of test. In general, a score above .5 is usually considered reasonable.
