The reliability of a test refers to its ability to measure a given trait consistently. If the outcome of a measure varies each time it is applied, the measure is not reliable. There are several forms of reliability, depending on the format and purpose of the test. Interitem consistency means that the individual items of a test are well correlated with one another. This form of reliability is used with questionnaires in which multiple items rate a single trait. Test-retest reliability measures how well an initial administration of a test correlates with a repeated administration; it is useful only when the trait being measured is unlikely to change much over time. Interrater reliability applies to semi-structured questionnaires and other instruments whose scoring requires complex subjective judgments by the rater. An instrument has interrater reliability when two or more raters rate the same material the same way.