Validity and reliability



Definition of Validity

Validity refers to the appropriateness of the interpretation of the results and not to the test itself, though colloquially we speak about a test being valid.

Categories of Validity

>> Content/Face validity

  1. Content validity addresses the match between test questions and the content or subject area they are intended to assess.
  2. Face validity refers to the extent to which a test, or the questions on a test, appear to measure a particular construct as viewed by examinees.
>> Criterion-related validity

A criterion-related validation study can be either predictive of later behavior or a concurrent measure of behavior or knowledge.

  • Predictive validity refers to the "power" or usefulness of test scores to predict future performance.

  • Concurrent validity needs to be examined whenever one measure is substituted for another, such as allowing students to pass a test instead of taking a course.

>> Construct validity
  1. Construct validity refers to the degree to which a test or other measure assesses the underlying theoretical construct it is supposed to measure.
  2. It is divided into two types:
  • Convergent validity consists of providing evidence that two tests that are believed to measure closely related skills or types of knowledge correlate strongly. That is to say, the two different tests end up ranking students similarly.

  • Discriminant validity, by the same logic, consists of providing evidence that two tests that do not measure closely related skills or types of knowledge do not correlate strongly.

Factors that influence validity
  1. Preparation of the question paper
  2. Preparation of the marking scheme
  3. Assessment of the answer scripts

Definition of Reliability

Reliability is synonymous with consistency. It is the degree to which test scores for an individual test taker, or a group of test takers, are consistent over repeated applications.


Determining Reliability

> Internal consistency

Measures the reliability of a test based solely on the number of items on the test and the intercorrelations among the items. It therefore compares each item to every other item.
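One widely used index built on these item intercorrelations is Cronbach's alpha. Below is a minimal sketch using only the Python standard library; the quiz data are invented for illustration:

```python
from statistics import pvariance

# Hypothetical scores of five examinees on a four-item quiz (rows = examinees).
scores = [
    [2, 3, 3, 4],
    [1, 1, 2, 1],
    [3, 3, 4, 4],
    [2, 2, 2, 3],
    [1, 2, 1, 2],
]

k = len(scores[0])                                   # number of items
item_vars = [pvariance([row[i] for row in scores]) for i in range(k)]
total_var = pvariance([sum(row) for row in scores])  # variance of total scores

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(round(alpha, 2))
```

Values close to 1 indicate that the items hang together well; here the items are highly intercorrelated, so alpha comes out high.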

> Test-retest reliability

Measured by computing the correlation coefficient between the scores from two administrations of the same test.

> Inter-rater reliability

Whenever you use humans as part of your measurement procedure, you have to worry about whether the results you get are reliable or consistent.
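A simple first check is percent agreement between two raters. The sketch below uses hypothetical pass/fail judgements:

```python
# Hypothetical pass/fail judgements by two raters on the same ten scripts.
rater_a = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail", "pass", "pass"]
rater_b = ["pass", "fail", "fail", "pass", "fail", "pass", "pass", "fail", "pass", "fail"]

# Percent agreement: the fraction of scripts both raters judged the same way.
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
print(agreement)
```

More refined indices, such as Cohen's kappa, additionally correct for agreement that would occur by chance.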

> Split-half

Refers to the correlation between scores on the first half of the measurement and scores on the second half.
For example, we would expect answers to the first half to be similar to answers to the second half.

Factors affecting reliability

 > Administrator factors

Poor or unclear directions given during administration, or inaccurate scoring, can affect reliability.

> Number of items on the instrument.
The larger the number of items, the greater the chance of high reliability.
> Length of time between test and retest.
The shorter the time between test and retest, the greater the chance of a high reliability correlation coefficient.
The relationship between validity and reliability

> A test cannot be considered valid unless the measurements resulting from it are reliable.

