The present paper investigated the effects of different facets of KSL writing assessment on assessment outcomes and on the interrater reliability of scores, using generalizability theory and estimation of the alpha coefficient. A series of generalizability (G) studies and decision (D) studies were conducted to examine the multiple sources of error contributing to scores, including the rater facet, the rating-criteria facet, and their possible interactions. In addition, the alpha coefficient was estimated and compared with both the ratio of examinee variance to observed-score variance and the generalizability coefficient. The results showed that the rater facet and the rater-by-examinee interaction, as error sources, accounted for a large share of the variance in observed scores, while interrater reliability varied as a function of the particular combination of raters. However, the magnitude of the alpha coefficient was not proportional to the ratio of examinee variance to observed-score variance, although it agreed with the magnitude of the generalizability coefficient. Finally, the results of the post-rating survey indicated a low level of variability of ratings across rating criteria (a halo effect). These findings have practical implications for raising awareness of interrater reliability and for rater training in the context of KSL writing assessment.
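The agreement between the alpha coefficient and the generalizability coefficient reported above can be illustrated with a minimal sketch of the simplest design, a one-facet person × rater crossed design (the study itself also includes a rating-criteria facet, which this sketch omits). All data here are simulated and the function names are illustrative, not the authors' actual analysis code.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a persons x raters score matrix."""
    k = scores.shape[1]                          # number of raters
    item_vars = scores.var(axis=0, ddof=1)       # variance of each rater's scores
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def g_coefficient(scores):
    """Relative generalizability coefficient for a one-facet p x r crossed design.

    Variance components come from the two-way ANOVA expected mean squares:
    sigma2_pr,e = MS_res and sigma2_p = (MS_p - MS_res) / n_r, giving
    E(rho^2) = sigma2_p / (sigma2_p + sigma2_pr,e / n_r).
    """
    n_p, n_r = scores.shape
    grand = scores.mean()
    person_means = scores.mean(axis=1)
    rater_means = scores.mean(axis=0)
    ms_p = n_r * ((person_means - grand) ** 2).sum() / (n_p - 1)
    resid = scores - person_means[:, None] - rater_means[None, :] + grand
    ms_res = (resid ** 2).sum() / ((n_p - 1) * (n_r - 1))
    var_p = max((ms_p - ms_res) / n_r, 0.0)      # truncate negative estimates at 0
    return var_p / (var_p + ms_res / n_r)

# Simulated example: 30 examinees rated by 4 raters on a common scale.
rng = np.random.default_rng(0)
scores = rng.normal(0, 2, (30, 1)) + rng.normal(0, 1, (30, 4))
print(cronbach_alpha(scores), g_coefficient(scores))
```

For this fully crossed design the two estimates coincide algebraically (both reduce to 1 − MS_res/MS_p), which is consistent with the abstract's finding that alpha agreed in magnitude with the generalizability coefficient even though neither tracked the raw variance ratio.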