The purpose of this study is to investigate the effect of individualized feedback on the severity, consistency, and bias of raters in a Korean writing assessment. Twenty scripts were rated independently by 22 Korean language teachers. The many-facet Rasch model was used to generate individualized feedback reports on each rater’s relative severity, overall consistency, and significant bias patterns with respect to particular categories of the rating scale. The reports were presented and explained to each rater in a feedback session, after which the raters rated a further set of 20 scripts. A comparison of the ratings before and after feedback, again using the many-facet Rasch model, showed that the individualized feedback had positive effects on rating behavior. The range of rater severity narrowed after the feedback session, indicating a higher level of agreement among raters than before the feedback. The feedback also influenced rater consistency: raters classified as overfitting or misfitting became more consistent after receiving the feedback. In addition, all significant rater bias in relation to particular rating-scale categories was eliminated. (Korea University)
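For reference, a standard formulation of the many-facet Rasch model in Linacre’s framework is sketched below; the particular facet structure (examinee, rating criterion, rater) is an assumption about how analyses of this kind are typically specified, not a detail reported in the abstract itself:

\[
\log\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = B_n - D_i - C_j - F_k
\]

where \(P_{nijk}\) is the probability that examinee \(n\) receives score \(k\) rather than \(k-1\) from rater \(j\) on criterion \(i\), \(B_n\) is the examinee’s ability, \(D_i\) the criterion’s difficulty, \(C_j\) the rater’s severity, and \(F_k\) the threshold of category \(k\). The severity estimates \(C_j\), their associated infit/outfit statistics, and rater-by-category bias (interaction) terms are the quantities from which individualized feedback reports of this kind are typically constructed.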