In Computerized Adaptive Testing (CAT), questions are selected in real time and tailored to the test-taker's latent ability. Although CAT has become popular for many measurement tasks, such as educational testing and patient-reported outcomes, it has been criticized for not allowing examinees to review and revise their answers. Two main concerns about response revision in CAT are a loss of estimation efficiency, due to suboptimal item selection, and compromised test validity, due to examinees potentially adopting deceptive test-taking strategies. In this talk, we introduce a new framework for CAT that allows response revision, and we consider three ability estimation algorithms within this framework. The strong consistency and asymptotic normality of the final ability estimators are established under minimal conditions on the test-taker's revision behavior. Our theoretical results indicate that allowing response revision does not compromise the optimality of the CAT design, provided the responses (including revised ones) follow the assumed model. The three estimation algorithms are also compared under several forms of model misspecification, corresponding to different scenarios for examinees' test-taking behavior.
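To make the setting concrete, the sketch below illustrates a standard CAT loop under a two-parameter logistic (2PL) IRT model: each item is chosen to maximize Fisher information at the current ability estimate, the estimate is updated by maximum likelihood after each response, and a revision is handled by replacing the earlier response and re-estimating ability from the full (revised) response set. This is an illustrative toy implementation under assumed model and item-bank values, not the specific algorithms discussed in the talk; the item parameters, grid-search estimator, and simulated answers are all hypothetical.

```python
import math

def p_correct(theta, a, b):
    # 2PL probability of a correct response
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    # Item Fisher information at ability theta
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def mle_theta(responses, grid=None):
    # Maximum-likelihood ability estimate via grid search;
    # responses is a list of ((a, b), x) with x in {0, 1}
    if grid is None:
        grid = [i / 100.0 for i in range(-400, 401)]
    def loglik(theta):
        ll = 0.0
        for (a, b), x in responses:
            p = p_correct(theta, a, b)
            ll += math.log(p) if x == 1 else math.log(1.0 - p)
        return ll
    return max(grid, key=loglik)

def select_item(theta, items, used):
    # Pick the unused item with maximal Fisher information at theta
    return max((i for i in range(len(items)) if i not in used),
               key=lambda i: fisher_info(theta, *items[i]))

# Toy item bank: (discrimination a, difficulty b) -- hypothetical values
items = [(1.0, -1.5), (1.2, -0.5), (0.9, 0.0), (1.1, 0.8), (1.3, 1.5)]
used, responses = set(), []
theta = 0.0

# Administer three items with simulated fixed answers
for x in [1, 0, 1]:
    i = select_item(theta, items, used)
    used.add(i)
    responses.append((items[i], x))
    theta = mle_theta(responses)

# Revision: the examinee changes the third administered answer from
# correct to incorrect; replace that response and re-estimate ability
responses[2] = (responses[2][0], 0)
theta_revised = mle_theta(responses)
```

Note that in this sketch a revision simply overwrites the stored response; the subtlety the talk addresses is that the already-administered items were selected as optimal for the pre-revision ability estimate, which is the source of the efficiency concern.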