The adaptive nature of ACA comes in part from running a regression for each paired-comparison question so that it can try to ask "good" trade-offs of the respondent. Sometimes, though, a respondent will get a dominated question (two good levels versus two bad levels) on the screen.
The two most common causes are that the a priori settings for an attribute are wrong (you had 2-year, 3-year, 4-year warranty set to be Worst to Best, for example), or that the person taking the survey is answering randomly. Because of the regression, if you don't give ACA good answers, it can follow your lead and serve up dominated paired comparisons.
Even with "good" answers, the early utility estimates can be a bit noisy, so we think somewhere around 5% of the time a respondent could see a dominated paired comparison. In these cases, the respondent can easily answer the dominated question (probably using the extreme end of the scale), the regression will update, and it should correct the utility estimates to avoid dominated pairs in the future. As the respondent provides more and more data, the utility estimates should get better and better, and the chance of a dominated pair should go down with each (quality) answer from the respondent.
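To make the idea of a "dominated" question concrete, here is a rough sketch (hypothetical code, not Sawtooth's implementation) of the check: given the current part-worth estimates, a pair is dominated when one concept is at least as good on every attribute and strictly better on at least one. The attribute names, levels, and utility values below are made up for illustration.

```python
def is_dominated(left, right, utilities):
    """left/right: dicts mapping attribute -> level shown for that concept.
    utilities: dict mapping (attribute, level) -> current part-worth estimate.
    A pair is dominated if one concept is at least as good on every
    attribute and strictly better on at least one."""
    diffs = [utilities[(a, left[a])] - utilities[(a, right[a])] for a in left]
    left_dominates = all(d >= 0 for d in diffs) and any(d > 0 for d in diffs)
    right_dominates = all(d <= 0 for d in diffs) and any(d < 0 for d in diffs)
    return left_dominates or right_dominates

# Illustrative part-worths: longer warranty and lower price are better.
utilities = {
    ("warranty", "2-year"): 0.0,
    ("warranty", "4-year"): 1.0,
    ("price", "$300"): 1.0,
    ("price", "$400"): 0.0,
}

# A good trade-off: each concept wins on one attribute.
tradeoff = is_dominated({"warranty": "4-year", "price": "$400"},
                        {"warranty": "2-year", "price": "$300"}, utilities)

# A dominated pair: the left concept is better on both attributes.
dominated = is_dominated({"warranty": "4-year", "price": "$300"},
                         {"warranty": "2-year", "price": "$400"}, utilities)
print(tradeoff, dominated)  # False True
```

The catch the text describes is that this check is only as good as the utilities it runs against: if the a priori ordering is wrong or the answers are random, the estimates can rank a bad level above a good one, and a pair that looks like a trade-off to the algorithm is dominated from the respondent's point of view.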