Computational theories of reinforcement learning suggest that two families of algorithms, model-based and model-free, map tightly onto the classic distinction between deliberate and automatic systems of control: deliberate evaluative responses are thought to reflect model-based algorithms, which are accurate but computationally expensive, whereas automatic evaluative responses are thought to reflect model-free algorithms, which are error-prone but computationally cheap. This framework has animated research on psychological phenomena ranging from habit formation to social learning, moral decision-making, and cognitive development. Here, we propose that model-based and model-free algorithms may not align with deliberate and automatic evaluative processing as closely as prevailing theories suggest. Across three preregistered behavioral experiments with adult human participants (total n = 2,572), we show that model-based algorithms shape not only deliberate but also automatic evaluations. Experiment 1 numerically replicates past findings suggesting that deliberate (but not automatic) evaluative responses are uniquely shaped by model-based algorithms but, critically, also reveals confounds that render interpretation of this evidence equivocal. Experiments 2 and 3 eliminate these confounds and reveal robust model-based contributions to automatic evaluative processing across two measures of automatic evaluation, supported by multinomial processing tree modeling. Together, these results suggest that dominant frameworks may considerably underestimate both the ubiquity of model-based algorithms and the computational sophistication of automatic evaluative processing.