The ability to ascribe mental states, such as beliefs or desires, to oneself and to other individuals forms an integral part of everyday social interaction. Animations tasks, in which observers watch videos of interacting triangles, have been used extensively to test mental state attribution in a variety of clinical populations. Compared to control participants, individuals with clinical conditions such as autism typically offer less appropriate mental state descriptions of such videos. Recent research suggests that stimulus kinematics and movement similarity between the video and the observer may contribute to mental state attribution difficulties. Here we present a novel adaptation of the animations task, suitable for tracking and comparing the kinematics of animation generators and observers. Using this task and a population-derived stimulus database, we confirmed the hypotheses that an animation’s jerk (the rate of change of acceleration) and the jerk similarity between observer and animator significantly contribute to the correct identification of an animation. By employing random forest analysis to explore other stimulus characteristics, we reveal that further indices of movement similarity, including acceleration- and rotation-based similarity, also predict performance. Our results highlight the importance of movement similarity between observer and animator and raise new questions about why some clinical populations exhibit difficulties with this task.