PRISM – Spatial Reasoning with Preferred Mental Models



Preferred Inferences in Reasoning with Spatial Mental Models (PRISM) is an extension of Spatial Reasoning with Mental Models (SRM) by Marco Ragni and Markus Knauff. The program can be downloaded at




The new website of the program is currently under construction. The URL is


The website offers a large database of geometrical problems similar to those used in intelligence tests. Users of the website can train their problem-solving strategies by performing different types of tasks and tests. All tasks are generated from a database of problem principles, which were collected through an analysis of different tests in the literature. All tasks are characterized by their problem structure and stimuli, which allows users to train specific types of tasks individually.

The website was realized with the support of a Karl Steinbuch scholarship.

The URL of the Website is:

Solver for IQ Tasks


The program allows the user to design arbitrary geometrical tasks like those used in the Raven or CFT3 intelligence tests. The program solves the tasks and evaluates their complexity for human subjects. The complexity measure is based on an analysis of the empirical data from the CFT3.

To run the program, Python 2.7 and PyGTK are required.
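The core idea described above can be sketched as follows. This is a minimal illustration only, not the actual solver: the candidate transformations, their complexity weights, and the numeric task encoding are all assumptions made for the example.

```python
# Illustrative sketch of a matrix-task solver that infers a transformation
# rule per row and scores complexity by the weights of the rules needed.
# Rule names, weights, and the task encoding are hypothetical.

# Candidate transformations with hypothetical complexity weights.
CANDIDATES = {
    "identity": (lambda x: x,     1),
    "add_one":  (lambda x: x + 1, 2),
    "double":   (lambda x: x * 2, 3),
}

def infer_rule(row):
    """Find a transformation mapping each cell to its right neighbour."""
    for name, (fn, weight) in CANDIDATES.items():
        if all(fn(a) == b for a, b in zip(row, row[1:])):
            return name, fn, weight
    return None

def solve(matrix):
    """Solve a matrix task whose last row is missing its final cell.

    Returns the predicted missing cell and a complexity score:
    the sum of the weights of the rules inferred from the complete rows.
    """
    complexity = 0
    rule = None
    for row in matrix[:-1]:          # complete rows: learn the rule
        name, fn, weight = infer_rule(row)
        complexity += weight
        rule = fn
    answer = rule(matrix[-1][-1])    # incomplete row: apply the rule
    return answer, complexity

task = [[1, 2, 3],
        [4, 5, 6],
        [7, 8]]                      # last cell missing
answer, complexity = solve(task)     # answer == 9 under the "add_one" rule
```

A real measure would of course use richer geometrical transformations (rotation, reflection, superposition) and weights calibrated against empirical data, but the structure is the same: the harder-to-find and more numerous the functions, the higher the predicted difficulty.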


Complexity of IQ Tests

Positioning. The article, published at the European Conference on Cognitive Science, characterizes different types of problems in Cattell’s Culture Fair Test and Evans’ Analogy Problems.

Research Question. Can we classify the problems in Cattell’s Culture Fair Test, Evans’ Analogy Problems, and Raven’s Matrices Tests? How well can the functional complexity measure explain the results?

Method. Formal analysis; cognitive complexity measure; empirical analysis

Results. All problem instances can be classified into three general classes of problems. A functional complexity measure, applied to Cattell’s Culture Fair Test and Evans’ Analogy Problems, produces satisfactory results for the latter (correctness: r = −.62, p < .01; solution time: r = −.67, p < .0001).

Ragni, M., Stahl, P., & Fangmeier, T. (2011). Cognitive Complexity in Matrix Reasoning Tasks. In B. Kokinov, A. Karmiloff-Smith, & N. J. Nersessian (Eds.), Proceedings of the European Conference on Cognitive Science. Sofia: NBU Press.

Complexity Analogical Reasoning

Positioning. The article, published at the European Conference on Artificial Intelligence (ECAI) in 2010, aims to capture human reasoning difficulty in Raven’s reasoning problems.

Research Question. Is it possible to introduce a complexity measure for Cattell’s Culture Fair Test problems with respect to the kind of underlying functions necessary to solve such tasks?

Method. Formal analysis and cognitive modeling.

Results. The investigations led to the implementation of a Python program that solves matrix tasks and evaluates their complexity with a newly developed complexity measure. The model’s predictions are compared with human performance on solving the items from Cattell’s Culture Fair Test and yield a correlation of r = .72, p < .05 without weights.
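Comparing model predictions with human performance, as reported above, amounts to computing a Pearson correlation between per-item complexity scores and per-item human results. The following sketch shows that computation; the data values are invented placeholders, not the CFT3 results from the paper.

```python
# Illustrative only: correlating per-item complexity predictions with
# human solution rates. The numbers below are hypothetical.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / float(n), sum(ys) / float(n)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical model complexity vs. fraction of subjects solving each item:
complexity = [2, 4, 5, 7, 9]
solved_pct = [0.95, 0.80, 0.75, 0.55, 0.40]
r = pearson(complexity, solved_pct)
# r is negative here: the higher the predicted complexity,
# the fewer subjects solve the item.
```

The sign convention depends on the human measure used: correlating complexity against error rate or solution time gives a positive r, against solution rate a negative one.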

Stahl, P., & Ragni, M. (2010). Complexity in Analogy Tasks: An Analysis and Computational Model. In H. Coelho, R. Studer, & M. Wooldridge (Eds.), Proceedings of the 19th European Conference on Artificial Intelligence (Vol. 215, pp. 1041–1042). Amsterdam: IOS Press.