Exp 14: Structural Analysis of fluid IQ-test problems


The experiment, written in Flash, examines different problem structures for geometric problems similar to those used in Raven’s Advanced Matrices or the Culture Fair Test. The problems are systematically varied with respect to the combination of problem functions.

Link: http://imodspace.iig.uni-freiburg.de/misc/iqcoach_typetest/iqcoach.html



PRISM – Spatial Reasoning with Preferred Mental Models



Preferred Inferences in Reasoning with Spatial Mental Models (PRISM) is an extension of Spatial Reasoning with Mental Models (SRM) by Marco Ragni and Markus Knauff. The program can be downloaded at





The new website of the program is currently under construction. The URL is



Solver IQ Tasks


The program allows the user to design arbitrary geometric tasks like those used in the Raven or CFT3 intelligence tests. The program solves the tasks and evaluates their complexity for human subjects. The complexity measure is based on an analysis of the empirical data from the CFT3.

Running the program requires Python 2.7 and PyGTK.
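As a rough illustration of how such a function-based complexity evaluation could work, consider the following sketch (the function names and weights are hypothetical, not those used by the program):

```python
# Hypothetical sketch of a function-based complexity measure for
# geometric matrix tasks. The function names and weights below are
# illustrative assumptions, not the measure implemented in the solver.

FUNCTION_WEIGHTS = {
    "identity": 1,     # an element repeats unchanged
    "rotation": 2,     # an element is rotated between cells
    "reflection": 2,   # an element is mirrored
    "composition": 3,  # two functions are combined in one rule
}

def task_complexity(rules):
    """Sum the weights of all functions needed to solve a task.

    `rules` is a list of function names, one per transformation
    that maps a matrix cell to its successor.
    """
    return sum(FUNCTION_WEIGHTS[r] for r in rules)

# A task combining a rotation with a reflection is rated harder
# than one built from repetitions alone.
easy = task_complexity(["identity", "identity"])    # 2
hard = task_complexity(["rotation", "reflection"])  # 4
print(easy, hard)
```

The design choice is simply that each kind of transformation contributes a fixed difficulty cost; a complexity measure of this shape can then be correlated with empirical error rates and solution times.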


Complexity IQ-tests

Positioning. The article, published at the European Conference on Cognitive Science, characterizes different types of problems in Cattell’s Culture Fair Test and Evans’ Analogy Problems.

Research Question. Can we classify the problems in Cattell’s Culture Fair Test, Evans’ Analogy Problems, and Raven’s Matrices Tests? How well can the functional complexity measure explain the results?

Method. Formal analysis; cognitive complexity measure; empirical analysis

Results. All problem instances can be classified into three general classes of problems. A functional complexity measure, applied to Cattell’s Culture Fair Test and Evans’ Analogy Problems, yields satisfactory results for the latter (correctness: r = −.62, p < .01; solution time: r = −.67, p < .0001).

Ragni, M., Stahl, P., & Fangmeier, T. (2011). Cognitive Complexity in Matrix Reasoning Tasks. In B. Kokinov, A. Karmiloff-Smith, & N. J. Nersessian (Eds.), Proceedings of the European Conference on Cognitive Science. Sofia: NBU Press.

Complexity Analogical Reasoning

Positioning. The article, published at the European Conference on Artificial Intelligence (ECAI) in 2010, aims to capture human reasoning difficulty in Raven’s reasoning problems.

Research Question. Is it possible to introduce a complexity measure for Cattell’s Culture Fair Test problems with respect to the kind of underlying functions necessary to solve such tasks?

Method. Formal analysis and cognitive modeling.

Results. The investigations led to the implementation of a Python program that solves matrix tasks and evaluates their complexity with a newly developed complexity measure. The model’s predictions are compared with human performance on the items from Cattell’s Culture Fair Test, yielding a correlation of r = .72 (p < .05) without weights.

Stahl, P., & Ragni, M. (2010). Complexity in Analogy Tasks: An Analysis and Computational Model. In H. Coelho, R. Studer, & M. Wooldridge (Eds.), Proceedings of the 19th European Conference on Artificial Intelligence (Vol. 215, pp. 1041–1042). Amsterdam: IOS Press.

Review of Complex Cognition

Positioning. The article, published in the Journal Cognitive Systems Research, presents an overview and characterization of complex cognition.

Content. Complex cognition addresses research on: “(a) high-level cognitive processes – mainly problem solving, reasoning, and decision making – and their interaction with more basic processes such as perception, learning, motivation and emotion and (b) cognitive processes which take place in a complex, typically dynamic, environment.” The article presents an overview of past and current research and refines the definition of complex cognition from both an AI and a psychological perspective. Great emphasis is placed on the challenges for cognitive systems. Complex cognition goes far beyond simple cognitive processes and requires the full range of methods from cognitive science research: analytical, empirical, and engineering. The article closes with challenges for complex problem solving, dynamic decision making, and the learning of concepts, skills, and strategies.

Schmid, U., Ragni, M., Gonzalez, C., & Funke, J. (2011). The challenge of complexity for cognitive systems. Cognitive Systems Research, 12(3-4), 211-218.

Solving a Complex Cognition Task

Positioning. The article, published in the Journal Cognitive Systems Research, investigates an important complex cognition task: the dynamic stocks and flows task.

Method. Formal analysis; artificial neural networks; cognitive and computational modeling.

Results. The dynamic stock and flow task – an exploration problem – is first formally generalized to a general stock-and-flow task with an arbitrary number of tanks that have to be controlled. Since the system requires learning and high adaptability, the article proposes a computational approach based on artificial neural networks combined with heuristics. The system is evaluated on all problems from the DSF challenge and yields satisfactory results.
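To illustrate the control problem itself, the following sketch shows a minimal proportional heuristic for a single tank. This is a toy stand-in, not the neural-network system described in the article; all names and the one-step correction rule are illustrative:

```python
# Minimal sketch of a stock-and-flow control loop for one tank.
# A simple heuristic controller (illustrative, not the system from
# the article): compensate the environmental net flow, then correct
# the remaining deviation from the target level in one step.

def control_tank(level, target, env_inflow, env_outflow):
    """Choose a user flow that steers the stock toward the target."""
    env_net = env_inflow - env_outflow
    correction = target - level
    return correction - env_net  # positive: add, negative: drain

# Simulate a few steps with a constant environmental disturbance.
level, target = 2.0, 4.0
for _ in range(3):
    user_flow = control_tank(level, target,
                             env_inflow=1.0, env_outflow=0.5)
    level += user_flow + (1.0 - 0.5)  # user flow plus environment
print(level)  # the stock settles on the target: 4.0
```

The generalized task in the article couples several such tanks, which is what makes a learning component necessary in place of a fixed heuristic.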

Ragni, M., Steffenhagen, F., & Klein, A. (2011). Generalized dynamic stock and flow systems: An AI approach. Cognitive Systems Research, 12(3-4), 309-320.

Cognitive Complexity Measure Planning

Positioning. The article, published at the 33rd Annual Conference of the Cognitive Science Society (2011), investigates whether it is possible to find a complexity measure reflecting the human reasoning difficulty in solving planning problems such as Rush Hour.

Method. Formal analysis; psychological experiment

Results. A measure of cognitive complexity must take operational aspects of human information processing into account. The article proposes a structural complexity measure based on the number and connectedness of the subgoals necessary to solve a problem. It discusses several measures for assessing the goodness of a solution. The best-fitting formula correlates at about r = .77 (p < .0001).
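The idea of a subgoal-based structural measure can be sketched as follows. The counting scheme is illustrative, not the fitted formula from the article:

```python
# Illustrative sketch of a structural complexity measure for
# planning problems: complexity grows with the number of subgoals
# and how strongly they depend on each other. The equal weighting
# of subgoals and edges is a hypothetical choice.

def structural_complexity(subgoal_deps):
    """`subgoal_deps` maps each subgoal to the subgoals it depends on.

    Counts subgoals plus the dependency edges between them.
    """
    n_subgoals = len(subgoal_deps)
    n_edges = sum(len(deps) for deps in subgoal_deps.values())
    return n_subgoals + n_edges

# A Rush-Hour-like instance: freeing the target car requires moving
# two blockers, one of which is itself blocked by a third car.
deps = {
    "free_target": ["move_blocker_a", "move_blocker_b"],
    "move_blocker_a": ["move_blocker_c"],
    "move_blocker_b": [],
    "move_blocker_c": [],
}
print(structural_complexity(deps))  # 4 subgoals + 3 edges = 7
```

A chain of mutually blocking cars thus scores higher than the same number of independent moves, which matches the intuition that connected subgoals tax working memory more.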

Ragni, M., Steffenhagen, F., & Fangmeier, T. (2011). A Structural Complexity Measure for Predicting Human Planning Performance. In L. Carlson, C. Hoelscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Conference of the Cognitive Science Society (pp. 2353–2358). Austin, TX: Cognitive Science Society.

Dependency Calculus

Positioning. The article, published at IJCAI 2005 and an extended version at the German Conference on Artificial Intelligence in 2005, proposes a calculus for causal reasoning.

Research Question. Can we develop a relational calculus for causal reasoning? What is the computational complexity of the satisfiability problem, and what are tractable subclasses?

Method. Knowledge representation and reasoning; complexity analysis

Results. Mapping the calculus onto RCC-5 allows complexity results and tractable subclasses to be transferred.
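The constraint-based machinery behind such calculi can be illustrated with the simplest point algebra over the base relations <, =, >. This is a toy example; the dependency calculus itself is richer:

```python
# Toy sketch of relational composition in a point relation algebra.
# COMP[r][s] is the set of base relations t such that x r y and
# y s z admit x t z. Uncertainty is a set of base relations.

COMP = {
    "<": {"<": {"<"}, "=": {"<"}, ">": {"<", "=", ">"}},
    "=": {"<": {"<"}, "=": {"="}, ">": {">"}},
    ">": {"<": {"<", "=", ">"}, "=": {">"}, ">": {">"}},
}

def compose(rs, ss):
    """Compose two disjunctive relations (sets of base relations)."""
    out = set()
    for r in rs:
        for s in ss:
            out |= COMP[r][s]
    return out

# From x < y and y = z we can conclude x < z ...
print(compose({"<"}, {"="}))  # {'<'}
# ... while x < y and y > z leaves all three relations possible.
print(compose({"<"}, {">"}))  # {'<', '=', '>'}
```

Composition tables of this kind are the basis for path-consistency algorithms, and mapping one calculus onto another (as with RCC-5 here) transfers the complexity results for exactly this kind of constraint propagation.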

Ragni, M., & Scivos, A. (2005a). The dependency calculus: Reasoning in a General Point Relation Algebra. In U. Furbach (Ed.), Advances in Artificial Intelligence: Proceedings of the 28th Annual German Conference on AI (pp. 49–63). Berlin: Springer.

Qualitative Reasoning and Robotics

Positioning. The article, published in the Journal of Visual Languages and Computing, investigates a calculus with relative orientation, i.e., egocentric views in contrast to orientations that refer to a global reference system.

Method. Formal analysis; application to robotics.

Results. Reasoning about relative orientations poses additional difficulties compared to reasoning about orientations in an absolute reference frame. A complexity analysis of the ternary calculus reveals that it is in PSPACE; it utilizes finer distinctions than previously published calculi. Additionally, it permits differentiations which are useful in realistic application scenarios such as robot navigation that cannot be directly dealt with in coarser calculi.


Moratz, R., & Ragni, M. (2008). Qualitative spatial reasoning about relative point position. Journal of Visual Languages & Computing, 19(1), 75–98.