Exp 14: Structural Analysis of fluid IQ-test problems


The experiment, implemented in Flash, examines different kinds of problem structures for geometric problems similar to those used in Raven’s Advanced Progressive Matrices or the Culture Fair Test. The problems have been systematically varied with respect to the combination of problem functions.
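
As a purely illustrative sketch of what "systematically varying the combination of problem functions" can mean, the following Python snippet enumerates combinations of a small, assumed set of elementary functions; the function names are hypothetical and not taken from the experiment's actual inventory.

    # Hypothetical sketch: enumerate combinations of elementary problem
    # functions, as one might do when systematically varying task structure.
    from itertools import combinations

    # Assumed, illustrative inventory; the experiment's actual functions may differ.
    FUNCTIONS = ["rotation", "reflection", "size_change", "shading"]

    def function_combinations(max_size=2):
        """Yield all combinations of up to max_size elementary functions."""
        for k in range(1, max_size + 1):
            for combo in combinations(FUNCTIONS, k):
                yield combo

    for combo in function_combinations():
        print(" + ".join(combo))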

Link: http://imodspace.iig.uni-freiburg.de/misc/iqcoach_typetest/iqcoach.html

Logfiles: 

Tasks: 

IQCoach.de


The website IQCoach.de offers a large database of geometrical problems similar to those used in intelligence tests. Users of the website can train their problem-solving strategies by working through different types of tasks and tests. All tasks are generated from a database of problem principles, which were collected through an analysis of different tests in the literature. All tasks are characterized by their problem structure and stimuli, which allows users to train specific types of tasks individually.
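
The following Python sketch illustrates, under assumptions, how tasks could be represented and filtered by problem structure; all names (ProblemPrinciple, Task, tasks_of_type) are hypothetical and do not come from the actual IQCoach.de implementation.

    # Hypothetical data model: tasks characterized by problem structure and
    # stimuli, so that specific task types can be selected for training.
    from dataclasses import dataclass, field

    @dataclass
    class ProblemPrinciple:
        name: str       # e.g. "rotation", "progression" (assumed labels)
        structure: str  # e.g. "3x3 matrix", "analogy A:B::C:?"

    @dataclass
    class Task:
        principle: ProblemPrinciple
        stimuli: list = field(default_factory=list)  # geometric elements

    def tasks_of_type(tasks, structure):
        """Select all tasks sharing a given problem structure."""
        return [t for t in tasks if t.principle.structure == structure]

    # Example: pick only 3x3 matrix tasks for a training session.
    db = [Task(ProblemPrinciple("rotation", "3x3 matrix")),
          Task(ProblemPrinciple("progression", "analogy A:B::C:?"))]
    print(len(tasks_of_type(db, "3x3 matrix")))  # -> 1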

The website was realized with the support of a Karl Steinbuch scholarship.

The URL of the Website is: https://www.iqcoach.de

Solver for IQ Tasks


The program allows the user to design arbitrary geometrical tasks like those used in the Raven or CFT3 intelligence tests. The tasks are solved by the program, and their complexity for human subjects is evaluated. The complexity measure is based on an analysis of the empirical data from the CFT3.

Running the program requires Python 2.7 and PyGTK.
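
A minimal sketch of such a solve-and-evaluate pipeline is shown below (written in Python 3 for brevity and without the GUI); the functions and the complexity formula are assumptions for illustration, not the program's actual Python 2.7/PyGTK code or its CFT3-fitted measure.

    # Illustrative pipeline only (Python 3, no GUI): 'solve' a task and score
    # its complexity; the real program's solver and CFT3-based measure differ.
    def solve(task):
        """Placeholder solver: return the transformations a task requires."""
        # A real solver would induce the rule relating the matrix cells.
        return task.get("transformations", [])

    def complexity(task, weights=None):
        """Hypothetical complexity score: (weighted) count of transformations."""
        weights = weights or {}
        return sum(weights.get(t, 1.0) for t in solve(task))

    task = {"transformations": ["rotation", "size_change"]}
    print(complexity(task))                     # unweighted: 2.0
    print(complexity(task, {"rotation": 1.5}))  # weighted:   2.5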

 

Raven Model

Human reasoners have an impressive ability to solve analogical reasoning problems, and they still outperform computational systems. Analogical reasoning is relevant to dealing with intelligence tests. There are two kinds of approaches: to solve IQ-test problems in a way similar to humans (i.e., the cognitive approach) or to solve these problems optimally (i.e., the AI approach). Most systems can be associated with one of these approaches. Detailed systems that solve geometrical intelligence tests, explain cognitive operations based on working memory, and produce precise predictions such as error rates and response times have not been developed so far. We present a system, implemented in the cognitive architecture ACT-R, that is able to solve problems developed analogously to Raven’s Standard and Advanced Progressive Matrices. The model solves 66 of the 72 tested problems of both tests. The model’s predicted error rates correlate with human performance at r = .8 for the Advanced Progressive Matrices and r = .7 for all problems together.
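
As a rough, hypothetical illustration of the kind of rule induction such a system performs (not the ACT-R model itself), the following Python sketch induces a row-wise rule on a simplified symbolic 3×3 matrix and applies it to the missing cell; the cell representation and the single "count" attribute are assumptions.

    # Simplified, hypothetical rule induction on a symbolic 3x3 matrix
    # (one attribute per cell); not the ACT-R model itself.
    matrix = [
        [{"count": 1}, {"count": 2}, {"count": 3}],
        [{"count": 2}, {"count": 3}, {"count": 4}],
        [{"count": 3}, {"count": 4}, None],  # missing cell
    ]

    def induce_row_rule(row):
        """Return the increment of 'count' between neighbouring cells."""
        return row[1]["count"] - row[0]["count"]

    # Induce the rule on the complete rows and check that it is consistent.
    deltas = {induce_row_rule(row) for row in matrix[:2]}
    assert len(deltas) == 1
    delta = deltas.pop()

    # Apply the rule to predict the missing cell of the last row.
    predicted = {"count": matrix[2][1]["count"] + delta}
    print(predicted)  # -> {'count': 5}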

Complexity IQ-tests

Positioning. The article, published at the European Conference on Cognitive Science, characterizes different types of problems in Cattell’s Culture Fair Test and Evans’ Analogy Problems.

Research Question. Can we classify the problems in Cattell’s Culture Fair Test, Evans’ Analogy Problems, and Raven’s Matrices Tests? How well can the functional complexity measure explain the results?

Method. Formal analysis; cognitive complexity measure; empirical analysis

Results. All problem instances can be classified into three general classes of problems. A functional complexity measure, applied to Cattell’s Culture Fair Test and Evans’ Analogy Problems, produces satisfactory results for the latter (correctness: r = −.62, p < .01; solution time: r = −.67, p < .0001).
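
The snippet below only shows how such correlations are typically computed with SciPy; the complexity values and human data are made-up placeholders, not the study's data.

    # How such correlations are typically computed; the numbers are placeholders.
    from scipy.stats import pearsonr

    complexity    = [1, 2, 2, 3, 4, 5]                    # hypothetical item complexity
    correctness   = [0.90, 0.80, 0.85, 0.70, 0.60, 0.50]  # proportion correct
    solution_time = [4.1, 5.3, 5.0, 6.8, 8.2, 9.5]        # seconds

    r_corr, p_corr = pearsonr(complexity, correctness)    # expected to be negative
    r_time, p_time = pearsonr(complexity, solution_time)  # expected to be positive
    print(r_corr, p_corr)
    print(r_time, p_time)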

Ragni, M., Stahl, P., & Fangmeier, T. (2011). Cognitive Complexity in Matrix Reasoning Tasks. In B. Kokinov, A. Karmiloff-Smith, & N. J. Nersessian (Eds.), Proceedings of the European Conference on Cognitive Science. Sofia: NBU Press.

ACT-R model of IQ-tests

Positioning. The article, published at the European Conference on Artificial Intelligence 2012, presents an ACT-R model that is able to solve problems of Raven’s Progressive Matrices tests.

Research Question. Is it possible to explain the limitations in human reasoning in IQ-tests based on working memory limitations?

Method. Cognitive modeling; formal analysis

Results. The cognitive model in ACT-R is able to solve problems developed analogously to Raven’s Standard and Advanced Progressive Matrices. The model solves 66 of the 72 tested problems of both tests. The model’s predicted error rates correlate with human performance at r = .8 for the Advanced Progressive Matrices and r = .7 for all problems together.
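
A toy sketch of how a working-memory limit can produce errors is given below: a solver that can hold only a bounded number of sub-rules fails on problems requiring more. This is a schematic illustration, not the ACT-R mechanism; the capacity value and rule names are assumptions.

    # Toy sketch: a bounded 'working memory' for sub-rules makes problems fail
    # when they require more rules than the capacity allows.
    WM_CAPACITY = 3  # assumed capacity, purely illustrative

    def try_solve(required_rules, capacity=WM_CAPACITY):
        """Succeed only if all required sub-rules fit into working memory."""
        return len(required_rules) <= capacity

    problems = {
        "easy":   ["progression"],
        "medium": ["progression", "rotation"],
        "hard":   ["progression", "rotation", "addition", "distribution"],
    }
    for name, rules in problems.items():
        print(name, "solved" if try_solve(rules) else "failed")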

Ragni, M., & Neubert, S. (2012). Solving Raven’s IQ-tests: An AI and Cognitive Modeling Approach. In L. De Raedt et al. (Eds.), Proceedings of the 20th European Conference on Artificial Intelligence (Vol. 242, pp. 666–671). Amsterdam: IOS Press.

Complexity Analogical Reasoning

Positioning. The article, published at the European Conference on Artificial Intelligence (ECAI) in 2010, aims to capture human reasoning difficulty in Raven’s reasoning problems.

Research Question. Is it possible to introduce a complexity measure for Cattell’s Culture Fair Test problems with respect to the kind of underlying functions necessary to solve such tasks?

Method. Formal analysis and cognitive modeling.

Results. The investigations led to the implementation of a Python program that is able to solve matrix tasks and to evaluate their complexity with a newly developed complexity measure. The predictions of the model are compared with human performance in solving the items from Cattell’s Culture Fair Test and yield a correlation of r = .72, p < .05 without weights.

Stahl, P., & Ragni, M. (2010). Complexity in Analogy Tasks: An Analysis and Computational Model. In H. Coelho, R. Studer, & M. Wooldridge (Eds.), Proceedings of the 19th European Conference on Artificial Intelligence (Vol. 215, pp. 1041–1042). Amsterdam: IOS Press.

Dynamic Learning Approach for Solving Number Series

Positioning. The article, published as a book chapter in the Proceedings of the German Conference on Artificial Intelligence 2011, proposes to use Artificial Neural Networks to solve number series problems.

Research Question. Is it possible to develop a cognitive system able to solve number series problems from intelligence tests or the 50 000 problems in the On-Line Encyclopedia of Integer Sequences?

Method. Dynamic training method for Artificial Neural Network

Results. Using a dynamic learning approach, the system can solve 26 951 of 57 524 number series.
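
The following sketch conveys the general idea rather than the authors' exact network or training method: a small feed-forward regressor (here scikit-learn's MLPRegressor, used for brevity) is trained on sliding windows of a toy series and asked to predict the next element.

    # General idea only: train a small feed-forward regressor on sliding
    # windows of one series and predict its next element.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    series = [2, 4, 6, 8, 10, 12, 14, 16]  # toy arithmetic series
    window = 3                             # number of input nodes

    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = np.array(series[window:])

    net = MLPRegressor(hidden_layer_sizes=(5,), learning_rate_init=0.01,
                       max_iter=5000, random_state=0)
    net.fit(X, y)

    # Predict the element following the series (ideally close to 18).
    print(net.predict([series[-window:]])[0])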

Ragni, M., & Klein, A. (2011). Predicting numbers: An AI approach to solving number series. In S. Edelkamp & J. Bach (Eds.), KI 2011: Advances in Artificial Intelligence. Heidelberg: Springer LNAI.

Solving Number Series and Artificial Neural Networks

Positioning. The article, published at the International Joint Conference on Computational Intelligence (IJCCI 2011), investigates whether classical intelligence-test problems can be solved by artificial neural networks and which parameters fit best.

Research Question. What architectural and formal properties influence the success of artificial neural networks in solving number series?

Method. Formal analysis of artificial neural networks

Results. Systematically testing the parameters (number of input nodes, number of hidden nodes, and learning rate) shows that the structure of the Artificial Neural Network can determine the success in solving a number sequence: 2–4 input nodes and about 5–6 hidden nodes provide the best framework for solving number series. Allowing approximations (deviations of ±5) improves performance to about 39 000 solved number series of the Online Encyclopedia.
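
A hypothetical sketch of such a parameter sweep with the ±5 tolerance is given below; the series, the parameter grid, and the "solved" criterion are illustrative assumptions, not the paper's evaluation code.

    # Hypothetical parameter sweep: vary the input window and the number of
    # hidden nodes, and count a series as solved if the prediction for its
    # last element deviates by at most +/-5.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def solved(series, window, hidden, tolerance=5):
        X = np.array([series[i:i + window]
                      for i in range(len(series) - window - 1)])
        y = np.array(series[window:-1])
        net = MLPRegressor(hidden_layer_sizes=(hidden,), max_iter=5000,
                           random_state=0).fit(X, y)
        prediction = net.predict([series[-window - 1:-1]])[0]
        return abs(prediction - series[-1]) <= tolerance

    series = [3, 6, 9, 12, 15, 18, 21, 24, 27, 30, 33]  # toy series
    for window in (2, 3, 4):
        for hidden in (5, 6):
            print(window, hidden, solved(series, window, hidden))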

Ragni, M., & Klein, A. (2011b). Solving Number Series – Architectural Properties of Successful Artificial Neural Networks. In K. Madani, J. Kacprzyk, & J. Filipe (Eds.), NCTA 2011 – Proceedings of the International Conference on Neural Computation Theory and Applications (pp. 224–229). SciTePress.