Centre for Information Security and Cryptography
An important part of information security is testing all systems that handle sensitive information for potential security problems. A particular challenge for testing methods is so-called emergent behaviour between systems: behaviour that the developers of these systems did not intend (and usually did not even imagine), but that can occur under circumstances unknown to developers and testers.
Our research in this area uses concepts from machine learning and multi-agent systems to search for unwanted emergent behaviour that poses a security problem for one or several systems. Starting from a high-level description of an unwanted behaviour (such as "a user with these access rights should not be able to access this particular information"), a learning system interacts with the system under test, posing as users and/or other systems, analyses the behaviour of the tested system, and determines how close that behaviour is to the unwanted behaviour. It then creates new interaction strategies that have a chance of bringing the tested system nearer to the unwanted behaviour. This process is repeated until the unwanted behaviour is revealed.
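The search loop described above can be sketched in miniature. The toy system, its actions, and the closeness heuristic below are all illustrative assumptions invented for this sketch, not an actual system or scoring function from our research: a hypothetical file service in which renaming a shared file accidentally drops its access-control list, so the interaction sequence share → rename → read leaks restricted data. A simple mutation-based hill climb stands in for the learning component, rewarding strategies that satisfy more of the predicates on the path to the unwanted behaviour.

```python
import random

# Hypothetical system under test (an illustrative assumption): an
# unintended interaction between "share" and "rename" lets a
# low-privilege user read restricted data, because renaming a shared
# file drops its access-control list.
class ToySystem:
    def __init__(self):
        self.shared = False
        self.acl_dropped = False
        self.leaked = False

    def step(self, action):
        if action == "share":
            self.shared = True
        elif action == "rename" and self.shared:
            self.acl_dropped = True   # emergent flaw: ACL lost on rename
        elif action == "read" and self.acl_dropped:
            self.leaked = True        # the unwanted behaviour

ACTIONS = ["share", "rename", "read", "noop"]

def run(strategy):
    """Replay an interaction strategy (a list of actions) against a
    fresh instance of the system and return its final state."""
    system = ToySystem()
    for action in strategy:
        system.step(action)
    return system

def closeness(system):
    """Heuristic distance to the unwanted behaviour: how many of the
    predicates on the path to the leak already hold (higher = closer)."""
    return int(system.shared) + int(system.acl_dropped) + int(system.leaked)

def search(length=6, iters=5000, seeds=range(10)):
    """Mutation-based hill climb: repeatedly tweak one step of the best
    interaction strategy so far, keeping candidates that get at least as
    close to the unwanted behaviour, until the leak is revealed."""
    for seed in seeds:
        rng = random.Random(seed)
        best = [rng.choice(ACTIONS) for _ in range(length)]
        best_score = closeness(run(best))
        for _ in range(iters):
            candidate = list(best)
            candidate[rng.randrange(length)] = rng.choice(ACTIONS)
            score = closeness(run(candidate))
            if score >= best_score:   # accept ties to cross plateaus
                best, best_score = candidate, score
            if run(best).leaked:
                return best           # unwanted behaviour revealed
    return None

if __name__ == "__main__":
    trace = search()
    print(trace, run(trace).leaked if trace else None)
```

In a real setting the closeness function would be derived from the high-level description of the unwanted behaviour, and the strategy generator would be a learning agent rather than random mutation; the loop structure, however, is the same.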