Sometimes a good offense beats a stout defense, especially when it comes to protecting enterprise assets.
This week the advanced technology developers from the Intelligence Advanced Research Projects Activity (IARPA) office put out a Request for Information on how best to develop better denial and deception technologies, such as honeypots or deception servers, that would bolster cybersecurity.
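At its simplest, a honeypot is a decoy service that does nothing but advertise itself and record who touches it: any connection to a port no legitimate user should visit is itself a signal. The sketch below is a minimal, hypothetical illustration of that idea (the decoy SSH banner, port choice, and log format are invented for the example), not a description of any capability in the RFI; production deception platforms add interaction depth, alerting, and isolation.

```python
import datetime
import socket
import threading

def make_honeypot(host="127.0.0.1", port=0):
    """Bind a listening socket on a decoy port (port=0 picks a free one)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(5)
    return srv

def serve_once(srv, log, banner=b"SSH-2.0-OpenSSH_7.4\r\n"):
    """Accept one connection, log the attempt, and present a fake banner."""
    conn, addr = srv.accept()
    log.append({"time": datetime.datetime.utcnow().isoformat(),
                "peer": addr[0]})
    conn.sendall(banner)  # decoy banner: the real service does not exist
    conn.close()

if __name__ == "__main__":
    log = []
    srv = make_honeypot()
    port = srv.getsockname()[1]
    t = threading.Thread(target=serve_once, args=(srv, log))
    t.start()
    # Simulated attacker probe against the decoy port
    probe = socket.create_connection(("127.0.0.1", port))
    banner = probe.recv(64)
    probe.close()
    t.join()
    srv.close()
    print(log)     # one logged connection attempt with timestamp and peer
```

Because the decoy has no legitimate traffic, every entry in `log` is a high-confidence indicator, which is exactly the early-detection property the RFI asks researchers to measure.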
“Adapting deception to support the engagement of cyber adversaries is a concept that has been gaining momentum, although, the current state of research and practice is still immature: many techniques lack rigorous experimental measures of effectiveness, information is insufficient to determine how defensive deception changes attacker behavior or how deception increases the likeliness of early detection of a cyber attack,” IARPA said in a statement.
IARPA laid out some questions it is looking to the security industry to answer:
What are the existing methods for deception to support cyber defense? Provide specific examples (capability names and references) that implement these methods. What are the limitations of these methods? Are these methods fully automated or do they require human operation?
What is/are the main goal(s) of deception activities for the capabilities provided (e.g., threat intelligence/observation, deterrence, delay, confusion, misinformation, redirection, denial, detection, frustration, etc.)?
What types of deception does the research/capability investigate and employ (e.g., denial through blocking/blacklisting/firewalling, detection, decoys, honeypots/traps/nets, honeytokens/fakes/misrepresentations/forgeries, etc.)?
Where in the cyber kill chain do the research or capabilities focus, and where do they have the greatest impact (reconnaissance, weaponization, delivery, exploitation, installation, command and control, actions on objectives, etc.)?
What are the primary target(s) of interest of relevant research/capabilities (e.g., network, data, user spaces, kernel, mobile/wireless, etc.)? Please describe all that apply.
What methods or research exists for influencing cyber attackers? Do any of them leverage game theory or related concepts?
What metrics and evaluation methods do you employ in your research or for your deception capability? How accurate are these methods? What approaches have been used to validate or assess the accuracy and/or usefulness of these methods? What are their strengths and limitations?
What novel methods could be developed/expanded or adapted to improve or replace existing methods for deception to support cyber defense?
What recent or underappreciated publications and technical developments are of critical relevance to the development, improvement, or evaluation of deception for cyber defense?
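One way game theory enters the picture, as the RFI's question about influencing attackers suggests, is to model deception as a two-player game: the defender chooses whether to deploy decoys, the attacker chooses whether to attack, and each side randomizes so the other gains nothing by switching strategies. The sketch below is a toy illustration with invented payoffs (none of the numbers, strategy names, or the helper come from IARPA); it solves a 2x2 game for its mixed-strategy Nash equilibrium via the standard indifference conditions.

```python
def mixed_equilibrium_2x2(defender, attacker):
    """Mixed-strategy Nash equilibrium of a 2x2 defender/attacker game.

    defender[i][j], attacker[i][j]: payoffs when the defender plays row i
    (0 = deploy decoys, 1 = no decoys) and the attacker plays column j
    (0 = attack, 1 = hold back). Assumes the game has no pure-strategy
    equilibrium, so the interior mixed equilibrium exists.
    Returns (p, q): p = probability the defender deploys decoys,
    q = probability the attacker attacks.
    """
    D, A = defender, attacker
    # Defender randomizes so the attacker is indifferent between columns.
    p = (A[1][1] - A[1][0]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
    # Attacker randomizes so the defender is indifferent between rows.
    q = (D[1][1] - D[0][1]) / (D[0][0] - D[0][1] - D[1][0] + D[1][1])
    return p, q

# Illustrative payoffs: decoys catch an attack (+1 defender, -2 attacker)
# but cost something to run (-1 defender if unused); an undetected attack
# is costly for the defender (-4) and lucrative for the attacker (+3).
defender = [[1, -1], [-4, 0]]
attacker = [[-2, 0], [3, 0]]

p, q = mixed_equilibrium_2x2(defender, attacker)
print(f"defender deploys decoys with p = {p:.2f}")  # 0.60
print(f"attacker attacks with q = {q:.2f}")         # 0.17
```

The interesting output is the attacker's equilibrium rate: merely the credible possibility of decoys drives the attack probability down, which is one formal sense in which deception "influences" adversary behavior, and one reason the RFI asks whether existing methods leverage game theory.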