When a person stands in court, the judge has only one chance to determine the verdict. Double jeopardy prevents anyone from being tried twice for the same crime and so, no matter what comes to light later on, there are no second chances. Even if that person is guilty, they have escaped and are free to cause more damage.
“Security courts” used to abide by the same rule, relying on a paradigm that offered a single point in time to get the verdict right. Blocking and prevention technologies and policy-based controls gave security professionals just one opportunity to pass judgement on files, identifying them as either safe or malicious. When threats were less sophisticated and less stealthy than they are today, these defences were mostly adequate. But attacks have evolved, and relying exclusively on point-in-time defences is no longer enough.
Modern attackers have honed their strategies, frequently using tools that have been developed specifically to circumvent the target’s chosen security infrastructure. They go to great lengths to remain undetected, using technologies and methods that result in almost undetectable indicators of compromise. Once advanced malware, zero-day attacks, and advanced persistent threats (APTs) enter a network, most security professionals have no way to monitor these files and take action when the files later exhibit malicious behaviour.
To be effective, our security courts must evolve so that security professionals can continue to gather evidence and retry files after the initial acquittal. This requires a security model that combines a big data architecture with a continuous approach, providing protection and visibility along the full attack continuum – from point of entry, through propagation, to post-infection remediation.
One of the innovations this model enables is retrospection: the ability to continuously monitor files, communications, and process activity against the latest intelligence and advanced algorithms over an extended period of time, not just at an initial point in time. Retrospection also offers significant advantages over event-driven data collection or scheduled scans, because it captures attacks as they happen. In effect, unknown, suspicious, and previously deemed ‘innocent’ files can be tried again.
Here’s how it works:
- After initial detection analysis, file retrospection continues to interrogate files over an extended period of time with the latest detection capabilities and collective threat intelligence, allowing an updated disposition to be rendered and further analysis to be conducted well beyond the point in time when the files were first seen.
- Communication retrospection continuously captures communication to and from an endpoint and the associated application and process that initiated or received the communication for added contextual data.
- Similar to file retrospection, process retrospection continuously captures and analyses system process input-output over an extended period of time.
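The file-retrospection step above can be sketched in a few lines of Python. This is a hypothetical illustration, not any vendor's actual implementation: the `RetrospectionEngine`, its `observe` and `update_intel` methods, and the three-valued disposition are all assumptions made for the sketch. The key idea is that every file ever seen is recorded, and each new intelligence feed retries the whole record set rather than only newly arriving files.

```python
from dataclasses import dataclass, field

@dataclass
class FileRecord:
    """One file seen on the network, with its verdict history."""
    sha256: str
    disposition: str = "unknown"      # "clean" | "unknown" | "malicious"
    history: list = field(default_factory=list)

class RetrospectionEngine:
    """Hypothetical sketch: files are tried at entry, then retried
    whenever the threat-intelligence feed is updated."""

    def __init__(self):
        self.records = {}             # sha256 -> FileRecord
        self.intel = {}               # sha256 -> latest known disposition

    def observe(self, sha256: str) -> str:
        """Initial point-in-time analysis when a file first enters."""
        rec = self.records.setdefault(sha256, FileRecord(sha256))
        rec.disposition = self.intel.get(sha256, "unknown")
        rec.history.append(rec.disposition)
        return rec.disposition

    def update_intel(self, feed: dict) -> list:
        """New intelligence arrives: retry every file seen so far and
        return the hashes whose verdict changed."""
        self.intel.update(feed)
        changed = []
        for rec in self.records.values():
            new = self.intel.get(rec.sha256, rec.disposition)
            if new != rec.disposition:
                rec.history.append(new)
                rec.disposition = new
                changed.append(rec.sha256)
        return changed
```

A file that passed as "unknown" on day one can thus be flipped to "malicious" weeks later, and the `history` list preserves the trail of verdicts for investigation.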
File, communication, and process data are continuously woven together into a lineage of activity, giving unprecedented insight into an attack as it happens. With this information, security professionals can quickly pivot from detection to a full understanding of the scope of the outbreak and take action to head off wider compromise. Protections can be updated automatically so that security professionals can deliver the right verdict up front and prevent similar attacks in the future.
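Weaving the three event streams into a lineage can be illustrated with a small Python sketch. The event schema (`ts`, `kind`, `pid`, `detail`) and the two helper functions are assumptions made for this example; a real product would correlate far richer telemetry. The point is the shape of the operation: group events by the process that produced them, order them in time, and then use a late-arriving "malicious" file verdict to scope which processes were touched.

```python
from collections import defaultdict

def build_lineage(events):
    """Group file, process, and communication events by pid and
    order each group by timestamp, producing an activity lineage.
    events: list of dicts with 'ts', 'kind' ('file'|'process'|'comm'),
    'pid', and a free-form 'detail' field (hypothetical schema)."""
    by_pid = defaultdict(list)
    for ev in events:
        by_pid[ev["pid"]].append(ev)
    return {
        pid: sorted(evs, key=lambda e: e["ts"])
        for pid, evs in by_pid.items()
    }

def scope_of_outbreak(lineage, bad_sha):
    """Return the pids whose lineage touches a file hash that a later
    retrospective verdict has deemed malicious."""
    return [
        pid for pid, evs in lineage.items()
        if any(e["kind"] == "file" and e["detail"] == bad_sha
               for e in evs)
    ]
```

Once a retrospective verdict flags a hash, `scope_of_outbreak` answers the "how far did it spread?" question directly from the lineage, rather than from a fresh scan.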
Despite its long history in the criminal courts, double jeopardy has no place in the security courts. Technologies have advanced to the point where security professionals have numerous opportunities to detect and stop attacks. Retrospection is one of the latest techniques that gives security professionals a second chance to deliver the right verdict at the right time.