THE GROWING PROPENSITY of government hackers to reuse code and computers from rival nations is undermining the integrity of hacking investigations and calling into question how online attacks are attributed, according to researchers from Kaspersky Lab.
In a paper set for release today at the Virus Bulletin digital security conference in Madrid, the researchers highlight cases in which they’ve seen hackers acting on behalf of nation-states stealing tools and hijacking infrastructure previously used by hackers of other nation-states. Investigators need to watch out for signs of this or risk tracing attacks to the wrong perpetrators, the researchers said.
Threat researchers have built an industry on identifying and profiling hacking groups in order to understand their methods, anticipate future moves, and develop methods for battling them. They often attribute attacks by “clustering” malicious files, IP addresses, and servers that get reused across hacking operations, knowing that threat actors use the same code and infrastructure repeatedly to save time and effort. So when researchers see the same encryption algorithms and digital certificates reused in various attacks, for example, they tend to assume the attacks were perpetrated by the same group. But that’s not necessarily the case.
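The clustering approach described above can be illustrated with a toy sketch. This is not any vendor's actual pipeline; it simply assumes each attack is represented by a set of indicators (file hashes, IP addresses, certificates) and groups attacks that share any indicator, transitively, using a union-find structure:

```python
# Hypothetical sketch of indicator-based clustering: attacks that share
# any indicator (file hash, IP address, certificate) land in one cluster.
from collections import defaultdict

def cluster_attacks(attacks):
    """Group attack IDs whose indicator sets overlap (transitively).

    attacks: dict mapping attack ID -> set of indicator strings.
    Returns a list of clusters (sets of attack IDs).
    """
    # Union-find over attack IDs.
    parent = {a: a for a in attacks}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Link any two attacks that reuse the same indicator.
    seen = {}  # indicator -> first attack observed using it
    for attack, indicators in attacks.items():
        for ioc in indicators:
            if ioc in seen:
                union(attack, seen[ioc])
            else:
                seen[ioc] = attack

    clusters = defaultdict(set)
    for a in attacks:
        clusters[find(a)].add(a)
    return list(clusters.values())
```

The weakness the researchers describe falls directly out of this model: a stolen indicator (say, a hijacked server's IP) silently merges two unrelated groups into one cluster.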
Getting this information right is particularly important because the investigations of security companies are increasingly playing a role in government attribution of hacking attacks. The attacks last year on the Democratic National Committee, for example, were attributed to hacking groups associated with Russian intelligence based in part on analysis done by the private security firm CrowdStrike, which found that tools and techniques used in the DNC network matched those used in previous attacks attributed to Russian intelligence groups.
Although the Kaspersky researchers believe the DNC attribution is correct, they say researchers need to be more cautious about assuming that when the same tools and techniques are being used, the same actors are using them.
Intelligence agencies and military hackers are uniquely positioned to trick researchers through code and tool reuse because of something they do called fourth-party collection. Fourth-party collection can encompass a number of activities, including hacking the machine of a victim that other hackers have already breached and collecting intelligence about the hackers on that machine by stealing their tools. It can also involve hacking the servers the hackers use to launch their assaults. These machines sometimes store the arsenal of malicious tools and even source code that the attackers use for their attacks. Once the other group’s tools and source code are stolen, it’s easy to go a step further and reuse them.
“Agency A could steal another agency’s source code and leverage it as their own. Clustering and attribution in this case begin to fray,” wrote Juan Andrés Guerrero-Saade, principal security researcher with Kaspersky, and his colleague, Costin Raiu, who leads Kaspersky’s global research and analysis team.
“[O]ur point in the paper was: This is what it would look like [if someone were to do a false-flag operation] … and these are the cases where we’ve seen people trying and failing,” said Guerrero-Saade.
The recent WannaCry ransomware outbreak is an obvious example of malware theft and reuse. Last year, a mysterious group known as the Shadow Brokers stole a cache of hacking tools that belonged to the National Security Agency and posted them online months later. One of the tools — a so-called zero-day exploit, targeting a previously unknown vulnerability — was repurposed by the hackers behind WannaCry to spread their attack. In this case, it was easy to make a connection between the theft of the NSA code and its reuse with WannaCry, because the original theft was well-publicized. But other cases of theft and reuse won’t likely be so obvious, leaving researchers in the dark about who is really conducting an attack.
“[I]f a superpower … were to break fully into, let’s say, the DarkHotel group tomorrow and steal all of their code and have access to all of their [command-and-control infrastructure], we’re not going to find out about that monumental event,” Guerrero-Saade told The Intercept, referring to a hacker group that has conducted a series of sophisticated attacks against guests in luxury hotels. “At that point, they’re in a position to mimic those operations to a T … without anyone knowing.”

Spying on Spies
IT’S NO SECRET that spies spy on spies. Documents provided by NSA whistleblower Edward Snowden describe how the agency and its spying partners routinely inspect the machines of victims they hack to see if other hackers are lurking inside. The NSA has a custom anti-virus-like tool called ReplicantFarm that it deploys on systems it hacks to check for the presence of other known actors. The agency does this to ensure that the tools of other actors won’t interfere with its own operations in the machine, and to study the tools and methods of these actors to inform both offensive operations and its defense of U.S. government networks. The NSA will grab these tools and reverse-engineer them — sometimes copying clever techniques for use in its own operations.
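The scanning behavior the article attributes to ReplicantFarm can be sketched in miniature. This is emphatically not the NSA's tool; it is a hypothetical illustration of the general idea, checking data from a compromised host against byte signatures of other known actors' implants (the patterns below are made up):

```python
# Hypothetical sketch: scan host data for byte signatures of other known
# actors' tools, in the spirit of the "anti-virus-like" check described.
KNOWN_SIGNATURES = {
    b"\xde\xad\xbe\xef": "ActorA implant loader",   # made-up byte pattern
    b"MAGIC_RAT_v2": "ActorB remote-access tool",   # made-up byte pattern
}

def scan_blob(data):
    """Return names of known actors whose tool signatures appear in data."""
    return [name for sig, name in KNOWN_SIGNATURES.items() if sig in data]
```

A match would tell the operator another actor is already on the box, the precondition for the fourth-party collection described earlier.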
Though copying techniques is common for the NSA, two former NSA hackers tell The Intercept they never saw the agency reuse actual code during their time there and say they doubt the agency would conduct a false-flag operation.
“When we catch foreign-actor tools, we’ll steal the techniques themselves,” one of the sources told The Intercept. But “there are a host of issues when you falsely attribute. … You could start a war that way. It’s probably more prevalent in other countries, but in the U.S. … the goal is usually to be not attributed, not falsely attributed.”
He said if any U.S. government entity conducts false-flag operations, it would likely be the CIA.
An ex-CIA official told The Intercept that his former employer does engage in a form of cyber false-flag operations to hide its identity, but not to throw blame on someone else.
Guerrero-Saade believes the NSA and its partner countries in the “Five Eyes” alliance of the U.S., the U.K., Canada, Australia, and New Zealand are likely not the problem when it comes to code and infrastructure reuse for false-flag operations.
“The Five Eyes in general is restrained,” he said. “[But] we all know that the Chinese groups, the presumably Israeli groups, even the [Russian-speaking] groups, are willing to do whatever. The Israeli groups are … ballsy to the extreme.”
False Flags in the Wild
THE KASPERSKY RESEARCHERS have seen several examples in the wild where the infrastructure of one nation-state threat actor has been compromised by another. One example involved a backdoor installed on a staging and relay server (used for transmitting stolen data) belonging to a nation-state group known as NetTraveler. Kaspersky doesn’t know who installed the backdoor, but it was likely planted to monitor the group or to steal its tools and the data NetTraveler had taken from victims.
Another case involved one threat actor hijacking another’s infrastructure to secretly launch its own attacks. DarkHotel — believed to be a South Korean operation — routinely compromises websites to launch attacks against Chinese targets. One of these compromised sites turned out to be hosting exploit scripts apparently belonging to another group, which Kaspersky calls ScarCruft. This group used the site to then launch its own attacks against Russian, Chinese, and South Korean targets.
In yet another case, Kaspersky researchers found a trojan backdoor called Decafett that appeared to be tied to the Lazarus and BlueNoroff groups — both believed to be connected to North Korea — while also using an obscure and unusual Dynamic DNS provider that previously had only been used by the DarkHotel group.
And a mysterious hacking group known as TigerMilk used a digital certificate in attacks on the Peruvian military and government institutions that had famously been used in the Stuxnet assault on Iran’s nuclear program. The authors of Stuxnet, believed to be the NSA and Israeli intelligence, had signed their malicious code with a digital certificate stolen from a company in Taiwan to trick the machines in Iran into thinking their malicious files were legitimate software from this company.
TigerMilk used the same certificate to sign its malicious files, but it did so long after the certificate had gained infamy in the Stuxnet attack and long after the certificate had been revoked and made invalid. Because an invalid certificate is of little use to attackers trying to get malicious code onto a machine, the Kaspersky researchers surmised that the only reason TigerMilk used it in their attacks was “to fool incident responders and researchers into casting blame on the notorious Stuxnet team.”
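Why a revoked certificate buys an attacker nothing can be shown with a toy validity check. This is a deliberately simplified model (real verifiers do full X.509 path validation); the serial number and dates below are made up for illustration:

```python
# Toy model of certificate trust: any verifier that consults a revocation
# list rejects a revoked certificate outright, whatever its signature says.
from datetime import datetime, timezone

REVOKED_SERIALS = {"0x1234abcd"}  # hypothetical revocation-list entry

def certificate_trusted(serial, not_after, revoked=REVOKED_SERIALS, now=None):
    """Return True only if the certificate is neither revoked nor expired."""
    now = now or datetime.now(timezone.utc)
    if serial in revoked:
        return False  # revoked: rejected even if otherwise well-formed
    if now > not_after:
        return False  # expired
    return True
```

Since the Stuxnet certificate would fail such a check on any up-to-date system, signing with it could not smuggle code past defenses, which is what led the researchers to conclude its only value was misdirection.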
It might seem that all attribution becomes suspect if hacking groups steal and reuse the tools of adversaries in order to cast suspicion on them. But the Kaspersky researchers said that true false-flag operations are rare and difficult to pull off. All of the examples they uncovered involved fairly unsophisticated attempts to confuse researchers. To effectively pin an attack on another player, a hacking team has to convincingly copy or mimic all of the other group’s tactics, techniques, and procedures, not just some of them. Make one mistake and the illusion can collapse.
“[I]n order to claim an amazing, true … false-flag operation where you genuinely embodied somebody else in every possible way, you have to know how they function in every possible way … how they act when they’re in the victim boxes, [you need to use] the source code and the payloads and the same encryption, the same command-and-control infrastructure — because any anomalies are the things we latch onto and say, OK that doesn’t look right,” said Guerrero-Saade.
Guerrero-Saade pointed to a recent case in which the research community was tracking what appeared to be a previously unknown Russian-speaking nation-state group. For six to eight months, researchers watched the group carefully and suspected it might actually be a well-known Russian-speaking group named Turla — except the code and infrastructure the group used were very different from Turla’s. The researchers were almost ready to conclude that it was in fact a completely new and different group, until the attackers slipped up and used one of Turla’s old and well-known tools. Instead of an entirely new Russian-speaking attack group, the researchers concluded, it was simply Turla using a batch of new tools in an attempt to obscure its identity and activity. The incident is a good illustration of why threat researchers need to be patient and take their time when doing attribution, since hasty work can lead to false conclusions.
Guerrero-Saade said that in talking about code reuse and false-flag operations, he and Raiu aren’t trying to cast doubt on every investigation that occurs. He thinks false flags are rare and that most attribution that threat researchers do is accurate. But researchers need to rethink the limitations and parameters of what they can know for sure, “so that we know when we’re faltering and so that we know when we’re being tricked and so we can admit when we cannot know. There are situations when we cannot know.”