By Mark Ames
The Post | Oct. 29, 2021 07:18:25

A few months after Edward Snowden exposed the NSA's vast spying operations, the agency faced a barrage of leaks that left it with a new appreciation for the power of information.
But a new paper from researchers at MIT, the University of Michigan and the University to Combat Counterintelligence (TC4C) sheds light on one of the more significant of those leaks: the one that caught the NSA red-handed by revealing the existence of a secret agency tool that could monitor internet traffic for content on a targeted website.
The leak of the tool's existence and use sparked public outrage and prompted changes in the way the NSA operates, said lead author Adam Segal, a research associate in MIT's Computer Science and Artificial Intelligence Laboratory.
It also gave rise to a number of legal challenges, including one involving former CIA director John Brennan, in which the agency sought to block a key provision of the Patriot Act that made it illegal to disclose classified information.
The NSA also said that it was legally obliged to disclose the tool’s existence in the wake of the Snowden leaks, and that it has since revised its policies and procedures.
The new paper, published this week in the Journal of the American Statistical Association, is the first comprehensive look at the leaks and their impact.
"We were really surprised to find how important it was for the NSA not to be able to do this kind of stuff," Segal said.
"The fact that the tool was being used to spy on the internet, and could be monitored by people in a foreign country, really made the NSA very nervous."
That they didn't have access to it was a good thing: it could potentially have been used against them.
The tool was designed to detect foreign actors who had installed malware on their websites to interfere with NSA surveillance efforts, and to help the agency identify suspicious content.
It was built with data collected in 2016 by researchers from MIT and the NSA, along with a small team at the Center for Strategic and International Studies, a Washington think tank.
The tool can detect the type of content that the NSA would like to target, such as the "top" or "bottom" pages of websites that carry particular "tags," keywords that tell the NSA what the site is about.
The tag system is used by the agency to identify sites that may be targeted by foreign actors, such as the Russian military.
It is not, however, used to track content from the webpages of ordinary Americans.
The NSA says that it does not monitor all the pages on a website that are "top of the stack," but instead only those that have a large number of "tags," which include keywords related to the target.
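The tag-matching behavior described above can be sketched in a few lines. This is a hypothetical illustration only: the tag keywords, the threshold for "a large number of tags," and all function names are assumptions for the sake of the example, not details from the paper.

```python
# Hypothetical sketch of keyword-"tag" matching: a page is flagged only
# when it carries enough matching tags, not merely because it sits at
# the "top of the stack." All names and values here are illustrative.

TARGET_TAGS = {"exploit", "c2", "dropper", "beacon"}  # assumed tag keywords
MIN_TAG_HITS = 3  # assumed threshold for "a large number of tags"

def extract_tags(page_text: str) -> set:
    """Return the target tags that appear as words in the page text."""
    words = set(page_text.lower().split())
    return TARGET_TAGS & words

def is_flagged(page_text: str) -> bool:
    """Flag a page only when it matches enough tag keywords."""
    return len(extract_tags(page_text)) >= MIN_TAG_HITS

print(is_flagged("ordinary news page about weather"))           # False
print(is_flagged("dropper beacon traffic to a known c2 host"))  # True
```

Under this reading, an ordinary page that happens to match one keyword would never be flagged; only pages dense with target-related tags would be.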
The document describes the tool in terms of "the type of website that a foreign actor may use, the type and scope of traffic that they may be engaging in, and the extent to which they are actively seeking to communicate with or recruit against US persons or organizations."
It does not identify content that is only a small part of a larger site, even when that part is the only one being analyzed for foreign intelligence purposes.
The NSA also points out that it can't actually spy on all websites on the Internet, because some of those websites are "tied to the Internet address that the Internet service provider (ISP) has for the domain name."
In that case, the NSA has to rely on "troubleshooting" that can only be performed by a third-party provider, such as Facebook.
The new paper points out that the NSA can use this type of metadata, and its ability to spy directly on foreign targets, in the name of national security.
That metadata can be used to identify who is communicating with or recruiting against US persons, as well as the target's social media accounts, according to the paper.
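A minimal sketch can show how connection metadata alone, without any message content, links accounts to a target. The records, field layout, and domain names below are invented for illustration and are not drawn from the paper.

```python
# Illustrative sketch: from bare metadata records (who contacted which
# domain), count which accounts repeatedly contacted a target domain.
# All records and names here are hypothetical.

from collections import defaultdict

records = [  # assumed metadata records: (source_account, destination_domain)
    ("@alice", "recruit-site.example"),
    ("@bob", "news.example"),
    ("@alice", "recruit-site.example"),
    ("@carol", "recruit-site.example"),
]

def contacts_of(target_domain: str) -> dict:
    """Count how often each account contacted the target domain."""
    counts = defaultdict(int)
    for account, domain in records:
        if domain == target_domain:
            counts[account] += 1
    return dict(counts)

print(contacts_of("recruit-site.example"))  # {'@alice': 2, '@carol': 1}
```

The point of the sketch is that no content is inspected at all: the who-contacted-whom records by themselves are enough to surface the accounts in contact with a target.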
It also shows that the agency relies on "hacking tools" to go after foreign targets.
It says it can use "exploitation techniques" that allow it to monitor traffic from a foreign target's website and then target that actor in an automated fashion, in what is known as a "spoofing attack."
The researchers describe a number of such attacks, including ones that they have performed in the past, but this is the most sophisticated, the authors said.
The paper describes a number of methods that the US government has used to exploit vulnerabilities in internet infrastructure, including the Stuxnet worm that targeted Iran in 2009, and Flame, a cyberweapon that the government has been working on for several years.
It said the NSA could also use the tools described in the paper to target people in the United States and around the world, as well as "mal