Earlier this month Collin Anderson at Access published a whitepaper on the new Wassenaar controls relating to "intrusion software."
The whitepaper takes the position that the exchange of exploits and vulnerability information across borders is completely outside the scope of what Wassenaar controls. It asserts that:
Exploitation is not concomitant with Intrusion Software nor is vulnerability research necessarily Intrusion Software development.
I'd like to think that's the case, but when I read the Wassenaar text I have trouble reaching the same conclusion. Even if Wassenaar didn't intend to cover vulnerability research, the text they wrote certainly seems to do so. I've come away with the conclusion that the Wassenaar authors may have crafted their policy under an erroneous understanding of how exploitation works.
Wassenaar defines "Intrusion Software" as follows:
"Software" specially designed or modified to avoid detection by 'monitoring tools', or to defeat 'protective countermeasures', of a computer or network-capable device, and performing... the modification of the standard execution path of a program or process in order to allow the execution of externally provided instructions.
Let's expand the part about defeating 'protective countermeasures', since those are also defined specifically in the Wassenaar text:
"Software" specially designed or modified to defeat techniques designed to ensure the safe execution of code, such as Data Execution Prevention (DEP), Address Space Layout Randomisation (ASLR) or sandboxing, of a computer or network-capable device, and performing... the modification of the standard execution path of a program or process in order to allow the execution of externally provided instructions.
This seems to be a perfect description of an exploit. In fact, I don't think that I could have written a clearer legal definition for "exploit" if I tried.
An exploit is software that modifies the standard execution path of a program in order to allow the execution of externally provided instructions. These days, most operating systems have countermeasures that are designed to make it difficult to write an exploit. Data Execution Prevention (DEP) and Address Space Layout Randomisation (ASLR) are examples of exploit countermeasures. If you're going to write a successful exploit for a modern operating system, you usually have to contend with and defeat those countermeasures.
So, most exploits that are being written today meet both of these criteria. They defeat a countermeasure like DEP and then modify the execution path in order to allow for the execution of externally provided instructions. Therefore, most exploits are "Intrusion Software" under this definition.
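To make the mechanics concrete, here is a minimal C sketch. Everything in it is illustrative: the names, the struct layout standing in for a stack frame, and the copy standing in for a buffer overflow are all assumptions of the model, and no real countermeasure bypass (DEP, ASLR) is shown. It demonstrates only the core of the definition: an unchecked copy of attacker-controlled input overwrites a code pointer, modifying the standard execution path so that externally provided code runs instead.

```c
#include <stddef.h>
#include <string.h>

/* Toy model of "modifying the standard execution path of a program in
 * order to allow the execution of externally provided instructions."
 * Illustrative only -- no real countermeasure bypass is modeled. */

static const char *intended(void) { return "standard execution path"; }
static const char *hijacked(void) { return "externally provided instructions"; }

/* Stands in for a stack frame: a fixed-size buffer sitting just below
 * a code pointer (the analogue of a saved return address). */
struct frame {
    char buf[8];
    const char *(*next)(void);
};

/* The "victim": copies externally supplied input with no bounds check
 * on the buffer, then follows its (possibly overwritten) code pointer. */
static const char *run_victim(const unsigned char *input, size_t len) {
    struct frame f;
    f.next = intended;
    if (len > sizeof f)
        len = sizeof f;          /* keep the demo well-defined */
    memcpy(&f, input, len);      /* the unchecked copy */
    return f.next();             /* execution path has been modified */
}

/* The "exploit": filler bytes up to the code pointer, then a new target. */
const char *demo(void) {
    unsigned char payload[sizeof(struct frame)];
    const char *(*target)(void) = hijacked;

    memset(payload, 'A', sizeof payload);
    memcpy(payload + offsetof(struct frame, next), &target, sizeof target);
    return run_victim(payload, sizeof payload);
}
```

In a real exploit the overwritten pointer would direct execution to attacker-supplied machine code rather than to a function already in the program, but the structure of the hijack is the same.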
So how could the Access whitepaper conclude that "Exploitation is not concomitant with Intrusion Software"?
After an extended conversation with Mr. Anderson, the difference in perspective seems to hinge on the words "externally provided instructions." Every exploit comes with code that it executes, and that code is external to the program that is having its execution path modified. To me, that means that every exploit allows for the execution of "externally provided instructions."
However, Mr. Anderson seems to be saying that Wassenaar didn't intend to implicate THAT code when they wrote "externally provided instructions." In Mr. Anderson's mind, and apparently in the minds of the Wassenaar negotiators, there seems to be a clear distinction between the sort of code that an exploit executes and a large application program (sometimes referred to as "malware") that is often installed on a host after an exploit executes.
The problem is that there is absolutely no distinction between these things. If the Wassenaar authors imagined that there is a distinction, then they did so in error.
A quick Google image search for the words "exploit POC" turns up a lot of examples of proof-of-concept exploits posted to the Internet by security researchers, including the one in the figure on the right.
This image shows a program called Ollydbg, which is a popular debugger, attached to a process that is having its execution path modified by an exploit. In this case, the exploit code is opening the Windows Calculator. Popping up "calc.exe" is a pretty common thing that security researchers do in "proof of concept" exploits, as it demonstrates that the execution path of the targeted program has been successfully modified (bypassing any countermeasures along the way), but it doesn't do anything particularly harmful.
Here is an example of a "shellcode" that executes "calc.exe." Let's call this Example A. This short code snippet includes a character string called "evil" which has a bunch of binary data in it. That binary data is a set of instructions that, if executed, will launch "calc.exe" on a certain version of Windows. This shellcode is something that a security researcher might include in a "proof of concept" exploit in order to demonstrate that code execution has been successfully obtained. The instructions would be passed to the computer once the execution path of the targeted program or process has been modified.
Here is an example of a much more malicious "shellcode." Let's call it Example B. It adds a new administrator account to the underlying machine, so that the attacker can log into it.
If you compare Example A and Example B, they are pretty similar to each other. I don't see how you could conclude that Example B is a set of "externally provided instructions" within the meaning of Wassenaar but Example A is not. Here is a website with a whole bunch more examples that do different things. Some open up command shells, some run local programs, some modify local system files, some add new administrator accounts, some download other applications. The ones that serve as proofs of concept are interspersed with others that are malicious. There is no clear distinction between them.
The fact is that you can do anything malicious that you want with the code that the exploit executes. You don't have to access any secondary set of additional externally provided instructions. Once you modify the execution path of the application, you can do anything that you want to do.
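That indistinguishability can be sketched in code. In the illustrative C snippet below, the byte values are arbitrary stand-ins rather than real machine code, and the names are hypothetical; the point is structural. Both a proof-of-concept payload and a malicious payload are the same kind of object, an opaque string of instruction bytes, and the exploit's staging step handles them identically, with no way to know or care what they will do.

```c
#include <string.h>

/* Illustrative only: the byte values below are arbitrary stand-ins,
 * not real machine code. A "proof of concept" payload and a malicious
 * payload have the same form -- opaque instruction bytes. */

/* Stand-in for Example A: bytes that would pop up calc.exe. */
static const unsigned char payload_poc[]   = { 0x6A, 0x00, 0xC3 };

/* Stand-in for Example B: bytes that would add an admin account. */
static const unsigned char payload_admin[] = { 0x31, 0xC0, 0x50, 0xC3 };

/* The exploit's staging step: copy whatever bytes it was given into
 * the region whose execution it has arranged. Note that nothing here
 * can distinguish a benign payload from a malicious one. */
static size_t stage(unsigned char *region, size_t cap,
                    const unsigned char *code, size_t len) {
    if (len > cap)
        return 0;                 /* payload doesn't fit */
    memcpy(region, code, len);    /* identical handling for any payload */
    return len;
}
```

Any attempt to regulate "externally provided instructions" has to contend with the fact that the machinery shown here is the same regardless of what the bytes do.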
Therefore, like it or not, Exploitation IS, in fact, concomitant with Intrusion Software. Furthermore, Exploitation IS concomitant with vulnerability research, so vulnerability research IS necessarily Intrusion Software development.
That may not be what Wassenaar intended, but that's what the rule they wrote actually means, and the only way to change that is to change the wording of the rule.
Of course, Intrusion Software isn't controlled by Wassenaar. However, "technology" for the "development" of Intrusion Software IS controlled. I don't really understand what Wassenaar was attempting to accomplish with this technology control. It's OK for me to sell Intrusion Software across borders without an export license, but if I call someone on the telephone and give them technical input on how they can develop their own, I have to get a license? That doesn't make much sense, frankly. If I'm not allowed to provide them with technical information, but I can sell them software, I'll just sell them software!
Unfortunately, as I wrote in my letter to BIS, this technology control could threaten the international disclosure of security vulnerabilities in software. It could threaten everything that we do behind the scenes to identify and fix vulnerabilities.
Technology controls are comprehensive, because otherwise they would be meaningless. Americans are not allowed to call up somebody in Iran and explain only PART of the information needed to develop a nuclear weapon. Providing ANY PART of the technical information that is controlled is a technology export.
Therefore, providing technical information related to ANY aspect of developing Intrusion Software is controlled. Regardless of how you interpret "Intrusion Software," clearly, defeating countermeasures is a part of it, and modifying the execution path of a program is a part of it. So you can't transfer technology related to developing software that defeats countermeasures, and you can't transfer technology related to developing software that modifies the execution paths of programs. Not without a license anyway.
When I call up a software vendor in another country and I tell them that there is a way to defeat a countermeasure in the operating system software that they make, or that there is a vulnerability in the software that they make that allows me to modify its execution path, that is exactly what I am doing. I describe to them how to develop software that accomplishes this, and often I provide them with an example "proof-of-concept" program that illustrates it.
Because Internet Security depends on allowing people to do that, this technology control won't fly.
The Access whitepaper concludes that:
It is incumbent that export control authorities refrain from considering broad interpretations of Intrusion Software that might lead to attempts to regulate exploits or vulnerability sales.
I completely agree with that recommendation, and I think that it's vital that BIS and other national authorities heed it. However, that recommendation is really equivalent to advising national authorities to largely ignore the text of Wassenaar, particularly with respect to the technology controls. What we really ought to do is go back and amend that text, so that it narrowly describes ONLY the things that Wassenaar actually intends to control.