
Comments on the Wassenaar Arrangement 2013 Plenary Agreements Implementation: Intrusion and Surveillance Items


Topic: Miscellaneous 7:57 pm EDT, Jul 20, 2015

Submitted by:
Tom Cross
CTO – Drawbridge Networks

Thank you for opening a public comment period regarding the proposed implementation of export controls on Intrusion items. I am writing because I believe that these regulations may interfere with important work that computer security professionals do to protect the Internet from attacks. Breaches of both government and private sector computer networks are a regular item in the headlines, and they have significant impacts on our economy and our national security. The recently disclosed breach at the Office of Personnel Management, which resulted in the loss of security clearance information about millions of Americans, is a stark example of the problem that we are trying to combat.

The Bureau of Industry and Security (BIS) should exercise caution before taking steps that could make this problem worse than it already is. Export controls on computer security information can have a chilling effect on important international collaboration, even if that is not intended. Furthermore, it may be difficult to measure the security failures that occur as secondary effects of that breakdown in collaboration.

I am qualified to address this topic because I have professional expertise with both US Export Controls and Computer Security Vulnerability Research. From 2003 to 2012 I worked for Internet Security Systems (ISS), which was acquired by IBM in 2006.

At ISS, I served as an engineering advisor to their export compliance program. I helped the company understand how the software we were building fit into the framework of US Export Controls. In collaboration with our attorneys, I wrote Letters of Explanation to BIS for a number of different Export Classifications and I wrote one Commodities Jurisdiction request to the State Department.

Additionally, as part of my job, I engaged in primary computer security vulnerability research and for some time I managed the organization’s vulnerability research work. I identified vulnerabilities in popular commercial software applications, disclosed those vulnerabilities to the responsible software vendors, and worked with them to fix those issues. I participated in security industry information sharing programs in which technical information about vulnerabilities, and attack tools, is privately shared between information security companies, coordination centers, and the broader software industry. I had access through those programs to more technical detail about certain security vulnerabilities than was ever disclosed to the general public. It was my responsibility to ensure that ISS’s products correctly detected attack activity targeting those vulnerabilities. Those products are used by thousands of organizations around the world to protect their computer networks from attack.

I have broken my comments into four sections:

I. Technical Information about computer security issues that is shared between software vendors, computer security companies, and coordination centers is not necessarily ever disclosed to the public.

BIS has responded to several questions regarding the disclosure of information about vulnerabilities to software vendors and security software companies by explaining that information which is being prepared for public disclosure is not controlled. For example, see the answers to Questions 10 and 19 in the FAQ that BIS published on their website.

It is important for BIS to understand that often, detailed technical information that is provided as a part of a vulnerability disclosure is never shared with the public, that this detailed technical information often includes specific categories of information that BIS says will be controlled under the proposed rule, and that premature public disclosure of this information can and does fuel criminal activity.

I coauthored a paper with a colleague at Microsoft that provides numerous charts showing the timeline of public disclosure of information about different security vulnerabilities, with data about the amount of malicious attack activity targeting those vulnerabilities at different points in time. [1] One need not read our entire paper to get a sense of the impact that public disclosure can have. The first figure in the paper is particularly noteworthy. It comes from a different paper written by researchers at Symantec, [2] and shows the amount of attack activity both before and after disclosure of quite a few different vulnerabilities.

Of course, it is important to disclose some technical details about security vulnerabilities to the public, for the same reason that other kinds of fundamental research are disclosed: to help inform the community of practitioners about the technical facts and enable a discourse to occur about solutions. But exactly what information to disclose, and exactly when to disclose it, is often a complex balancing act determined on a case-by-case basis by the specific parties involved in the disclosure. A government policy requiring the public disclosure of certain technical details about vulnerabilities that are being shared across borders will cause the public disclosure of information that otherwise would have been held back, and some of these otherwise unnecessary disclosures will fuel criminal activity.

In answering questions about vulnerability disclosure, BIS has attempted to clarify that technical information about vulnerabilities themselves would not be controlled (Answer to FAQ Question 4). However, the FAQ that BIS published also clarifies that the controls will apply to several categories of information that are important parts of a vulnerability disclosure. In particular, the answer to Question 4 states that “information on how to prepare the exploit for delivery or integrate it into a command and delivery platform” would be controlled. Vulnerability disclosures often include information about how an exploit might be delivered to a target, so that the receiving organization can properly assess the risk associated with the vulnerability and the practicality of an attack.

BIS also states that “technical data to create a controllable exploit that can reliably and predictably defeat protective countermeasures” would be controlled. Vulnerability disclosures often include in-depth technical explanations regarding how reliable exploitation can be achieved, including how to defeat countermeasures. There are thousands of security vulnerabilities disclosed every year, and people who work to protect networks have to prioritize their work by focusing on the vulnerabilities that pose the highest risk. Questions about the reliability of an attack play a significant role in that prioritization process.
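To make the role of reliability concrete, here is a minimal sketch in Python of how a defender might rank vulnerabilities for remediation. The field names, weights, and numbers are invented for illustration; they do not come from any real triage product or scoring standard.

```python
# Illustrative only: the fields and weights below are invented for this
# sketch, not drawn from any real scoring system.

def priority(vuln):
    """Rank a disclosed vulnerability for remediation.

    severity:    base impact score, 0-10 (CVSS-like)
    reliability: assessed chance an exploit works predictably, 0.0-1.0
    exposure:    fraction of our hosts running the affected software
    """
    return vuln["severity"] * vuln["reliability"] * vuln["exposure"]

vulns = [
    {"name": "CVE-A", "severity": 9.8, "reliability": 0.9, "exposure": 0.7},
    {"name": "CVE-B", "severity": 9.8, "reliability": 0.1, "exposure": 0.7},
    {"name": "CVE-C", "severity": 6.5, "reliability": 0.8, "exposure": 0.9},
]

# The two critical-severity bugs rank very differently once exploit
# reliability -- exactly the detail a disclosure may share privately,
# and never publish -- is known.
for v in sorted(vulns, key=priority, reverse=True):
    print(v["name"], round(priority(v), 2))
```

The point of the sketch is that two vulnerabilities with identical public severity ratings can warrant very different responses, and the reliability assessment that separates them is often exactly the privately shared detail the proposed rule would control.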

Microsoft has a particularly mature vulnerability disclosure process, and several aspects of that process illustrate how this sort of information factors into a disclosure. Microsoft publishes an index along with every vulnerability that it discloses, called the “Microsoft Exploitability Index,” which indicates to the public how likely it believes exploitation of each vulnerability to be. [3] Microsoft states that this index is determined, in part, through an assessment of “the cost and reliability of building a working exploit for the vulnerability, based on a technical analysis of the vulnerability.” That assessment is often informed by detailed technical information provided by the original vulnerability researcher alongside the disclosure. That detailed technical information is not always publicly disclosed, and doing so prematurely can help criminals.

Microsoft also has a specific program that rewards vulnerability researchers with bug bounties in exchange for technical information about bypassing protective countermeasures. [4] Under this program, “qualified mitigation bypass submissions are eligible for payment of up to $100,000 USD.” Technical information about mitigation bypasses is as much a part of vulnerability research as the vulnerabilities themselves.

In addition, BIS wrote in the answer to Question 18 of their FAQ that Exploit Toolkits would be controlled. Security companies often share samples of Exploit Toolkits that are being used by criminals. It is important to test security software against the actual attacks that are happening in the wild, to make certain that those attacks are being correctly detected and blocked by that security software. Prohibiting the sharing of these samples across borders would be extremely disruptive.
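The testing workflow described above can be sketched as a toy regression check. The “signatures” and sample payloads below are invented stand-ins for the real exploit-kit material that security companies share; the point is only the shape of the workflow, in which detection logic is run against captured attack samples to confirm each one is flagged.

```python
# Illustrative sketch: the signatures and samples are invented stand-ins
# for real exploit-kit payloads shared between security companies.

SIGNATURES = [b"%u0c0c%u0c0c", b"heapspray"]  # hypothetical detection patterns

def detects(sample: bytes) -> bool:
    """Return True if any known signature appears in the sample."""
    return any(sig in sample for sig in SIGNATURES)

# Samples shared by a partner organization abroad; without them, we could
# not verify that the product actually catches in-the-wild attacks.
shared_samples = [
    b"GET /landing ... %u0c0c%u0c0c ...",
    b"<script>var heapspray = unescape(...)</script>",
]

for s in shared_samples:
    assert detects(s), "detection gap: sample not flagged"
print("all shared samples detected")
```

If the cross-border sharing of such samples were prohibited, the partner’s samples in this sketch would be unavailable, and the detection gap would go unnoticed until customers were attacked.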

BIS’s answers regarding the timing of public disclosure have also been too vague. As the vulnerability disclosure timelines in our paper demonstrate, it can take many months to fix complex vulnerabilities, and longer still for those fixes to be installed broadly enough across the Internet that it becomes relatively safe to publicly disclose detailed technical information about those vulnerabilities without arming attackers by doing so. I’ve personally seen numerous situations where more than a year has elapsed between the initial discovery of a vulnerability and the public disclosure of detailed information about that vulnerability.

During the time window between initial discovery and eventual public disclosure of a vulnerability, that detailed technical information may pass through a lot of hands, including researchers, coordination centers, bug bounty program administrators, employees of the responsible software vendor (who may work in different countries and may be of different nationalities), employees of various information security software companies (who also may be all over the world), etc. Is all of that detailed technical information clear of export controls during the entire time that the vulnerability is being worked on, just because some day, more than a year in the future, there is a desire to publicly disclose it?

The bottom line is that the proposed rules, as they stand, will be extremely disruptive to computer security research, coordination, and remediation, and will need to be considerably narrower and more precise in order to avoid creating problems.


II. The proposed regulation could disrupt the education and development of information security professionals.

One of the primary challenges that we face in protecting computer networks is the small number of truly talented information security professionals available. There are a variety of organizations that offer commercial training classes that play an important role in the development of new information security professionals. These classes often cost thousands of dollars per student for a few days of training. They have small class sizes with a great deal of instructor interaction and lab time.

These classes often teach students how to create controllable, reliable exploits, and how to prepare exploits for delivery, among other things. Every information security professional needs to have some hands-on experience with these things, so that they understand exactly what they are and how they work. You simply cannot become proficient at protecting computer networks from attack if you don’t understand how to attack them. If you don’t understand the realities of exploitation on a firsthand basis, you aren’t equipped to think about how to interfere with it.

BIS states in the answer to FAQ Question 4 that information about creating reliable exploits and preparing exploits for delivery is controlled “technology.” My understanding is that commercial training classes that involve subject matter that is export controlled “technology” cannot be offered to foreign national students. If that understanding is correct, it could have a very disruptive impact on these classes. The teachers and students of these trainings often cross national borders, because there are so few people in the world who are qualified to teach these classes at the highest level. My employer once flew me to Germany to take a class on reverse engineering oriented toward computer security researchers, alongside students from many different countries.

It may be necessary for BIS to craft a new public disclosure exception, similar to 734.9, which covers commercial training classes that are not offered in an academic setting.

III. Computer security professionals need to be able to travel outside of the country with their personal laptops and cellular phones without fearing that they may have violated the law by doing so.

The Supplementary Information for the Proposed Rule states, in the context of ECCN 4A005 and 4D004, that “No license exceptions would be available for these items, except certain provisions of License Exception GOV.” Presumably, this means that License Exception BAG (740.14) will not be available. There are thousands of people who work in information security who have software on their laptops that would be controlled under the proposed rule, including, for example, commercial penetration testing tools, as well as code that has not been publicly disclosed. If there is no license exception for temporary export of personal items, these people will risk prosecution every time they leave the country with their laptops. That is an unreasonable burden to place on all of these people, and it will have no demonstrable human rights benefit. License Exception BAG should apply to all of these items as well as the associated “technology.”

IV. The Wassenaar approach to controlling “intrusion software” related items is fundamentally flawed. Foreign implementation may harm US interests regardless of how the US decides to implement it.

BIS has issued inconsistent statements about the applicability of the proposed “technology” controls to vulnerability information. The Federal Register states that “Technology for the development of intrusion software includes proprietary research on the vulnerabilities.... of computers and network-capable devices.” However, the answer to Question 4 of the FAQ states that “The proposed rule would not control... Information about the vulnerability, including causes of the vulnerability.” The truth is that the answer to this question is not clear, different countries may interpret the rule differently, and the consequences are significant.

The US Software industry depends upon the open flow of information about security vulnerabilities, exploitation techniques, and samples of attack tools from security researchers all over the world. If, in trying to implement Wassenaar, other countries prohibit or deter their security researchers from sharing important information with Americans and American companies about security issues, that will create risks for anyone who uses software developed here.

The Wassenaar negotiators should have engaged in broader outreach within the information security world before reaching an agreement about what controls to put in place. Now that they’ve crafted a rule, I feel that there is a desire in some quarters to come up with a “quick fix” or interpretive approach that will allow the US government to proceed with enforcing it without any negative consequences. It is not clear to me that this is possible.

I don’t like the idea of US companies providing offensive computer intrusion tools to the militaries, intelligence agencies, and domestic police forces of foreign countries that do not share American principles regarding the right to individual privacy and freedom of speech. However, because of the risk that regulating this activity poses for information security, if we’re going to craft an entirely new set of rules, they should be crafted in the most narrow fashion possible, and then adjusted later if they aren’t working in practice.

It’s not clear to me that there are significant problems with the diversion of these technologies from civilian to military use. I don’t think governments buy offensive intrusion software frameworks from companies like HackingTeam because they are incapable of making that software themselves. They buy intrusion software frameworks from software companies for the same reason that they buy word processing software from software companies: because it’s cheaper and more expedient to buy off-the-shelf products than it is to develop something from scratch on your own, even if you know how.

Therefore, I think that a rule which narrowly controlled the commercial sale of intrusion software frameworks that are “specially designed” for a military, intelligence, or police end use and specifically marketed to those end users might be sufficient to address the human rights concern that is being raised here, without impacting any legitimate defensive computer security work. Unfortunately, that is not what Wassenaar agreed to.

Thank you for taking the time to read and consider my input on this important matter. Feel free to email me if you have questions or desire further clarification about any of the points that I have made herein.
