The federal computer crime law, the Computer Fraud and Abuse Act (CFAA), prohibits “computer trespass.” This includes both “accessing” a computer without authorization and “exceeding the scope of authorization” to access a computer. If these terms seem vague and ambiguous to you, well, welcome to the club.
In June of 2021, the U.S. Supreme Court attempted to more clearly define what constitutes “exceeding authorization” to access a computer. In Van Buren v. United States, the court’s majority held that a police officer with lawful access to a criminal database who downloaded data and then used that data in violation of the restriction that the data could only be used “for law enforcement purposes” did not “exceed his authorization” to access the computer that housed that data.
The Gray Area
But what this means for security researchers is not clear. Security researchers—particularly the various shades of “gray hat” researchers—engage in various forms of testing, probing and exposing vulnerabilities in hardware, software, networks, routers and configurations. This may be done without the consent or permission of the software developer, the network owner or others—and may even be done in direct contravention of the wishes of the network owner or software developer.
At one end of the spectrum is the “white hat” hacker: they have the knowledge and consent of the “owner” and all interested parties, they operate within parameters set by the owner, they report the vulnerabilities they discover only as provided by contract and they stick to precisely what they are asked to do. At the other end of the spectrum is the “black hat” hacker, who not only looks for vulnerabilities but exploits them (or sells the exploits) in a way that is a crime. In the middle are “gray hat” hackers. They may be probing just publicly exposed configurations or networks. They may download and dissect software for actual or theoretical vulnerabilities. They may throw exploits at networks to see if they work (not to break in, per se, but to discover the state of security). They may do these things under an express “bug bounty” program; they may do them without such a program; or they may do them in excess of the terms of a bug bounty program. They may be “responsible” gray hat hackers or “irresponsible” ones. They may damage networks or leave them unscathed.

But in many cases, this broad category of “security researchers” exposes things that many would rather remain unexposed. They reveal that companies may be breaching their obligations to “reasonably” protect or encrypt data. They embarrass CISOs and sometimes get them fired. And, in some cases, they get prosecuted.
For example, Georgia security researcher Scott Moulton was both sued and prosecuted for conducting a port scan on a government computer as a prerequisite to permitting that network to access another network he had been hired to secure. Security researcher Bret McDanel was prosecuted when he publicly revealed that a supposedly “secure” email system provided by his former employer was—well, not so secure.
Stefan Puffer was prosecuted for demonstrating to a reporter that the Harris County, Texas, clerk’s office had open Wi-Fi connections that would have permitted access to records (though he did not access any such records).
Similarly, Chris Kubecka, an aerospace security researcher, was threatened with prosecution by lawyers for Boeing for exposing security vulnerabilities on a plane. Security researcher John Jackson was similarly threatened for exposing vulnerabilities in the online therapy application Talkspace. David Levin, a security researcher in Florida, was arrested for demonstrating that Lee County elections computers were vulnerable to a SQL injection attack. HD Moore, creator of the online vulnerability scanner Critical.IO, was threatened with prosecution for running the tool. Andrew “weev” Auernheimer was prosecuted for exposing vulnerabilities in AT&T’s applications. And MIT researchers who exposed vulnerabilities in the Boston subway’s “CharlieCard” were arrested at a DEF CON conference. Finally, there is the case of Aaron Swartz, the Harvard/MIT security researcher who was prosecuted for attempting to download publicly accessible documents (JSTOR records) through MIT’s network, and who ultimately committed suicide.
Exceeding the Scope
It’s not quite clear what the scope of the Supreme Court’s ruling was in Van Buren. The high court speaks of things like “gates up” or “gates down”—permission to access granted or permission to access denied. If permission is granted, the court suggests that merely doing something you are not specifically authorized to do—while in a “place” you are authorized to be—is not “exceeding the scope of authorization.” But it might depend on the “thing you are doing” while in the authorized place. If you are authorized to use your company’s computer, and use that access to, for example, delete all the files in the HR department, deliberately encrypt files with ransomware-style software or download and publish customers’ personal information, have you “exceeded the scope” of your authorization to access (use) the computer? Under the Van Buren test, the answer is likely “no,” although you may run afoul of other provisions of criminal law (including the transmission-of-harmful-code provisions of the CFAA itself). You are permitted to be where you are, and that’s the focus of the “trespass” provisions—not what you do once you are there.
But if you are “authorized” to be in one “place” (and there ain’t no “place” in cyberspace) and go somewhere you aren’t allowed to go, that is “exceeding the scope” of your authorization. But even that rule is fuzzy. Does the computer owner have to put some kind of code or technical barrier in place to keep you out? In a footnote (note 8), the court essentially punts, stating that it isn’t going to decide the issue at this time. What about things like URL manipulation, where a publicly accessible URL is modified to permit access to a supposedly private account?
Say the URL is something like http://megabank.com/accounts/accountnumber1234, which gives you access to your files and account. You change the last digits from “1234” to “5678,” which gives you access to someone else’s data. Bad security. Bad. But does the fact that something can be seen mean that you have “authorization” to see it?

Or take a Wi-Fi router. If I set up an SSID called “Free Wi-Fi” and don’t set a password, it seems that anyone who uses that connection is “authorized” (though they proceed somewhat at their own risk). But am I then “authorized” to access their computer? And what if I have a router with a very simple password—you know, like “Password”—or if I kept the manufacturer’s default password—you know, “Password”? Is a user “authorized” to use that Wi-Fi connection? And what if the user runs a simple dictionary crack? The problem lies in confusing the concept of “being able to get in” with “being authorized to get in.” At one end of the spectrum, the data or network is, in fact, public. At the other end, it’s just not secure. Is it the difference between an open door and a poorly constructed lock? Magic 8 Ball says—situation unclear, ask again later.
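The access-control failure in the URL example has a name: an insecure direct object reference (IDOR). A minimal sketch, with entirely hypothetical account numbers, names and functions (not drawn from any real bank), shows the difference between a server that returns whatever object the URL names and one that first checks that the logged-in user actually owns it:

```python
# Hypothetical in-memory "bank" for illustration only.
ACCOUNTS = {
    "1234": {"owner": "alice", "balance": 500},
    "5678": {"owner": "bob", "balance": 900},
}

def fetch_account_insecure(session_user: str, account_id: str) -> dict:
    """The 'bad security' case: returns whatever account the URL names.
    Being able to see the data is conflated with being authorized to see it."""
    return ACCOUNTS[account_id]

def fetch_account_checked(session_user: str, account_id: str) -> dict:
    """Looks up the account, then verifies the logged-in user owns it
    before returning anything."""
    account = ACCOUNTS[account_id]
    if account["owner"] != session_user:
        raise PermissionError("not authorized for this account")
    return account
```

With the first function, the “gate” is effectively up for anyone who can guess an account number; with the second, it stays down unless the session user owns the account. That, roughly, is the line the court’s “gates up or gates down” metaphor is groping toward, and why “the server sent it to me” is such an unsatisfying test for authorization.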
The high court also seemed to address the issue of exceeding the scope of authorization by accessing a part of a network or database you are not permitted to go to—not just doing something you aren’t allowed to do. If you are an employee of ABC company, with authorization to read your email and access your files, your access to the HR department or the CEO’s data would be “exceeding the scope” of authorization (depending on who you are and why you are accessing)—assuming that there were sufficient notices that such access was not permitted. But the latter requirement is necessary because the law prohibits “intentionally” accessing or exceeding authorization to access—something that requires proof of knowledge that the access is not permitted.
Also not addressed by the court is the concept of “necessity” or, to a lesser extent, “good purpose” or “good motive”—the latter of which are not really defenses to a trespassing charge. In the “real” world, if you go to your neighbor’s house and walk in the unlocked door, you might—or might not—be trespassing. If you are just curious, snooping or worse, then yes, it’s a crime. If, on the other hand, you are checking to see if they are OK or you heard a strange noise, then even though the neighbor did not “authorize” you to come in, society as a whole would find what you did permissible (assuming your neighbor doesn’t shoot you). And if there’s a fire in your neighbor’s house, you can kick in the locked door. That’s the necessity defense.
But can you use this kind of defense to say, “Hey, the reason I was jiggling the lock was because I know you have young kids, and there’s been a lot of crime in the neighborhood lately. I wanted to make sure your kids were safe?” Probably not. This comes under the broad rubric of “Who died and made you G-d?” Yet, much of security research is based on this premise. In addition, I may be probing your data or your network to find out if my data or my network is secure. Vulnerabilities may exist in the aggregate. Collecting data from multiple sources online may be necessary to figure out whether vulnerabilities exist and how widespread or significant they are.
The Van Buren case left many aspects of the CFAA unaddressed. Security researchers, then, remain in a twilight zone. The best advice, for now, is for researchers to adhere to responsible disclosure practices, to honor the terms of “bug bounty” programs where they exist and to seek legal counsel before doing anything “squirrely” (a legal term). The law in this area is evolving, and we may never have a definitive answer. The law of trespass, however, has existed for centuries, and I still can’t tell you whether I am legally allowed to open my neighbor’s car door to turn off their headlights. And you thought there were answers.