The FoHo, misinformation and cybersecurity

Opinion by Ashwin Ramaswami
Jan. 22, 2020, 1:03 p.m.

The Fountain Hopper (FoHo) sent an email blast last week alerting students that there had been an “UNPRECEDENTED VOLUME OF PHISHING ATTACKS.” Additionally, FoHo claimed to identify “TWO MAJOR VULNERABILITIES THAT LIKELY ENABLED THE ATTACKS” and alleged that “STANFORD KNEW ABOUT EMAIL VULNERABILITY AND NEGLECTED TO FIX IT.”

The FoHo email is deeply misleading. I will describe what happened, how the FoHo misreported it and the general implications of the FoHo’s misrepresentations.

What happened

The FoHo story was written in response to a Dec. 16 report by University IT (UIT) about an “unprecedented volume in phishing attacks” that led to “hundreds of compromised user accounts.” In a phishing attack, an attacker impersonates a trustworthy entity in order to steal sensitive information. For example, an attacker might send you a phishing email that links to a Stanford-like login page; once you enter your password, the attacker gets access to your account.
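
As a rough illustration of how lookalike links work, here is a minimal Python sketch that checks whether a link's hostname actually belongs to stanford.edu. The URLs are made-up examples for illustration, and real phishing filters do far more than this simple check.

```python
from urllib.parse import urlparse

def links_to_stanford(url: str) -> bool:
    """Return True only if the URL's hostname is stanford.edu or a subdomain of it."""
    host = (urlparse(url).hostname or "").lower()
    return host == "stanford.edu" or host.endswith(".stanford.edu")

# A genuine Stanford hostname passes the check ...
print(links_to_stanford("https://login.stanford.edu/idp/profile"))        # True
# ... while a lookalike domain of the kind used in phishing emails fails.
print(links_to_stanford("https://stanford-edu-login.example.com/login"))  # False
```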

Two-factor authentication (2FA) is a common defense against phishing, because it requires a second step, such as approving a Duo prompt or entering a code texted to your phone, before granting access to your account. However, 2FA is not always required for access to Stanford email, which meant a stolen password alone was enough for these attackers to get in. In the report, UIT acknowledged that the University is facing a phishing problem and announced efforts to improve both detection of phishing and community awareness of phishing emails. It also announced plans to close this loophole and require 2FA for email access by October 2020.
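
To make the second factor concrete, here is a minimal sketch, using only Python's standard library, of how a time-based one-time code (the kind an authenticator app generates) is derived under RFC 6238. It illustrates the general mechanism, not Duo's or Stanford's actual implementation, and the secret shown is a made-up example.

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    """Derive a time-based one-time code (RFC 6238) from a shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period                  # 30-second time step
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation (RFC 4226)
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

# The server and the phone share the secret; a phished password alone
# is useless without the current six-digit code.
print(totp("JBSWY3DPEHPK3PXP"))  # example secret; prints a different code every 30 seconds
```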

What the FoHo said

The FoHo claimed to have found “two key email vulnerabilities” in Stanford’s systems that enabled the attacks, and that the University failed to address them. However, both the FoHo’s assumptions about these vulnerabilities and the conclusions it draws from them are problematic.

The first vulnerability the FoHo cites in its correspondence with the University is public access to an LDAP database containing students’ email addresses. LDAP, the Lightweight Directory Access Protocol, is a standardized protocol for storing and querying directory and profile information. It is worth noting, however, that many Stanford emails are already publicly accessible on the Stanford Directory website and could be scraped from there. Hundreds more are publicly listed on sites such as the Stanford CS directory and the Stanford Athletics staff directory. The University, in fact, has no privacy obligation under FERPA to keep Stanford emails secret; perhaps for this reason, Stanford classifies email addresses as “Low Risk” under its own data classification standards. The University accordingly concluded in its response to the FoHo that “we have no specific reason to believe that the Canvas phishing campaign was related to our LDAP configuration.” Given the several publicly available ways to collect emails, and the relative obscurity of the LDAP database, I don’t see how the FoHo concluded that this vulnerability “likely enabled” the attack.
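
For readers unfamiliar with LDAP, the sketch below shows roughly what an anonymous directory query looks like, using the third-party Python ldap3 library. The hostname, base DN and filter are placeholders for illustration, not Stanford’s actual configuration, and this is not the query anyone is alleged to have run.

```python
# Requires the third-party "ldap3" package (pip install ldap3).
from ldap3 import Server, Connection, SUBTREE

# Placeholder host and base DN -- not Stanford's actual configuration.
server = Server("ldap.example.edu")
conn = Connection(server, auto_bind=True)   # anonymous bind, if the server permits one

# Ask for the "mail" attribute of every person entry the server is willing to return.
conn.search(
    search_base="dc=example,dc=edu",
    search_filter="(objectClass=person)",
    search_scope=SUBTREE,
    attributes=["mail"],
)
for entry in conn.entries:
    print(entry.mail)
```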

It’s not clear whether the second vulnerability, that “Stanford IT did not block ‘spoofed’ emails,” is even preventable at all. Email spoofing means forging the sender address of an email, much like writing someone else’s return address on a snail mail envelope. Because of how email works, there is no foolproof way to stop spoofing at the source. (Encrypted email tools such as Keybase can cryptographically verify a sender, but only if both the sender and the recipient switch to encrypted email, which makes them impractical for large organizations.) You can’t reliably stop a spoofed email from being sent; you can only catch it and label it as spam after it has been delivered. Email providers like Stanford therefore rely on spam filters and the standard anti-spoofing protections built into Office 365. The FoHo makes it sound as though fixing email spoofing is as easy as “patch[ing] a vulnerabilit[y],” when in reality it can never be fully solved and instead demands continual mitigation.
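
To see why the sender address can’t simply be trusted, here is a minimal sketch, using only Python’s standard library, that constructs a message whose From header claims to be someone it isn’t. The addresses are made up; in practice, receiving systems rely on checks such as SPF, DKIM and DMARC plus spam filtering to flag such mail after it arrives, which is exactly the after-the-fact labeling described above.

```python
from email.message import EmailMessage

# The "From:" header is ordinary data supplied by whoever hands the message
# to a mail server -- nothing in the message format itself proves it is genuine.
msg = EmailMessage()
msg["From"] = "registrar@university.example"   # forged, made-up sender
msg["To"] = "student@university.example"
msg["Subject"] = "Action required: verify your account"
msg.set_content("This message merely claims to come from the registrar.")

print(msg)   # a perfectly well-formed message with a sender no one has verified
```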

Why it matters

The FoHo correctly identified two problems with Stanford’s email systems: a public list of email addresses in the LDAP database, and email spoofing. However, it misrepresented both issues in a way that makes it sound as though Stanford UIT has been grossly negligent or simply doesn’t care about students’ email security. The FoHo doesn’t consider that the stated vulnerabilities may not be feasible to fix, or that Stanford may not even be responsible for the attacks. Nor does it consider the steps UIT is already taking, such as improving phishing detection and requiring 2FA for email, which may be a bigger priority than restricting access to SUNet IDs and email addresses that are low-risk data and publicly accessible anyway. In other words, the FoHo wants us to start “calling B.S.” on the University’s claims purely because the FoHo says so.

This follows a pattern of misinformation by the Fountain Hopper. In an issue last fall, the FoHo correctly identified that Queer Chart had exposed user data, but it also listed as vulnerabilities the fact that the website had “easily readable, non-production source code” and that it wasn’t GDPR-compliant. However, the readability of source code has nothing to do with a system’s security, and GDPR compliance is not a cybersecurity vulnerability (and the regulation’s record-keeping requirements generally apply only to organizations with 250+ employees doing business in the EU, anyway). Although one could make an argument for fixing these issues, it is simply wrong to call them “sweeping vulnerabilities.”

For the FoHo, it makes sense to magnify everything to major-vulnerability status: it fuels more outrage and more readership. But if everything is a major cybersecurity vulnerability, students will lose any sense of what does and does not constitute a severe data breach, and will therefore be less prepared to handle one when it actually occurs. A further risk is that sensational media treatment of breaches encourages people to report them to the press first instead of to the developers of the affected system. Although the press may have handled such cases responsibly, that course of action carries disciplinary, legal and privacy risks. For example, the student who discovered the 2017 GSB data leak released a report that rightly called the University out for lying to students about offering only need-based financial aid. But was such an analysis worth compromising the privacy of the 2,000+ students whose applications were accessed to compile the report, and inviting legal repercussions for a potential violation of the Computer Fraud and Abuse Act? And what risk would the student have been taking on had he found no evidence of wrongdoing by the University? Such situations involve complex ethical questions, and answering them requires a nuanced understanding of both the severity of the vulnerability and the tradeoffs among methods of disclosure.

The Stanford Bug Bounty Program is, in fact, a much safer way to report a vulnerability: the issue can be fixed as soon as possible, and the program’s safe harbor provisions protect the reporter from legal action. I’ve had the pleasure of working with Mike Takahashi and Ian Redzic, who work on Bug Bounty at UIT. They are responsive and care about the security of Stanford’s systems. Creating the impression that the University doesn’t care about IT security will only decrease the chance that vulnerabilities are properly reported through Bug Bounty.

What should we be doing?

News outlets should be more circumspect when reporting on vulnerabilities. They should have evidence before declaring that a particular vulnerability is what enabled a cyberattack. They should weigh the severity of vulnerabilities before drawing conclusions about them. And anyone who really does find a severe vulnerability may be wisest to go first to the Bug Bounty Program rather than to a news outlet.

At the same time, the University should understand the danger of a misinformed press and should be as clear with the press as possible. Perhaps the FoHo’s sensational reporting was enabled in part by the fact that the University did not directly answer any of the FoHo’s questions other than the one about the LDAP vulnerability. Nonetheless, the FoHo hardly gave the University any time to reply, sending out 17 questions at 2 a.m. and requiring a response by 11 a.m. (This echoes the 40-minute deadline the FoHo gave Queer Chart when contacting them shortly before publication.)

There are many more steps the University should take to improve email security and vulnerability reporting. The Bug Bounty Program is a good first step, but the University should be more proactive about publicizing it and more timely in responding to good-faith vulnerability reports. The notification about the phishing attacks appeared only on Canvas and on UIT’s website; it would have been far more effective to email all students directly. If UIT doesn’t keep students adequately informed, the Fountain Hopper will fill the gap.

There are also longer-term solutions the University should look into. The training video about email safety and phishing that every new Stanford student is required to watch should be supplemented with similar training on vulnerability disclosure. Stanford has begun sending biweekly simulated phishing messages to all employees; it could extend that practice to students as well. Computer science majors should take a required course in cybersecurity, including at minimum a unit on vulnerability disclosure and the legal dimensions of these topics. The University should be proactive rather than reactive about crucial security steps such as enabling 2FA: it implemented 2FA on accounts only after a major 2013 security breach exposed all SUNet IDs and hashed passwords, and the lack of 2FA also exacerbated a 2016 tax data breach on a vendor’s system. Additionally, as students manage more computer systems that handle students’ data (such as the Stanford Marriage Pact, Queer Chart, the 2021 Frosh Matcher and TreeHacks), Stanford should work with students to develop a vulnerability disclosure policy that allows for a responsible process for reporting and fixing vulnerabilities in students’ systems. Only then will students’ data be truly safe.

If you care about these issues as well, please fill out this form to share your thoughts with UIT. (This form, too, was hidden away in a corner of UIT’s website; UIT, take note!)

As cybersecurity becomes more and more important, everyone in the Stanford community needs to work together to keep everyone informed and stay ahead of the game. The University should emphasize clearer communication with the student body on these issues. And while the press has an important role to play in making the technical topics involved more accessible to the student body, it also has an overriding responsibility to focus on informing instead of just sensationalizing.

This op-ed has been corrected to reflect that the FoHo gave Queer Chart a 40-minute deadline when contacting them shortly before publication, not when first contacting them.

Disclosure: I work for TreeHacks, but the opinions expressed in this piece are my own.

Contact Ashwin Ramaswami at ashwin99 ‘at’ stanford.edu.

Ashwin Ramaswami '21 was the Chief Technical Officer of The Daily from 2018-19 during Vols. 254 and 255 and in 2020 during Vol. 257. Contact him at ashwin ‘at’ stanforddaily.com.
