SecurityWeek continues its Hacker Conversations series on leading cybersecurity researchers in a discussion with Project Zero’s Natalie Silvanovich.
Natalie Silvanovich is a member of Project Zero – an elite group of researchers employed by Google. “Our key mission,” she told SecurityWeek, “is to make the zero-day difficult. Basically, we focus on the problem of zero-day vulnerabilities being used by attackers in the wild, and we try to solve the problem in a number of ways.”
Project Zero researchers seek out the type of vulnerabilities that targeted attackers would use, and then encourage and help vendors to fix them. “We also spend a lot of time understanding and writing about the critical vulnerabilities that are used in zero days. This information is available to everyone, and we work with vendors to try and improve their software so that vulnerabilities are hopefully less available and more expensive for attackers to use in the wild.”
Project Zero is a team of just over a dozen people. It is a distributed team with offices in Zurich and Mountain View, and some remote workers. Its members work both in isolation and in collaboration. “I wish we could collaborate more,” Silvanovich told SecurityWeek. “Most of the projects that I work on are projects that are just me, but we have done a few large collaborative projects too. A couple of years ago, I did a large iMessage project with Samuel Groß. And I’ve done a few others, but mostly we work individually or in teams of two or three on our projects.”
Disclosure of discovered vulnerabilities can be a contentious issue. Researchers who get their income from bug bounty programs are generally constrained by the bounty program’s own disclosure rules. Silvanovich is a Google employee and therefore governed by Google rules. If she finds a vulnerability, she will immediately report it to the vendor.
“Right now, we have a 90-day policy,” she explained. “So, vendors have 90 days to fix the bug, otherwise we’ll disclose it publicly. Even if they fix the bug, we give them 30 more days before we disclose it. We’ve had a lot of success with this policy. It’s very rare for a vendor to not fix a bug in that timeline – but even with Project Zero we occasionally find vendors that don’t take us seriously.” The one deviation from this policy is the discovery of a vulnerability that is already being exploited in the wild. In those circumstances, the grace period is reduced to seven days.
A common view among many researchers is that the profession chooses them rather than the other way around. This didn’t happen with Silvanovich. She followed a traditional career path, albeit to a somewhat nontraditional destination. “When I was at university, we had something called co-ops (akin to internships in the US). I saw an opportunity for a ‘junior hacker’. It sounded cool, so I applied, and I got it.”
Although she had been interested in computers, this was her first introduction to cybersecurity. Other researchers we have spoken to started at an early age, tinkering with the family computer; they often ignored or dropped out of a university education, preferring to teach themselves. Silvanovich’s university experience, by contrast, was critical to her evolution as a researcher.
“I don’t think I would have known about this area, or explored it, if I had a different internship at university. I was interested in computers, but I also had a lot of other interests. Without that internship, I probably would have pursued a different career. But I would also say that I have a somewhat relevant degree with my education – in Electrical Engineering from the University of British Columbia. You know, there’s lots of stuff I learned that I don’t use, but there’s a lot of stuff that I do use.”
Between the degree and the internship (which was with BlackBerry), she learned how to program and how electronics and mobile devices work. “And even strange stuff,” she added. “For example, in my degree course, we took a lot of different types of math. As a researcher, I’ve spent a lot of time looking at Adobe Flash, which includes image processing algorithms. Because of my degree, I felt I understood what the software was trying to do. My university education helped me get into this area, but it has also given me some of the skills that I still find useful in my career.”
Learning how to program at university leads to an interesting question: would a researcher who understands so much about code and how it works make a good programmer? “I think I’m a ‘good programmer’,” she said, “because I can solve problems and I can understand the code. But if you talk to someone at Google… they do not think I’m a good programmer. Because I don’t know how to follow a process; I don’t get my code reviews; I don’t use the right variable names. So, I do think that commercial programming, especially on a wide scale, requires a lot of skills that a security researcher doesn’t necessarily have.”
It’s almost as if there’s an element of the buccaneer to the researcher – strict adherence to the rules takes a back seat to getting to the solution. Researchers and coders may have similar technical skills but have very different personalities – and while a coder could become a researcher, a researcher may struggle to become a professional coder.
But clearly, Silvanovich loves her work. Asked which discovery has given her the most pleasure, she replied, “The most recent.”
Like all the other researchers we have talked to, Silvanovich puts ‘curiosity’ as the number one characteristic that every researcher must possess. But to this she adds dedication, resilience and plain stubbornness. “You spend a really long time looking for vulnerabilities,” she explained, “and sometimes you just don’t find them.”
There’s an interesting side question here. If researchers are good people and malicious hackers are bad people, and both categories do fundamentally the same thing, what is the personality difference between them?
Observationally, there is a growing perception that a high percentage of hackers may be neurodiverse; that is, on the autistic spectrum. Common characteristics of the autistic spectrum can include difficulty engaging in social interaction and an ability to spend long periods working alone. That is very close to the stereotypical image of a hacker sitting alone in a darkened room in front of a computer.
None of our researchers have recognized this in themselves – indeed, the ability to socialize and communicate is considered important. All of them stress the value of attending events and conferences, meeting like-minded people and discussing issues. “Some of my best ideas,” said Silvanovich, “have come from meeting someone at an event, talking about what we’re each working on, and coming up with better ideas than either of us would have reached alone.”
This does not suggest that being on the part of the autistic spectrum that brings social difficulties leads to black hat hacking. But it does suggest that the ability to socialize and communicate is important for the legitimate researcher. For Silvanovich, the need is to be able to work alone and under her own initiative when necessary, rather than a need to be alone.
There is sometimes an element of amorality to the researcher. While researchers – including Silvanovich – often have a very clear picture of their own moral position, they are slow to condemn opposing views. It is worth remembering the original meaning of the word ‘hacker’: it simply describes a person who investigates the inner workings of an object by deconstructing it. The purpose of the deconstruction is not part of the definition.
In the computer age, the word ‘cracker’ was introduced to differentiate between amoral and immoral hacking, describing the malicious hacker – but it never truly caught on with the public. Hacker became the dominant word, used for both amoral and immoral deconstruction. Slowly, however, the immoral perspective has come to dominate the word. Today, for most people, a hacker is someone who deconstructs code or systems for malicious purposes, and then enacts those malicious purposes.
The moral hacker has become the researcher (though sometimes described as a ‘whitehat’ or ‘ethical’ hacker). Researchers, however, still tend to think of themselves as hackers, and are very aware of the fine line between amoral and immoral hacking. The process is similar for both; only the end use of that process is different.
It is worth noting that researchers will also sometimes write exploits for the vulnerabilities they find. “If the exploitability is obvious, Project Zero won’t develop an exploit,” said Silvanovich. “Sometimes, however, it is necessary to develop the exploit to persuade the vendor that the vulnerability not only exists, but is serious.”
With this ability to find vulnerabilities and exploit them – common to all researchers – we asked if there is ever a temptation to go to the dark side. “Personally, I don’t feel that temptation,” she replied, “but I’m sure a lot [of researchers] do. I think it’s a bit of a spectrum. It’s not always unambiguous that certain work is good or bad. Yes, there are people who do crimes. But there are also people who sell vulnerabilities without necessarily knowing who they are ultimately selling to. There are other people who might sell to their government, which is a patriotic thing to do – but different governments have different levels of freedom. So, I don’t think that’s a clear area, black or white.”
Silvanovich has a very clear view of her own ethical position. “I’ve always wanted to work for vendors to be on the side of trying to secure software,” she said. “I’ve never personally had any temptation to do anything else.”
And yet her own history demonstrates the affinity between whitehat and blackhat hacking. “I did a fun project – a science fair project – when I was 17. I wrote a virus that spread anti-virus software. That is the most terrible idea! Today I have to make a joke of it: ‘that’s the embarrassing thing I did in high school’.”
Silvanovich is reluctant to discuss the ethics of research in detail, saying, “Ethics is something that I don’t have a lot of expertise in – I don’t have a high level of knowledge on this.”
Natalie Silvanovich bucks the trend in independent research. She functions like an independent researcher but is a full-time employee. As an employee, she is released from the need to sell her discoveries, whether to a vendor, a bug bounty program, a vulnerability broker, or even a criminal gang. On the other hand, she cannot simply find more vulnerabilities to increase her income, which is fixed by her salary.
But perhaps above all, she demonstrates that you don’t have to be born a researcher. With a combination of luck, desire and a willingness to respond to opportunities, a traditional education and career path can be directed toward becoming a successful researcher. Before the internship opportunity, Silvanovich had only an interest in computers and how they work. It was her university education that set her on the path that led to Project Zero.
Read more from SecurityWeek’s Hacker Conversations Series Here.
Related: Hacker Conversations: Youssef Sammouda, Bug Bounty Hunter
Related: Hacker Conversations: Inside the Mind of Daniel Kelley, ex-Blackhat
Related: Project Zero: Zoom Platform Missed ASLR Exploit Mitigation
Related: Vulnerabilities Found by Google in 2021 Got Patched on Average in 52 Days
Related: Google Researchers Find Remotely Exploitable Vulnerabilities in iOS
Related: Facebook Pays $60,000 for Vulnerability in Messenger for Android