By that time it was clear that the threat to hosted applications had moved up the stack, and that the center of gravity had shifted toward compromising the web applications rather than the hosting infrastructure. This meant that our applications, for which essentially no serious security effort had been made, finally had to receive some attention. Our development teams were not tuned in to the security landscape and thus were paying scant attention to web application security. Because our home-grown applications' exposure to the Internet was mostly limited to simple, student-facing functionality such as course registration and grading, the lack of attention was perceived as appropriate by all but a few of us infrastructure and security geeks.
The catalyst for me was the planned deployment of functionality that permitted student aid and payroll-like data to be modified from an Internet-facing web application, including the ability to maintain direct deposit accounts (i.e., banking information). The development team performed a perfunctory risk analysis that recommended no significant changes to the existing web application, but instead suggested that implementing Oracle TDE (database encryption) would somehow mitigate the risk.
I found out what was being planned when my team was asked to implement TDE and our security team asked the obvious question: why TDE, and why now? The answer – that application security was difficult, would delay the implementation, and was therefore out of scope, and that TDE was all we needed – was unsettling, to say the least.
A couple of the security team members and I were shocked. At the time, the web applications were protected by insignificant security controls: no identity protection whatsoever, 1995-appropriate authentication and authorization, and trivial database security. The web apps were written with no structure or standards for application development security, and the teams were mostly unaware of common attack vectors such as SQL injection and XSS, or even of the work done by OWASP. Yet our dev teams were intending to allow that application to modify direct deposit account information. And of course, if the web application was not secure, TDE would not add any security either.
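To make that last point concrete, here is a minimal sketch (mine, not the original application, which was on the Java/J2EE/Oracle stack; table and names are hypothetical) of why at-rest encryption like TDE is irrelevant to an injection flaw: the database decrypts data for any query the application sends, malicious or not.

```python
import sqlite3

# Toy database standing in for the payroll/direct-deposit data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (student TEXT, routing TEXT)")
conn.execute("INSERT INTO accounts VALUES ('alice', '111000025')")
conn.execute("INSERT INTO accounts VALUES ('bob',   '222000046')")

def lookup_vulnerable(student):
    # String concatenation: attacker-controlled input becomes SQL.
    sql = "SELECT routing FROM accounts WHERE student = '" + student + "'"
    return conn.execute(sql).fetchall()

def lookup_parameterized(student):
    # Bound parameter: input stays data, never becomes SQL.
    return conn.execute(
        "SELECT routing FROM accounts WHERE student = ?", (student,)
    ).fetchall()

payload = "x' OR '1'='1"              # classic injection payload
all_rows = lookup_vulnerable(payload)     # dumps every account row
no_rows = lookup_parameterized(payload)   # no student by that name
```

Even if the `accounts` table were encrypted on disk, `lookup_vulnerable` hands every decrypted row to the attacker; only fixing the application layer (parameterized queries, input handling) closes the hole.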
As I was in charge of the team that configured the load balancers, and a load balancer change was necessary for the deployed app to be visible on the Internet, I decided to block the deployment and made it clear that I would stand firm until the application had been appropriately secured. I figured that either our CIO would tell me in writing to deploy the app, or I'd get sent to HR for insubordination, or I'd offer my resignation, or I'd bring Internal Audit into the loop and let them take the heat.
The security team and I laid out a minimum set of requirements for the application and hosting infrastructure and formally presented them to leadership. The application was eventually re-written and deployed – two years later, after most of the recommendations had been implemented. The fallout from our action was that we started a semi-serious app-sec program, which resulted in app-sec training for all developers and the eventual implementation of basic application development security practices.
In retrospect, the disconnects were:
- we had security and infrastructure teams that were very well acquainted with state-of-the-art threats and mitigations but unaware of our app dev practices
- an application development team that was blissfully unaware of Internet-borne threats and web-app security practices
- I ran the infrastructure (server, network, database) teams, and we worked closely with the security team, but both groups were somewhat isolated from application development. They didn't learn from us.
- we in security and infrastructure had been detecting and cleaning up after web-based compromises for a decade – something to which the application developers had no exposure.
- dev teams that were caught up in vendor propaganda asserting that the Java/J2EE/Oracle stack was somehow inherently secure, such that a sane SDLC was not necessary
Part of the disconnect was leadership's reluctance to share security-related information, particularly information related to successful compromises of our colleges, other EDUs, and other providers.
Part 6 – Building Out Disaster Recovery
*** This is a Security Bloggers Network syndicated blog from Last In – First Out authored by Michael Janke. Read the original post at: http://feedproxy.google.com/~r/LastInFirstOut/~3/WZlJqmzY9MY/thirty-four-years-addressing.html