[This is mirrored on Intelink-U]
In a previous post, I traced the various policy documents that describe the certification and accreditation (C&A) processes for the Department of Defense, ultimately arriving back at OMB Circular A-130. In summary, “systems” need accreditation, while “applications” do not, and the distinction (per A-130) turns on the highly subjective question of whether it is a “major” application.
In tracing this definitional tangle, I unwittingly provided a roadmap for how to get your ~~system~~ “application” on a DoD network without a full-blown Approval To Operate (ATO). I was not trying to provide an easy out for getting operational without being accredited … although the method is a well-trodden path with a lot of history. I was trying to show that our C&A policy is at-least-slightly broken, and we generally don’t even understand it ourselves.
Again, to summarize, this is what not to do: since network enclaves are “systems” (which do require accreditation), you find a network that is already accredited. You then host your ~~system~~ “application” on that enclave, which generally means annotating the presence of your “application” in the security documentation for that network enclave. In many cases, this also means going through an “Approval To Connect” (ATC) process, which is generally defined by the owner/operator of that network enclave. This is easier than a full ATO. If anyone ever asks questions such as, “Do you have an ATO?” or “Who is your DAA (Designated Accrediting Authority)?”, you gleefully point to the Enclave ATO and the Enclave DAA, and say, “Yep, right there.”
A friend of mine (Hi, Alex) characterized this as “weasely.” Okay. Perhaps.
When I became the IT Operations guy for the Office of the Director for Program Analysis & Evaluation – now Cost Analysis & Program Evaluation (CAPE) – I inherited many such ~~systems~~ “applications”. I was responsible for hosting all the web-based “applications” used to collect the long-range budget proposals for the DoD (the POM), the data-warehousing and business intelligence tools used to analyze that data, and collection and reporting tools for ancillary reporting data. None of these ~~systems~~ “applications” had their own ATO – but the network enclave did. In my opinion, they all should have had one, but it was impossible to make that happen. As the hosting provider, I couldn’t write the SSAA (System Security Authorization Agreement) for the ~~system~~ “application” owners (because I didn’t have the information necessary), and I couldn’t simply disconnect the application servers without hosing my own customers.
Later, when I was responsible for something called the “DoD Storefront” project, I tried hard to get my system accredited. We were writing the SSAA, registering in VMS, eMASS… I wrote the designation memo to get my SES designated as the DAA, signed him up for DAA training, etc. The designation memo was to be signed by my boss’s boss – the DoD Deputy CIO, Mr. Dave Wennergren. I never got that far.
I got stopped by Wennergren’s deputy, who asked, “Why do you need an ATO? This isn’t really a ‘system’. I think it’s just an application. Go talk to the Director of the DIAP(*) and see what she thinks.” (* Defense-Wide Information Assurance Program)
So I march off to talk to the Director of the DIAP. She asks me a few questions. One of the first questions she asks is: “Are you buying any servers?”
I say, “Of course not! Why, in the name of history, would I do that?!? That’s datacenter stuff. I mean, there will be servers – but I’m not gonna buy them. If I bought servers, where would I put them? I just want to be hosted somewhere. Datacenter guys buy the servers.”
This puzzles her. She thinks for a moment and says, “You sound like you know how the world actually works.” This puzzles me… I’m not sure what to say about that: “I don’t know about that, ma’am. I’m just trying to get my system on the net.”
She asks, “Could you go talk to Eustace King? He works for me. He’s responsible for re-writing the instruction on DIACAP. I’m a little concerned he doesn’t know how the world really works.”
Which is how I ended up having a two-hour conversation with the author/editor of DIACAP – which I’ve partly described elsewhere. Long story short, even though I really believed I needed an ATO, the author of DoDI 8510.01 told me Storefront was “just an application.” (With seven servers, its own firewalls, and accessibility to the entire Internet.) Suffice it to say, near the end of the conversation, I said something like, “You know, the world keeps on changing…”, and Eustace sighed, “Yeah, and I wish it wouldn’t.”
Both the conversation with Eustace and the text of OMB Circular A-130 led me to believe that many of our C&A concepts are rooted in an outdated view of how IT systems work. We expect “small systems” to be on a “LAN” and therefore low risk – and so the “enclave” will protect them. In the age of cloud computing and the web, this view is almost totally nonsensical. The problems with this approach go both ways – we over-protect some things, and under-protect others. In a “LAN mentality”, protecting the network boundary is really important, because the LAN is assumed “soft on the inside”, and we (reasonably) assume that if bad actors compromise one system, the whole network is compromised. But in the age of the “cloudy web”, we actually know how to isolate systems with significant efficacy. A well-designed DMZ isolates systems from the business network as well as from each other. On the other hand, being hosted on a secure “enclave” is meaningless if the firewall is configured to allow access from the entire enterprise, or worse, the entire Internet.
Likewise, our C&A processes don’t seem to have much provision for division of responsibility … an infrastructure service provider can be responsible for the security of the infrastructure, and the application service provider responsible for the application, but we require the ATO to address end-to-end risk. Hence the question, “Are you buying any servers?” In the old days, if you were fielding a system, you would be buying servers… maybe routers, switches and firewalls too. Those things are commoditized today – the datacenter operator does not want your non-standard server that doesn’t fit in his racks or with his management software, and as an app provider, you don’t want to worry about that cruft. I do not want to ever know or care what kind of physical server my apps run on. (I’ve done that. It sucks.)
In some future post, I’ll try to lay out a few of my ideas of how the policy should work.