Mark Rasch, who created the Computer Crime Unit at the United States Department of Justice, has an essay, “Conceal and Fail to Report — The Uber CSO Indictment.”
The case is causing great consternation in the InfoSec community, partly because it is the first instance in which a CSO or CISO has been held personally responsible (other than by firing) for a data breach response, and the first time that criminal sanctions of any kind have been sought against the corporate victim of a data breach for its handling (or mishandling) of the breach itself.
Mark spends a lot of energy explaining the law of the case and some of its subtleties, for example: “It’s also clear that Uber and Sullivan did not want the FTC to know about the 2017 breach. But I’m not sure that, as a matter of law, this constitutes ‘misrepresenting, concealing or falsifying’ materials actually produced to the FTC.” As someone who does expert witness work now and again, I’ve learned to recognize skilled analysis, and this is skilled analysis, the kind you’d want on your side, especially if you’re among those feeling that consternation.
I have a few small things to add, and one weighty one.
First, Joe Sullivan is innocent until proven guilty. There is no need to pre-judge him or the case, and many opportunities for mistakes in doing so. The indictment is written to make the case against him and to portray him in the worst possible light. Much has been written about how being prosecuted is an emotionally wretched experience, and how even innocent people will plead guilty to reduce uncertainty when prosecutors ask for the maximum possible sentence for those who exercise their right to a trial by jury. If you’re not familiar with that, searches such as “why do innocent people plead guilty” may be eye-opening.
Second, CSOs are obviously concerned about what this means for them. My advice is to get your lawyer’s advice in writing. “My lawyer told me this was ok” is a pretty good defense. Keep a copy in your personal safe.
Third, I’ve long seen breach disclosure as a way to learn from our mistakes, and I’ve been struggling with what this case means for learning from breach disclosures. In reading Mark’s essay, I think it’s a net negative. We’re going to see substantially more caution from lawyers. That might mean:
- Less specific language used (if that’s even possible).
- More “something may have happened, and we are reporting this to you out of an abundance of caution” sorts of language.
- More specific and factual statements, bookended by “we have a good faith belief based on what we know today.”
I’m glad that my research has led me towards near miss analysis. The opportunity to demonstrate constructive engagement was important before. Mark describes Uber’s response as “less than ideal,” and speaking more generally, many responses to events which turn out to be somewhere on a spectrum from nuisance to incident to breach involve decisions that will be judged harshly with 20/20 hindsight. As a society, we would benefit from ways to demonstrate constructive engagement, to help us understand the problems that are happening, and to draw more lessons from them.
I don’t claim that near miss analysis is the only frame for that, but in light of this prosecution, we should be thinking about ways to give those who want to do the right thing incentives to do it.