Twenty-five years ago I published a set of code review guidelines that I had crafted while working for a bank. I released them (thanks, SteveMac!) to get feedback and advice, because back then there was exceptionally little practical advice about what we now call AppSec.
Looking back at what’s there: it’s explicitly a review document for a firewall group, taking code that’s ‘thrown over a wall’ to be run and operated by that group. The document includes a mix of design advice, coding requirements, and operational needs, along with some admin bits like the rule that the least positive review was the one we recorded.
There’s some goodness in there: avoiding risky system calls, fuzzing, using lint and compiler warnings. Static analysis is just lint- and compiler-based — the first tools like RATS were not yet available. I had built, or was starting to build, a tool too embarrassing to release — it was a large shell script that used ldd and grep to find calls to dangerous functions. In hindsight, it was a small step forward. Competition in commercial tooling from companies like Coverity, Ounce Labs, or Fortify was a good decade away, and memory safety in usable languages was not even a hint of an idea.
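For flavor, here is a rough sketch of what that kind of grep-based "analyzer" might have looked like — this is an illustrative reconstruction, not the original script; the function list and file layout are my assumptions:

```shell
#!/bin/sh
# Toy grep-based scanner: flag calls to historically dangerous C
# functions in a source tree. Crude and full of false positives,
# which is roughly the state of the art it represents.
DANGEROUS='gets|strcpy|strcat|sprintf|scanf|system|popen'
grep -rnE "\b(${DANGEROUS})[[:space:]]*\(" --include='*.c' --include='*.h' "${1:-.}"
```

A real binary-side pass would have leaned on ldd and nm to list a program's linked libraries and imported symbols, then grepped those for the same names.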
There was no concept that what we were doing was modeling threats, and no attempt to standardize how we arrived at an understanding of the code. The idea of paying a bounty on bugs was not unheard of (Netscape had a bounty program), but the idea that a bank would do so… I don’t think it ever came up, even over beer.
Also, the laptops were… clunkier.
If you’ve been around for a while, what else is brand new since you joined the field?