Breakfast of Champions
May 17, 2013
Rin Tin Tin was a champion. Today, I’m going to talk about what he ate for breakfast — dog food.
Here at GrammaTech, we've been "eating our own dog food" since Day One. It’s a longstanding tradition in leading software development firms to use your own products in-house. Use the systems you build for others and you’ll discover first-hand the strengths and weaknesses of those systems. Direct experience with a product also leads to better project planning. Instead of developing technology for its own sake, you better understand the needs of the customer.
Having skin in the game motivates you to make improvements that matter. Well, that's the theory, anyway. In this post, I'll tell you about some of our experiences with dogfooding. As with many transitions from theory to practice, it hasn't all been wonderful.
Eating our Own
Like every new engineer at GrammaTech, when Junghee, a recent hire, started, one of her first tasks was to create an account on our "Dogfood Hub." We call the web-based interface to CodeSonar (GrammaTech's static analysis tool) a hub. Hubs contain warnings about security vulnerabilities and other software-related headaches that CodeSonar finds in the C, C++, and Java code it analyzes.
For the Dogfood Hub, that input is CodeSonar source code — in other words, the very same software that finds the vulnerabilities and drives the hub.
Initially, Junghee browsed the Dogfood Hub and saw that CodeSonar isn't perfect. I know you want the dirt, so here it is: she was looking at 23K active warnings in 6.5 million lines of code. That's a lot of warnings. It didn't have to be that way. Mostly, we did it to ourselves. We cranked up the settings to the mission-critical levels our customers typically require for their products. And we analyzed third-party open-source code and some rather deviant generated code, as well as our own. Junghee also knew that the warnings weren't her fault because she hadn't done any programming yet; responsible engineers had already been assigned to deal with them.
Let me digress for a moment and tell you about the way we assign warnings to engineers. Each CodeSonar warning includes both the name of the file where the problem occurs and an “owner” attribute. Generally, people assign warnings to owners based on staff availability, familiarity and skills. CodeSonar also supports user-supplied plug-ins that may apply newly discovered warnings to enterprise-specific tasks, such as report generation.
At GrammaTech, where we use Subversion for revision control, our plug-in gets the file name for each warning, sets the owner attribute to whoever modified the file last and, after the analysis is finished, sends everyone their new assignments along with links to corresponding reports on the hub. Although the last person to change the file might not be the best person to fix the problem, they usually know who would be best and pass it on. Managers get the full list of initial assignments.
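The assignment step described above can be sketched in a few lines. This is not CodeSonar's actual plug-in API; the warning fields and the `svn log` lookup are simplified illustrations of the idea, and the demo injects a fake repository history so it runs without a live Subversion checkout:

```python
# Sketch of the warning-assignment logic: for each new warning, look up
# the last committer to its file and make that person the owner.
# Warning fields and the plug-in interface are invented for illustration.
import subprocess
from collections import defaultdict

def svn_last_author(path):
    """Return the author of the most recent commit touching `path`."""
    out = subprocess.check_output(
        ["svn", "log", "-l", "1", "-q", path], text=True)
    # With -q, the second line looks like: "r123 | author | 2013-05-17 ..."
    return out.splitlines()[1].split("|")[1].strip()

def assign_warnings(warnings, last_author=svn_last_author):
    """Set each warning's owner and group the assignments per person."""
    assignments = defaultdict(list)
    for w in warnings:
        owner = last_author(w["file"])
        w["owner"] = owner
        assignments[owner].append(w)
    return assignments

# Demo with a stubbed repository history instead of a real checkout:
history = {"parse.c": "junghee", "hub.c": "alice"}
new_warnings = [
    {"id": 101, "file": "parse.c"},
    {"id": 102, "file": "hub.c"},
    {"id": 103, "file": "parse.c"},
]
by_owner = assign_warnings(new_warnings, last_author=history.get)
# by_owner["junghee"] now holds warnings 101 and 103
```

From here, a real plug-in would email each owner their list along with hub links, and send managers the full picture.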
Imagine Junghee's surprise when she was assigned hundreds of warnings in her first week on the job. How could a one-line change cause so much trouble? In this case, the offending code was in a header file that landed at the end of many execution paths. A little typo can generate a lot of noise. But you knew that. Junghee saw and fixed the problem quickly and the warnings disappeared in the next analysis.
Running with the Dogs
We use the open-source Buildbot system to keep the Dogfood Hub running and kick off a nightly analysis. This analysis runs under our in-house test control system, which makes dogfooding just another test in our suite. The system challenges the product while the analysis is underway and beyond, logging in to the hub, browsing results, conducting searches, and things like that. It also mines logs for anomalies and reports levels of success and failure.
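The log-mining step might look something like the sketch below. The marker patterns and log lines are illustrative, not CodeSonar's actual output format:

```python
# Hedged sketch of mining analysis logs for anomalies: scan each line
# for known trouble markers and tally them per category.
import re
from collections import Counter

# Illustrative patterns; a real deployment would tune these to the
# tool's actual log vocabulary.
ANOMALY_PATTERNS = {
    "error": re.compile(r"\bERROR\b"),
    "crash": re.compile(r"Traceback|Segmentation fault"),
    "timeout": re.compile(r"timed? ?out", re.IGNORECASE),
}

def mine_log(lines):
    """Count anomaly markers per category across log lines."""
    counts = Counter()
    for line in lines:
        for category, pattern in ANOMALY_PATTERNS.items():
            if pattern.search(line):
                counts[category] += 1
    return counts

sample_log = [
    "INFO: analysis started",
    "ERROR: query exceeded time budget",
    "WARNING: request timed out, retrying",
    "INFO: analysis finished",
]
report = mine_log(sample_log)
# report["error"] == 1, report["timeout"] == 1, report["crash"] == 0
```

A nightly summary built from counts like these is what lets a test harness report pass/fail levels instead of just dumping raw logs.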
Dog food test results have helped us find and fix all kinds of problems pre-ship — slow queries, upgrade failures, broken plug-in support, undesirable warning gains and losses, excessive disk usage, incompatible saved searches, unintuitive reports, and bad metrics. In addition, choosing different machines for the hub and analyzer has exposed communication bottlenecks and platform incompatibilities.
Our dog food setup forces us to experience the long-term care and feeding of an installation. This includes upgrading the hub to a new release, taking continuous backups of the hub database, restoring the database from backup, and moving the hub after hardware failures. There’s nothing like a treasured database that’s been growing for years to help you get religion about the importance of these tasks and the role of both functionality and documentation in making them go smoothly.
Try it – you'll like it?
If you’re thinking about running your own dog food experiment — or already have one in place — here are some things we did that you might be able to adapt for yourself.
We actually have two dog food hubs. Both take the current development version of CodeSonar as input. The hub Junghee uses is a fielded version, so she's on a known good application that finds problems in today’s code. The second one — we call it the "Bleeding Edge Hub" — is a development version. If you decide to do something like this, don’t say I didn't warn you. The database is large, both the engine and inputs are volatile, and this hub can be a maintenance nightmare. The program crashes, processes are left behind and the disk fills with logs, stack traces and, to continue the analogy, other “crap”, however valuable.
For the fielded version, we schedule the analysis to run at night so interactive users have full use of system resources for browsing results and visualizing code throughout the day. The analysis on the Bleeding Edge Hub runs whenever the code base changes; in other words, continuously.
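In Buildbot terms, the two schedules above map naturally onto a nightly scheduler and a change-triggered one. This fragment is a sketch against Buildbot's plugins API; the builder names, branch, and times are invented, not our actual configuration:

```python
# Sketch of a master.cfg fragment pairing a nightly analysis (fielded
# hub) with a per-commit analysis (Bleeding Edge hub). Names are
# illustrative; `c` is the standard BuildmasterConfig dictionary.
from buildbot.plugins import schedulers, util

c['schedulers'] = [
    # Fielded hub: analyze once a night so daytime users get full
    # system resources for browsing and visualization.
    schedulers.Nightly(
        name="dogfood-nightly",
        builderNames=["dogfood-analysis"],
        hour=2, minute=0),
    # Bleeding Edge hub: re-analyze whenever the code base changes.
    schedulers.SingleBranchScheduler(
        name="bleeding-edge",
        change_filter=util.ChangeFilter(branch="trunk"),
        treeStableTimer=300,  # let a burst of related commits settle
        builderNames=["bleeding-edge-analysis"]),
]
```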
Besides CodeSonar itself, we run CodeSonar on other projects, such as prototypes and research deliverables. These projects have different goals, so they exercise different product features.
While we’re developing a release, we’re also developing the control system that runs our automated tests, including dog food. This can lead to incompatibilities. For example, if the control system sets a configuration value that’s in development but not fielded, without intervention the test will fail with an unsupported value error. Since we always want to be using the latest control system, we just adjust the system to handle the legacy usage.
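One way to handle that legacy-usage adjustment is a small compatibility shim: before the control system hands a configuration to the fielded release, it drops or maps values that release doesn't understand. The option names and value sets below are made up for illustration:

```python
# Sketch of a config compatibility shim. Option names, allowed values,
# and the fallback mapping are hypothetical examples.
FIELDED_SUPPORTED = {
    "analysis_depth": {"shallow", "normal", "deep"},
    "report_format": {"html", "csv"},
}
FALLBACKS = {
    # A development-only value mapped to its closest fielded equivalent.
    ("analysis_depth", "exhaustive"): "deep",
}

def adapt_config(config):
    """Return a copy of `config` safe to pass to the fielded release."""
    adapted = {}
    for key, value in config.items():
        allowed = FIELDED_SUPPORTED.get(key)
        if allowed is None or value in allowed:
            adapted[key] = value  # unknown key or supported value: keep
        elif (key, value) in FALLBACKS:
            adapted[key] = FALLBACKS[(key, value)]
        # otherwise drop the setting and let the release use its default
    return adapted

dev_config = {"analysis_depth": "exhaustive", "report_format": "html"}
# adapt_config(dev_config) keeps "html" and maps "exhaustive" to "deep"
```

The point is that the newest control system stays in charge and absorbs the version skew, rather than forking the test suite per release.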
Kibbles 'n Bits
As you can see, dogfooding is a mixed bag, both chewy and crunchy. Despite the hiccups, we like it. The Dogfood Hub has been an inspiring use case and a great test of product quality, robustness, performance, and scalability.
Plus, there's something we didn't expect. The first analysis on our hub dates back to our beginnings, which makes it something of a historical document, chronicling GrammaTech's growth.