Why are enterprise software products so badly designed?

posted by cjh, 11 May 2012

The question was asked about a poorly designed enterprise database, surprisingly enough by an old hand who has managed major development programs in the software industry:

Why was this system designed to be like this?

If it’s anything like the enterprise databases I’ve encountered, it was never designed at all. It just grew, with people throwing one thing after another at it and no-one ever trying to understand or optimise the big picture. This is exacerbated by success: as soon as a system, or especially a product, becomes a success, it grows large, which means no-one is willing to understand it all just to add one new feature. In addition, it accrues an entire ecosystem of extensions, add-ons and other kinds of dependencies. No-one can ever know what all the schema dependencies are, so nothing can ever be changed; the only path is to add new things. Any mistakes made in the early data modelling are magnified a thousand times over as new additions work around the inability to change the basic structures. The result is best described by the acronym BBOM: Big Ball Of Mud.

If, as often happens nowadays, the original model is a semi-automatic or simplistic mapping of an object-oriented schema, the problems are much worse, even if that O-O schema was carefully designed and really rather good. In fact, perhaps especially if it was rather good, since that reflects the owner’s belief in the supremacy of the O-O model, and hang the mere storage concerns. Chuck that object model over the wall and let Hibernate deal with it.
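To make the failure mode concrete, here is a minimal sketch of the kind of semi-automatic mapping described above. The class and column names are hypothetical, and this is a toy generator rather than how Hibernate actually works, but it shows the essential move: each class becomes one wide table, with every attribute, including optional ones that are really distinct facts, fused into nullable columns.

```python
from dataclasses import dataclass, fields

@dataclass
class Customer:
    id: int
    name: str
    email: str = None             # optional facts become nullable columns
    phone: str = None
    shipping_address: str = None

def naive_ddl(cls):
    """Map a class to a single wide table, one column per attribute."""
    cols = ", ".join(f.name for f in fields(cls))
    return f"CREATE TABLE {cls.__name__.lower()} ({cols})"

print(naive_ddl(Customer))
# CREATE TABLE customer (id, name, email, phone, shipping_address)
```

Every consumer of that table now depends on its full width, so changing or removing any one attribute risks breaking all of them, which is exactly why the only safe path becomes adding more columns.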

Adding to this woe is the fact that enterprise markets enshrine mediocrity. The product that just installs and works produces no ongoing consulting revenue (from junior “consultants” being charged out at five or ten times their market value as employees). My point? It’s the same consulting companies who get to write the purchasing recommendations, since no CIO will write a seven- or eight-figure cheque without first getting a consultant’s recommendation to protect their career. As a result, enterprises buy the worst product that can, with lots of help and constant ongoing coddling, be coerced into doing most of the job. So the markets are dominated by products like SDM, SAP, and a hundred others equally horrendous. The purchasing process is simply not rational and informed, as my correspondent seems to wish it was.

How do I know this stuff? Because I’ve spent three decades as an architect of three major software products that each mostly maintained their architectural integrity (thanks to my work) over a decade or more, across dozens or hundreds of releases, millions of lines of code, and many sub-products and spin-offs, while being deployed in mission-critical roles on millions of computers. They ran or are running banks, telcos, stock exchanges, aircraft manufacturers, armies, national postal systems… and yet, because the market consistently preferred far inferior products, the products I designed, though successful by many measures, never became major cash cows, nor dominated to the point of excluding others from the industry. Can you tell I’m proud of my work? Yet I never made anything more than salary from all my hard work and the risks I (and my co-founders) took.

I’m not bitter about that… just realistic… I wouldn’t have it any other way. In fact, I’m still somewhat hopeful that software purchasing will grow up and start saying no to this rubbish, but I doubt it’ll happen in my lifetime. It took four to six hundred years for accounting and banking to grow to the professional calibre we see today, and there’s still such skulduggery there. Until software engineers and product managers start showing the markets that a new standard of behaviour is possible, it will not become accepted and expected.

And that is why I believe that fact-based modelling is the key to the software industry entering its adulthood. The problem of complexification can only be solved by learning how to never take a major modelling misstep, and by implementing software so as to minimise the exposure of modelling mistakes, so they can be fixed without rewriting half the world. This requires reducing dependencies on schemas to the minimum required to support each transaction’s semantics, which means no visibility of wide tables or objects: just the individual meanings encoded in those tables.
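The prescription in that last sentence can be sketched in a few lines. This is my own illustrative example, not the author’s method or any particular fact-based tool: instead of one wide customer table, each elementary fact type gets its own narrow relation, so a transaction depends only on the fact types it actually touches. All table and column names here are assumptions.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    -- Wide-table style (for contrast): every reader and writer
    -- depends on all columns at once.
    -- CREATE TABLE customer (id, name, email, phone, address, ...);

    -- Fact-based alternative: one narrow table per elementary fact.
    CREATE TABLE customer_has_name  (customer_id INTEGER PRIMARY KEY,
                                     name  TEXT NOT NULL);
    CREATE TABLE customer_has_email (customer_id INTEGER PRIMARY KEY,
                                     email TEXT NOT NULL);
""")

# A transaction that renames a customer sees only the naming fact;
# adding a new fact type later cannot break it.
db.execute("INSERT INTO customer_has_name VALUES (1, 'Acme Pty Ltd')")
db.execute("UPDATE customer_has_name SET name = ? WHERE customer_id = ?",
           ("Acme Holdings", 1))
print(db.execute("SELECT name FROM customer_has_name"
                 " WHERE customer_id = 1").fetchone()[0])
# Acme Holdings
```

The point of the design is that the renaming transaction’s schema dependency is one two-column table, not the whole customer record, so early modelling mistakes elsewhere stay invisible to it and can be repaired independently.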