A new twist on testing Ruby

posted by cjh, 22 June 2012

How do you know your tests are testing what you think they’re testing? That is, they pass when your code is right, but will they always fail when your code is wrong? Mutation testing is the only way to find out.
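
To make that concrete, here’s a tiny, runnable sketch (the method and the ages are invented for illustration) of the kind of hole mutation testing exposes: a test that stays green for the correct code, but would also stay green if an operator were quietly flipped.

    # A boundary check we think we've tested.
    def adult?(age)
      age >= 18            # a mutant might flip >= to >
    end

    # These two checks pass for the original code *and* for the >-mutant,
    # because neither of them exercises the boundary.
    raise 'expected adult' unless adult?(30)
    raise 'expected minor' if adult?(5)

    # This is the check a mutant would force you to add: it kills the
    # >-mutant, since adult?(18) becomes false once >= is flipped to >.
    raise 'expected adult at the boundary' unless adult?(18)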

The estimable Ryan Davis, always a master of the art of the improbable (and often impenetrable!), gave the Ruby world Heckle. By hooking into the internals of Ruby, it could grab a snapshot of the compiled code, in the form of an Abstract Syntax Tree (or parse tree), modify it, and re-run the tests to make sure that something breaks.
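
To give a feel for what that meant in practice, here’s a rough, plain-Ruby sketch of the trick; the nested-array “sexp” below is only an approximation, not Heckle’s real node format.

    # Roughly `age >= 18`, written as a nested-array parse tree.
    original = [:call, [:lvar, :age], :>=, [:lit, 18]]

    # Operators we know how to flip into a plausible mutant.
    FLIPPED = { :>= => :>, :> => :>=, :== => :'!=' }

    def mutate(sexp)
      mutant = sexp.dup
      mutant[2] = FLIPPED.fetch(mutant[2], mutant[2])   # swap the operator if we can
      mutant
    end

    p mutate(original)   # => [:call, [:lvar, :age], :>, [:lit, 18]]

Heckle would then turn the mutant tree back into running code and re-run your tests; if they still passed, they never really depended on that operator.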

Unfortunately for us, Ruby moved on, and no longer generates an AST. Instead it generates machine code for a virtual machine (VM). So the last working version of Heckle was for Ruby 1.8.7. If you’re still using that, perhaps you haven’t heard of the huge pointy stone things the Egyptians made recently.

Enter Twister. Install a patched version of Ruby 1.9, and the kinds of code mutation performed by Heckle become a core capability of the Ruby language. A single kernel method called set_parse_func provides a window into the parser, the part of Ruby that figures out what your code means. Not only can set_parse_func see things it probably shouldn’t, it can tweak them to make Ruby think it saw something different.

But wait, do I really want to download and install Ruby with patches? Well, relax: it’s almost as easy as installing any other Ruby version, using RVM. Just download the patch, save it as twister.patch, and say

rvm install --head --patch twister.patch%1 1.9.3

and you’ll have a not-quite-normal version of Ruby 1.9.3 installed and ready to twist. A word of caution though: make sure the path to your copy of ‘twister.patch’ doesn’t have any spaces in it; spaces seem to confuse RVM. Check that it says “Applying patch ‘twister.patch’ (located at…”

Now, you want RSpec. Well, I’m still working on that, but you can track my work in progress on GitHub. The --twist option is the one you want to try.

Why are enterprise software products so badly designed?

posted by cjh, 11 May 2012

The question was asked about a poorly-designed enterprise database, surprisingly enough, by an old hand who’s managed major development programs in the software industry:

Why was this system designed to be like this?

If it’s anything like the enterprise databases I’ve encountered, it was never designed at all. It just grew by people throwing one thing after another at it, without anyone ever trying to understand or optimise the big picture. This is exacerbated by success; as soon as a system or especially a product becomes a success, it grows large, which means no-one is willing to understand it all just to add one new feature. In addition, it accrues an entire ecosystem of extensions, add-ons and other kinds of dependencies. No-one can ever know what all the schema dependencies are so nothing can ever be changed - the only path is to add new things. Any mistakes made in the early data modelling are magnified a thousand times as new additions are made that work around the inability to change the basic structures. The result is best described by the acronym BBOM - Big Ball Of Mud.

If, as often happens nowadays, the original model is a semi-automatic or simplistic mapping of an object-oriented schema, the problems are much worse, even if that O-O schema was carefully designed and really rather good. In fact, perhaps especially if it was rather good, since that reflects the owner’s belief in the supremacy of the O-O model, and hang the mere storage concerns. Chuck that object model over the wall and let Hibernate deal with it.

Adding to this woe is the fact that enterprise markets enshrine mediocrity. The product that just installs and works produces no ongoing consulting revenue (from junior “consultants” being charged out at five or ten times their market value as employees). My point? It’s the same consulting companies who get to write the purchasing recommendations, since no CIO will write a seven- or eight-figure cheque without first getting a consultant’s recommendation to protect their career. As a result, enterprises buy the worst product that can, with lots of help and constant ongoing coddling, be coerced into doing most of the job. So the markets are dominated by products like SDM, SAP, and a hundred others equally horrendous. The purchasing process is simply not rational and informed, as my correspondent seems to wish it was.

How do I know this stuff? Because I’ve spent three decades as an architect of three major software products that each mostly maintained their architectural integrity (thanks to my work) over a decade or more, dozens or hundreds of releases, millions of lines of code, many sub-products and spin-offs, being deployed in mission-critical roles on millions of computers. They ran or are running banks, telcos, stock exchanges, aircraft manufacturers, armies, national postal systems… and yet, because the market consistently preferred far inferior products, the products I designed, though successful by many measures, never became major cash cows and dominated to the point of excluding others from the industry. Can you tell I’m proud of my work? Yet I never made anything more than salary from all my hard work and the risks I (and my co-founders) took.

I’m not bitter about that… just realistic… I wouldn’t have it any other way. In fact, I’m still somewhat hopeful that software purchasing will grow up and start saying no to this rubbish, but I doubt it’ll happen in my lifetime. It took four to six hundred years for accounting and banking to grow to the professional calibre we see today, and there’s still such skulduggery there. Until software engineers and product managers start showing the markets that a new standard of behaviour is possible, it will not become accepted and expected.

And that is why I believe that fact-based modelling is the key to the software industry entering its adulthood. The problem of complexification can only be solved by learning how to never take a major modelling misstep, and by implementing software so as to minimise the exposure of modelling mistakes - so they can be fixed without rewriting half the world. This requires reducing dependencies on schemas to the minimum required to support each transaction’s semantics, which means no visibility of wide tables or objects, just the individual meanings encoded in those tables.
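
As a rough illustration of that dependency point (the names and columns here are invented, and this sketches only the principle, not fact-based modelling itself), compare a transaction that is handed a whole wide row with one that declares just the facts it consumes:

    require 'date'

    # Fragile: this transaction "sees" the whole wide customer row, so every
    # change to that table's shape is a potential break, even though only two
    # columns are actually used.
    def overdue_from_wide_row(customer_row)
      customer_row[:balance] > 0 && customer_row[:due_date] < Date.today
    end

    # Narrower: the transaction depends only on the individual facts it needs,
    # however the storage happens to arrange them.
    def overdue?(balance, due_date)
      balance > 0 && due_date < Date.today
    end

    puts overdue?(250, Date.new(2012, 3, 1))   # => true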

Configuring a modern Linksys ADSL2 router

posted by cjh, 25 March 2012

It took me a number of hours to get a basic ADSL2+ configuration working with a new Linksys X3000, which is a higher-end domestic ADSL2+ router with wireless, Ethernet (of course) and USB storage. This post explains why, and what the solution was. I hope it helps someone.

But before I could get to the particular challenge of the X3000, I had to get it to connect to ADSL at all. This was also a challenge, even after I contacted TPG to find out which DSL mode to use and which of the several versions of my username and password to enter.

Ok, so once the ADSL modem connects, the key is this. Until the wireless security and password have been configured, almost no network traffic will be routed between the ADSL and internal networks (wireless, Ethernet). This is true even if you aren’t using wireless - which is pretty likely, since the X3000 comes with wireless configuration disabled.

Before the wireless is secured, the X3000 diverts all outbound Internet requests to a web page with a bold warning. So that the request can be made at all, however, it does allow DNS requests through. All other traffic is blocked, so if you configure ADSL and Ethernet and then test using the “ping” command, you get your IP addresses but no actual ping responses.
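
If you’d rather see that state directly than infer it from a silent ping, a quick resolve-then-connect check shows it plainly; this is just a sketch, and example.com stands in for whatever external site you normally test with.

    require 'resolv'
    require 'socket'
    require 'timeout'

    host = 'example.com'
    ip = Resolv.getaddress(host)        # succeeds: the router lets DNS through
    puts "#{host} resolves to #{ip}"

    begin
      Timeout.timeout(5) { TCPSocket.new(ip, 80).close }
      puts 'connected: traffic is being routed'
    rescue Timeout::Error, SystemCallError => e
      puts "no route yet: #{e.class}"   # what you see before the wireless is secured
    end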

But why didn’t I see the security diversion and the big warning page? Here’s where Chrome came in.

By default, Chrome uses a web service instead of DNS to help with auto-complete. The trouble with this is that it prevents access to the Internet through the modem in its security-diversion mode. The DNS-replacement web service is blocked, so your browser never finds out where to send requests. Note that this affects you even though you’re still using Ethernet and haven’t even looked at the wireless yet. So “ping” doesn’t work, and neither does web browsing. DNS works, however, which causes some hair loss, because you know that most things are working OK. You never see the security diversion page because there’s never a request to divert.

So then you get the IP address for google.com, or whatever website you usually use to test connectivity, and enter that into the Chrome address bar. This should answer you immediately, no?

Weeelll, no. Chrome bites again.

Chrome also has a default mode to foil some phishing attacks. At least I think it was the anti-phishing measures. Either way, Chrome doesn’t want to render pages that have numeric IP addresses until after it has looked up the address in its list of known-bad sites, or roughly 30 seconds have elapsed. So from the first time you try to visit the new internal IP address you set for your local network (assuming you changed it from 192.168.1.*, as I did really early on), it takes at least 30 seconds to fetch every configuration URL from the router. And guess what? The security redirect page has a couple of dozen items in it (images etc), so you basically never get to see the whole thing.

You can disable both these settings in the Chrome advanced preferences. Or you can switch to Firefox for configuration, which is what I did. Suddenly the router configuration pages started rendering quickly and completely, and when I got to the security diversion page, I was able to figure out why I’d been having troubles all along.

Anyhow, there it is: the result of a number of hours of my life wasted because some helpful software folk thought they’d make my life safer. And I suppose in a way they did; I might have been outside in the sunshine instead, and got hit by a bus.