As a software developer I would answer ‘yes’, primarily because it makes my life easier – I suspect that all professional developers would agree. The traditional argument in favour of not being a code cowboy is presented very well in this article. Yet over the last few days, as I transition from code monkey to the man with the business hat on, I’ve struggled to make a compelling case for this being true in all industries.
I’m going to use the word ‘maintainability’ in a very general sense, partly because otherwise I’d have to scour the web looking for the definition most aligned with my own position, but also because I believe the definition drives the justification. Experience and pragmatism have made me aware that all software is ultimately maintainable, in line with the infinite monkey theorem – and therein lies the root of my problem.
A few days ago I was asked by one of my fellow directors whether there was a solid case for using SSIS as an ETL tool instead of a combination of .NET data access code and T-SQL. We’ve recently partnered with another of his companies to produce some BI extensions to their software, although as we’re 1 day away from shipping it was more of a theoretical question. His business case for not using SSIS was that his other company didn’t currently use it, so the lack of in-house skills, in tandem with the fact that it counts as an additional deployment platform, meant that providing support would incur greater overhead and risk. Of course I trotted out all the arguments about it being more maintainable, scalable, reliable, etc. His response: “…that’s all well and good for the developers, but what are the benefits to the customer?”.
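To make the trade-off concrete, the hand-rolled alternative we were weighing SSIS against is essentially an extract–transform–load pipeline written by hand: parse a flat file, apply transformations in code, and load via parameterised, set-based SQL. Below is a minimal sketch of that shape – Python with an in-memory SQLite database standing in for the actual .NET/SQL Server stack, and with the file layout, table and column names all invented for illustration.

```python
import csv
import io
import sqlite3

# Invented flat-file feed so the sketch is self-contained; a real pipeline
# would read this from disk or an upstream system.
RAW_FEED = """customer_id,booking_date,fare
C001,2024-01-03,129.50
C002,2024-01-03,89.00
C002,2024-01-04,240.00
"""

def extract(feed):
    """Extract: parse the flat file into one dict per row."""
    return list(csv.DictReader(io.StringIO(feed)))

def transform(rows):
    """Transform: coerce types and add a trivial derived column."""
    out = []
    for r in rows:
        fare = float(r["fare"])
        out.append((r["customer_id"], r["booking_date"], fare,
                    int(fare > 100.0)))  # flag high-value bookings
    return out

def load(conn, rows):
    """Load: set-based insert via parameterised SQL, one transaction."""
    conn.execute("""CREATE TABLE IF NOT EXISTS bookings (
                        customer_id TEXT, booking_date TEXT,
                        fare REAL, high_value INTEGER)""")
    conn.executemany("INSERT INTO bookings VALUES (?, ?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(conn, transform(extract(RAW_FEED)))
total = conn.execute("SELECT COUNT(*), SUM(fare) FROM bookings").fetchone()
print(total)  # (3, 458.5)
```

Every line of that – parsing, type handling, logging, restartability, parallelism – is code you own and support, whereas SSIS gives you the equivalent pipeline declaratively. That, in miniature, is the maintainability argument I made; his counter-question stands regardless.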
In theory one can cite agility as a customer benefit, since it means that new functionality can be rolled out far more quickly than for badly engineered software. But then I considered a lot of the work I’ve done over the past few years, and it seems that the companies where poorly engineered software was prevalent were precisely the ones that could afford to pay for large numbers of monkeys – even if they can’t stretch to infinity – without going into the red, in order to deliver new features quickly. In fact, many of these code monkeys have cited similar arguments as justification for not even trying to follow best practice (although hubris does play its part).
For example, a few years ago I worked on the website of a large airline. The website was implemented in type-unsafe VBScript on Classic ASP with a SQL Server 2000 back-end (and no middleware). The absence of intelligent architecture and the prevalence of spaghetti code were enough to make any half-decent developer weep. Yet they wanted to add more functionality to the website despite the in-house developers having hit entropy. So they just hired in a bunch of developers from a consultancy at around £1000 a day each and set them to work hacking in new functionality – which, despite much swearing at the code, they did very successfully. £150k+ a month may seem a high price to pay for not writing decent code in the first place – but these developers were working alone or in pairs on 30-day sprints, producing functionality that raised anywhere between £50k and £1m in extra revenue per month, in perpetuity, per project.
More recently I worked on a major data warehousing project for a public sector body, replacing a system which was taking around 3 days to process fewer than 12 million rows of data from a text file (although, in fairness, each row was around 1,500 fields wide). The vendor of the existing system was adding new functionality all the time, but was unable to rectify core performance or reliability issues (processing would regularly fail). Despite its failings, the organisation in question worked with the system for 3 years, during which time I estimate it cost over £1m in labour inefficiency – but they were able to absorb that cost while continuing core operations (albeit somewhat unreliably). However, the replacement system reduced processing time to between 2 and 4 hours, which enabled them to spend less time crunching data and more time analysing it (the raison d'être of the organisation). The smaller processing window also added value because it meant that time-sensitive data could be analysed – something which fundamentally altered the strategic capability of the business.
In the case of the airline, the customer experience was unaffected – performance of the website was “good enough”. So the only rationale for maintainability would be saving around £1.5m a year (out of a £2.5bn turnover). Set against the risk of replacing the entire codebase, I can see why the status quo has held. So I conclude that:
- Larger organisations have the resources to compensate for inefficiencies, because their revenues are so high and the risk of living with bad software is lower than the risk of replacing it
- Small businesses with more fragile cash flow would undoubtedly benefit from better software engineering and use of existing frameworks
However, there is a middle ground of businesses and government agencies that could vastly improve the quality of their operations if their systems were more maintainable and developed according to best practice – in the meantime they’ll just soldier on with the proverbial sticking plaster.
As for me, I’ll stick with SSIS for ETL because it’s cheaper for the business and there’s less risk in supporting it than in supporting a mass of spaghetti SQL. Performance would be adequate with either solution – data volumes aren’t likely to increase to the point where any difference is noticeable (although if they do, I’ll be glad I chose SSIS). The customer doesn’t care either way as long as things keep working – so the technical argument wins out.