I have always been a proponent of Function over Features. What this means to me is that I would much prefer software that functions well, with minimal bugs, to software with lots of features that work intermittently. This seems like a “duh” moment, but apparently a number of major software manufacturers don’t quite get it. For example, a major GIS software producer (they-who-must-not-be-named) seems to consistently put forth new versions with an increasing number of features, many of which work intermittently and some not at all. As an aside, I do like how their support web site ignores/hides forum posts from past versions, as if those problems had been fixed. I now have to use Google to find the answers in the old forum, along with the workarounds for bugs that have existed for the last 2 versions and x service packs.

Nevertheless, I believe my life would be easier if I had a set of features I could simply rely on, rather than a suite of haphazard functionalities (not a word, but I live in Alaska). I recently calculated that during production and geo-analytical work, I spend about 25% of my time trying to work around known or new bugs in the software. Although I selfishly appreciate this complexity as a form of job insurance, I see the frustration in my clients daily. Since they-who-must-not-be-named are a monopoly, there aren’t any viable alternatives for these people.

To the point, and what got me on this track to begin with, here’s an interesting article on another person’s (somewhat ironically, Ray Ozzie’s) take on complexity:
http://ozzie.net/docs/dawn-of-a-new-day/
…But as the PC client and PC-based server have grown from their simple roots over the past 25 years, the PC-centric / server-centric model has accreted simply immense complexity. This is a direct by-product of the PC’s success: how broad and diverse the PC’s ecosystem has become; how complex it’s become to manage the acquisition & lifecycle of our hardware, software, and data artifacts. It’s undeniable that some form of this complexity is readily apparent to most all our customers: your neighbors; any small business owner; the ‘tech’ head of household; enterprise IT.
Success begets product requirements. And even when superhuman engineering and design talent is applied, there are limits to how much you can apply beautiful veneers before inherent complexity is destined to bleed through.
Complexity kills. Complexity sucks the life out of users, developers and IT. Complexity makes products difficult to plan, build, test and use. Complexity introduces security challenges. Complexity causes administrator frustration.
And as time goes on and as software products mature – even with the best of intent – complexity is inescapable.
Indeed, many have pointed out that there’s a flip side to complexity: in our industry, complexity of a successful product also tends to provide some assurance of its longevity. Complex interdependencies and any product’s inherent ‘quirks’ will virtually guarantee that broadly adopted systems won’t simply vanish overnight. And so long as a system is well-supported and continues to provide unique and material value to a customer, even many of the most complex and broadly maligned assets will hold their ground. And why not? They’re valuable. They work.
But so long as customer or competitive requirements drive teams to build layers of new function on top of a complex core, ultimately a limit will be reached. Fragility can grow to constrain agility. Some deep architectural strengths can become irrelevant – or worse, can become hindrances.