The rates
With the recent wave of out/off/shoring, rates for services in the IT sector have been, as the financial types like to say, “rebased”, which basically means that the IT market has evolved from one with high demand and scarce supply to one where the balance tilts towards the buyer. This has a lot of implications, most of them already well known to the wise old IT crowd. But the focus on short term financial benefits can sometimes make the financially minded people deaf to the IT doomsayers.
Of course, you can find many examples where cost reduction in IT labor rates has not meant an associated reduction in quality, increased maintenance costs, or increased opportunity costs. But these probably go unnoticed, because the media focuses on the failures of these business models instead of the successes. To be honest, though, the cases where out/off/shoring has improved the financial bottom line AND increased the quality of deliverables AND reduced ongoing maintenance costs are quite rare. In fact, I've yet to come across one.
Sorry, excuse the digression; back to the main topic. Well, not really, because the topic of cost is quite relevant when confronting someone who has already paid quite a bit of money to have a system up and running and is now being told that they have to pay even more to make the system survive organic business growth. Note that the measure of “quite a bit of money” is entirely subjective. Even when employing the cheapest resources available, the system will, from the point of view of the payer, have cost a lot of money.
Now, try to open that without a locksmith
Tuning exercises are not cheap. One reason is that the job requires experience. A lot of it. Databases have lots of features, and SQL is a rich and expressive language. Each problem has multiple solutions, and it takes a lot of experience, as well as a lot of false starts, to recognize the best possible solutions for a performance problem.
Once you have the experience, you need to drill into the data. Yes, performance problems do not have a “best” solution per se, and this is something I hope to develop in future posts. The same solution applied to different data patterns has completely different results. There are no magic recipes that can make a system go faster. Well, there are, but they can be like steroids for athletes: abusing them is dangerous in the long term, but they give spectacular benefits in the short term. Mmmm... granted, abusing indexes is not going to shorten your life, and you'll have no problems with sport authorities either. What a crap analogy, but the topic of magic tuning recipes will certainly be worthy of future posts.
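To make the data-pattern point concrete, here's a minimal sketch. The table, the index and the percentages are all hypothetical, and the syntax is generic SQL:

    -- Hypothetical orders table; all figures invented for illustration.
    CREATE TABLE orders (
        order_id   INTEGER PRIMARY KEY,
        status     VARCHAR(10),   -- 'OPEN' or 'CLOSED'
        created_at DATE
    );

    CREATE INDEX orders_status_ix ON orders (status);

    -- The same query, through the same index, behaves very differently
    -- depending on the data distribution:
    SELECT order_id, created_at
    FROM   orders
    WHERE  status = 'OPEN';

    -- If 0.1% of the rows are 'OPEN', the index pinpoints a handful of
    -- rows and the query flies. If 95% of the rows are 'OPEN', the same
    -- index walk visits nearly the whole table row by row, which is
    -- usually slower than a plain full table scan.

That's exactly the kind of judgement call that no magic recipe makes for you.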
And finally, on top of all the experience and data knowledge, there has to be a lot of business engagement. Only by understanding the processes and the background reasons for a system's design can you suggest good ways to evolve it performance-wise. Business management is not comfortable with tech talk. They prefer communication in terms they can understand instead of being mired in jargon.
The above skills are not easy to find. And they are likely to come at a professional level well above the cheap offshoring rates, both because of the experience needed and because of the level of communication. So yes, the price is difficult to accept. But the last time you lost your front door keys while outside your house, you had to call a locksmith. The guy came and, with a very small tool set and in a very short time, opened your supposedly burglar-proof door. And it was expensive, very expensive.
At least, that's what you remember. You surely don't remember that, without the locksmith's experience, skills, tools and time, you would have had to pay for a new front door instead. Which would have been way more expensive.
Can I avoid all this by working smarter?
Hardly. First, it's very difficult, if not downright impossible, to anticipate the growth rate a business is going to have. Second, it's almost as difficult, if not more so, to predict a system's behaviour under load, unless extensive background experience with similar systems is available. Or unless you're ready to set up a test lab. And ready to over-specify the system hardware.
After all those reflections and a few back-of-the-envelope calculations, you'll realize that it's just not worth trying to go beyond a reasonable expectation that the system will still work a year from now. Anything you plan today for the next year will probably change within three months. If your business changes at that pace, then you absolutely, positively cannot set up any performance projections based on current business conditions.
Note that “change” in this context does not necessarily mean business change; it covers priorities as well. Even organizations that display the least amount of change to the outside observer can become literally paralysed by the rate at which priorities change internally. In fact, in some of them priorities change so fast that there is literally no time for the previous change to settle down or be completely finished (side note: those kinds of businesses usually have applications that, in addition to performance problems, show some interesting common design patterns).
Even if your business environment is stable, you'll still have to make your back-of-the-envelope calculations. Because anything more than a cursory examination will yield a result that is probably built on assumptions about data density, distribution and frequency.
Oh yes, the sales manager says, each customer is invoiced monthly, just once a month. And the financial planning team has spent six months detailing their processes, perfectly fitted to the sales ones. After that big effort, they're not likely to change for years. Then a new sales director arrives and customers are invoiced each time they get a delivery. The financials for that are very different, so the planners change their process. And logistics needs to update stocks daily. No, wait a minute, not daily but 48 times a day. All your careful performance assessment goes out the window in a minute.
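Just to show how fast those assumptions move the numbers, here's a back-of-the-envelope sketch with completely invented figures: 10,000 customers, 20 deliveries per customer per month, 5,000 stock items. The arithmetic runs as-is on, say, PostgreSQL or MySQL:

    -- All figures hypothetical.
    SELECT 10000 * 12       AS invoices_per_year_monthly,      --    120,000
           10000 * 20 * 12  AS invoices_per_year_per_delivery, --  2,400,000
           5000 * 1 * 365   AS stock_updates_per_year_daily,   --  1,825,000
           5000 * 48 * 365  AS stock_updates_per_year_48x;     -- 87,600,000

A twenty-fold jump in invoice rows and a forty-eight-fold jump in stock writes, and the business looks exactly the same from the outside.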
Where's the business case then?
With all that said, the first question in line would probably not pertain to database performance, but perhaps to application development methodologies. There are ways to prevent the problem from happening in the first place, or at least to mitigate it. But this discussion will always end in the same place: where to draw the line between business benefits and IT costs. And when sound business principles are applied, the line will end up much closer to the business priority side than you'd like. Face it, there's very little you can do about it.
But then comes the problem of justifying spending money on just making the application meet its performance requirements. If you're savvy enough, you already know that business arguments can be won on the golf course, over a couple of drinks, or on the tennis court. And all of those are valid ways for your arguments to prevail. But it's always best to have some rational arguments prepared, just in case your demands for additional money are rejected.
In theory, it should be easy. Just put a value tag on each of the activities your application supports, multiply by the number of additional times they could be done with better application performance, and add it all up, right?
In my experience, this is just the third best way of looking at the problem, because the true business case for performance tuning is usually found on three fronts:
Survival cost - that's the easiest one. You need better performance just to keep doing business.
Opportunity cost - a bit more difficult. If you had better performance, what other process would you be able to support with the spare capacity?
Productivity - as said above, the third best way of looking at it. Your plane keeps its engines running for a faster take-off once loading is finished, but until the pick list is ready the plane cannot leave. How much fuel are you burning each minute you wait for the pick list? (see the sketch right after this list)
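For the productivity front, here's a sketch with deliberately invented figures: idling engines burn fuel worth 50 currency units per minute, the pick list is 10 minutes late on average, and there are 30 departures a day:

    -- All figures hypothetical.
    SELECT 50 * 10 * 30 * 365 AS idle_fuel_cost_per_year;  -- 5,475,000

Suddenly, shaving minutes off the pick list query has a yearly number attached to it.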
Check your case; I'm sure it falls into one of the three. And happy golfing.