Why It Makes Sense To Tune Your Applications
Dennis Ford
Wednesday, May 06, 2015

Suppose you have a two hour commute to your job. Would you: (a) buy a faster car, (b) get a second car, or (c) move closer to your office?

Application performance is like a commute in many ways. There is a set start and end, but multiple paths that can be taken. There are hundreds of factors that can influence the time it takes to get from the start to the end. Many of the hazards and influences can be minimized, but at the end of the day you still need to get from the start to the end.

So, how do we minimize the time it takes to get from here to there?

The first step is to determine if you have a capacity problem or a throughput problem. In many cases, the knee-jerk reaction is to add capacity. The commonly held assumption is that labor is more expensive than hardware, so it makes sense to upgrade the hardware rather than tune the application. This may be true on a small scale, but doesn’t hold true as the size of the application increases (and in any case will not solve a throughput problem).

Application tuning is generally a fixed effort. Whether the application consumes five or fifty thousand MIPS, you have to profile the code, determine which sections are used most, and look for ways to restructure that code. Regardless of the size of the code base, it’s likely that you’ll still find only a few major “hot spots” where the application is spending most of its time. Spending valuable resources on the analysis may not have a positive ROI if the result cuts 70% of the processor utilization of a fifty-MIPS application, but you can see real savings if it cuts 70% of a five-thousand-MIPS application.
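As a minimal sketch of that profiling step, here is how you might surface hot spots in a Python application with the standard-library profiler. The entry point `run_workload` is a hypothetical stand-in for your application's main path; the sorting and reporting calls are the real `cProfile`/`pstats` API.

```python
import cProfile
import pstats

def run_workload():
    # Placeholder for the application's main code path.
    total = 0
    for i in range(100_000):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
run_workload()
profiler.disable()

# Rank functions by cumulative time; the top few entries are the
# "hot spots" where tuning effort is most likely to pay off.
stats = pstats.Stats(profiler)
stats.sort_stats("cumulative").print_stats(5)
```

The same idea applies with any language's profiler: measure first, then spend tuning effort only on the handful of functions that dominate the report.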

Tuning an application is a non-trivial exercise. Modern compilers are very good at optimizing the machine code produced from the source code, so many times the problem turns out to be structural, such as using the wrong type of data structure to represent the data (e.g., trying to use a matrix to represent a binary tree). Sometimes it is architectural, such as trying to do string pattern matching in a language that doesn’t support those features. In general, though, the 80/20 rule will apply: 80% of the CPU usage is going to be consumed by 20% of the application code, so that 20% should be the focus.
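To make the structural point concrete, here is an illustrative analogue (not from the article): a membership test against a list scans every element, while the same test against a set is a hash lookup. Swapping the data structure, not the hardware, is what removes the cost.

```python
import timeit

items_list = list(range(50_000))
items_set = set(items_list)

# Time repeated membership checks for an element near the end of the data.
# The list version scans ~50,000 elements per check; the set version does
# a single hash lookup.
list_time = timeit.timeit(lambda: 49_999 in items_list, number=200)
set_time = timeit.timeit(lambda: 49_999 in items_set, number=200)

print(f"list: {list_time:.4f}s  set: {set_time:.4f}s")
```

A profiler would show the list version as a hot spot; no amount of added capacity changes its O(n) shape, whereas the structural fix does.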

Traditionally, the time for application optimization is before the next CPU refresh: if you can reduce the MIPS by tuning the application, you may have the opportunity to save money on the new hardware. However, if your software is billed on usage, any time is the right time to tune the applications running during your four-hour peak usage periods. Many products purport to reduce the peaks by time-shifting workloads, but instead of moving bad code around in the schedule, it may be more profitable to fix the code and reduce the peaks where they are.

As with most projects, tuning has its trade-offs. Done manually, it is labor intensive and staff costs go up. Automated tools can identify the code that needs optimization, which reduces the labor cost but increases the software cost. You may be able to reach a middle ground by bringing in consultants who specialize in application tuning and have their own analysis tools. Based on their past experience, they may also be able to help you write optimized code going forward.

Not all capacity problems require hardware to fix; sometimes you just have to be sure that you’re not wasting the resources you have with inefficient code. The decision should ultimately be based on the scale of the project.
