You can optimise Oracle performance in a myriad of ways. In many cases, the diagnosis might point to optimising existing application code.

But in some cases, you could be better off moving your database platform to newer, faster hardware instead of simply adding capacity to the existing platform.

This is the best option only when the business problem at hand justifies the additional expense of a new infrastructure.

There are certain situations when a hardware upgrade is the most cost-effective way to optimise Oracle performance and can deliver scalable performance to the business and its users.

The Optimisation Opportunity of Hardware Evolution

We are now living in a technological era where major advancements in infrastructure hardware happen every year. A decade ago, they happened every three to four years.

This doesn’t mean that you need to upgrade your hardware every year. What it does mean is that with each passing year, a server upgrade can offer significant performance benefits under the right circumstances.

 



A few years back at Oracle OpenWorld, Intel announced a 1 petabyte flash drive that was the size of a standard 30cm ruler. Redundancy concerns of a single device aside – this example illustrates the ability to provide much more capability to our database platforms in ways not possible in the past.

It wasn’t that long ago that delivering that much capacity on a conventional storage platform would require a major investment in storage technologies – not to mention the running costs associated with an enterprise-class platform.

Today, adopting these types of technologies not only optimises Oracle performance, it also reduces the need to over-capitalise on infrastructure to address the performance demands of one application.

For example, if your trading application requires sub-second response times, then you have the option of right-sizing a specific hardware solution just for that application.

Adopting this approach doesn’t require the size of your enterprise infrastructure to match the peak workload of one outlier application.

The Barriers That Increasing Complexity Creates for Hardware Upgrades

If you look at Oracle systems from 10 to 15 years ago, you’ll find a single, fairly self-contained server sitting in the cleaning closet. (Not literally, but you get the idea.)

That is a rarity these days. Today’s systems are much larger, highly interconnected, run on a multitude of shared infrastructure and support the organisation’s most critical workloads.

The evolution of Oracle platforms has made the idea of a hardware replatform as complicated as open-heart surgery: complex, high risk and, in some cases, fatal when it goes wrong.

This hardware platform lock-in can make it difficult and expensive to adopt new technologies that have a direct performance and business benefit.

The common reasons for this lock-in are:

  • Implementations lacking the software features that allow easy movement of the database between hardware platforms.
  • Point-to-point integration deployments that have a network dependency on a given database server.
  • Legacy database client connection models that are based on direct database connections.
  • A perception that moving, upgrading or enhancing Oracle database systems belongs in the “too hard” basket.

Over time technical debt accumulates as additional dependencies are created on the database platform. The expertise is often not available in the organisation to unpick the complexity, making these types of strategies difficult to formulate and execute.

It is possible to evolve the database platform over time to enable hardware independence – it just requires identifying and resolving the right problem areas before commencing any major changes.

With these complexities, the cost of a hardware migration needs to be carefully considered.

When does a Hardware Upgrade Work Well?

There are certain circumstances where adopting a hardware refresh strategy is a valid approach to optimising application performance. We have seen that systems with the following characteristics can adopt this approach most effectively:

  • There is little to no dependency on physical infrastructure. This occurs when the database platform is deployed on virtual infrastructure or is using database features such as Oracle Multitenant.
  • Application SQL that is fully optimised yet still needs to run faster. Newer, faster CPUs and system buses on modern hardware can help in these cases.
  • Application workloads that can use parallel concurrent sessions effectively. Seen typically where SQL can run using parallel query – reports, ETL and batch jobs, or where the application allows for “workers” to process batch workloads.
  • Application performance that is directly dependent on database response times. These are directly dependent on the underlying infrastructure. If your performance problems are not directly linked to database response time then this approach may not work well.
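
The parallel-query case above can be sketched in SQL. This is a minimal, hypothetical example – the SALES table, its columns and the degree of parallelism are illustrative only, and parallel query must be available and licensed on your platform:

```sql
-- Hypothetical reporting query rewritten to request parallel execution.
-- The PARALLEL hint asks the optimiser to use up to 8 parallel servers.
SELECT /*+ PARALLEL(s, 8) */
       TRUNC(s.sale_date, 'MM') AS sale_month,
       SUM(s.amount)            AS total_amount
FROM   sales s
GROUP  BY TRUNC(s.sale_date, 'MM');

-- Afterwards, confirm the statement actually ran in parallel
-- (substitute the statement's SQL_ID; requires access to V$ views):
SELECT px_servers_executions
FROM   v$sqlstats
WHERE  sql_id = '&sql_id';
```

Workloads like this scale with core count, which is exactly where newer hardware tends to deliver the biggest gains.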

What about the Cloud?

Whilst the cloud removes the need to provision physical infrastructure, the concept of adopting the fastest infrastructure for your database cloud services still applies.

The application and workload characteristics that benefit most from a hardware upgrade will benefit equally when adopting “newer” cloud services.

Moving workloads between legacy and more modern service types can provide significant performance improvements, or deliver capacity that was not available before.

Factors to consider when adopting this approach on cloud infrastructure are:

  • Service contract durations. Whilst locking in a low price on a service commitment up front seems attractive, it may end up costing much more if that service is insufficient for the workload.
  • Ability to scale workloads between instance types. How easy is it to dial-up and dial-down capacity without having to go through a reprovisioning exercise?
  • Increasing importance of database software features to remove “platform” dependencies.

Whilst the cloud makes provisioning systems easier – it can make the job of optimising workloads to platforms more difficult.

Creating the Business Case for a Hardware Upgrade

Stage 1: Identify the important processes that need to run

The first thing that needs to be done is to accurately identify what final outcome is required out of the existing processes. You can then create the most direct and efficient route to it in your upgrade. Read more about it in this blog.

Stage 2: Identify the direct and indirect costs of the processes.

The cost of the current processes needs to be calculated so you can accurately determine whether the outcome of an upgrade is fiscally responsible. Read more about it in this blog about the Cost of Poor Oracle Performance.

Stage 3: Identify if the root cause of the performance issue is hardware capacity related.

Identifying the root cause is the most challenging part in most cases, with a large majority of IT teams misdiagnosing issues due to communication gaps. Download our Technical Information Discovery Template to help you avoid this major issue.
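
One way to start that diagnosis – a sketch, assuming you have SELECT access to the V$ performance views – is to check where database time is actually going before blaming the hardware:

```sql
-- Instance-wide time model: if "DB CPU" dominates "DB time", the workload
-- is CPU-bound and faster hardware may help; if not, look elsewhere first.
-- Values are reported in microseconds.
SELECT stat_name,
       ROUND(value / 1e6) AS seconds
FROM   v$sys_time_model
WHERE  stat_name IN ('DB time', 'DB CPU', 'sql execute elapsed time')
ORDER  BY value DESC;
```

An AWR or Statspack report over a representative period gives the same picture with far more detail.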

Stage 4: Cost the Upgrade

It is not uncommon to have hardware refresh cycles based around asset depreciation schedules. While this can be an effective strategy to “sweat” the asset as much as possible, it does not necessarily translate to the best use of the software licenses.

When evaluating a hardware upgrade, take the view that it is an opportunity to maximise the value of the software license. If you can run increasing workloads and achieve the business objectives with the same license footprint on newer, faster hardware, then you are likely to benefit from an overall cost point of view.
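
As a starting point for that evaluation – assuming access to the V$ views; this is an input to, not a substitute for, a formal Oracle licensing calculation – you can check the CPU and session usage the instance currently sees:

```sql
-- CPU core and session usage as seen by this instance,
-- useful when comparing license footprints across hardware generations.
SELECT cpu_core_count_current,
       cpu_core_count_highwater,
       sessions_current,
       sessions_highwater
FROM   v$license;
```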

Executing the Upgrade

It’s not effective to simply dive into the hardware upgrade. First, determine whether the cost of an infrastructure upgrade is justified, taking into account business growth and whether the solution is scalable. Then look at what you’ll be carrying over to the new server.

Tune First, Upgrade Second

Even when an upgrade or capacity increase is the right option, it is still necessary to tune and optimise the existing system. Without this crucial step, thousands of dollars can be spent on an upgrade that continues to have the same performance issues.





About the Author

Mark Burgess has been helping organisations obtain the maximum value from their data management platforms for over 20 years. Mark is passionate about enabling secure, fast and reliable access to organisations’ data assets.
