I spend a lot of time working on performance and scalability issues for clients' systems. The bulk of this work is around database and application transaction processing, as these areas typically have the greatest business visibility. An area that does not seem to get the same level of visibility is how operationally efficient these same environments are – not through neglect, but because operational support requirements are not included in the core requirements during the design phase.
Database and application design efforts often focus on the requirements for delivering a certain level of response time for a given workload, meeting a defined set of availability targets, or scaling to projected future growth. This is understandable given that the bridge between non-functional requirements and the hardware and software infrastructure is usually defined, amongst other things, by a set of performance and availability related requirements.
Once a system is in production, the design of the infrastructure and supporting processes determines how operationally efficient the system will be for its lifetime. It is not uncommon to see operational processes being defined by what infrastructure is available instead of what infrastructure is required.
Issues around the ability to meet operational support requirements may only become apparent after the system is live in production. Once the project has completed implementation, it can be very difficult to justify a business case to modify an existing infrastructure design to allow for more efficient operational procedures. So we end up in a catch-22 situation where the system is operationally inefficient to support, yet the cost of those inefficiencies does not justify the spend to resolve them. As database practitioners we need to ensure that clearly stated operational requirements are included in the non-functional requirements for the system. By including operational requirements in our design specs we enable all teams involved in delivering the technical solution to understand not only what the transactional workload will be but the operational support workload as well. Some examples of operational requirements are:
- A production to non-production eBusiness Suite clone needs to be completed in less than 2 hours while accommodating a 35% increase in database size over the forecasted life of the infrastructure.
- Database cloning activities should not be performed across a production network that is used for application to database connectivity.
- Refresh of a standby database should take no more than 2 hours, allowing for a 35% increase in database size over the forecasted life of the infrastructure.
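Requirements like these translate directly into minimum throughput figures the infrastructure must sustain. As a rough sketch (the 1 TB starting database size is an assumed figure for illustration, not from the requirements above), the 2-hour clone window implies:

```python
# Illustrative sizing check for the "clone in under 2 hours" requirement.
# The current database size below is an assumption for the example only.
current_size_gb = 1024          # assumed current database size (1 TB)
growth_factor = 1.35            # 35% growth over the infrastructure's life
window_hours = 2                # required clone completion window

future_size_gb = current_size_gb * growth_factor
required_throughput_mb_s = (future_size_gb * 1024) / (window_hours * 3600)

print(f"Future size: {future_size_gb:.0f} GB")
print(f"Sustained copy rate needed: {required_throughput_mb_s:.0f} MB/s")
```

A figure like this gives the infrastructure team something concrete to design against, rather than discovering after go-live that the available network or storage path cannot sustain the required rate.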
Documenting and clearly stating production operational requirements provides an important input and guide as to what processes need to be adopted to support the environment once in production. A clearly defined set of requirements will enable a Technical Architect or DBA to decide whether method a, b or c is the best approach to perform a repeatable task. For example, when cloning eBusiness Suite, the refresh of the APPL_TOP, COMMON_TOP, ORACLE_HOME and IAS_ORACLE_HOME directories requires some form of file transfer mechanism to copy the source environment to the target environment. If these directories are cloned once every 3 months, the process to complete that activity is likely to be different from one needed to clone an environment on demand, or on a weekly or daily basis.
Once the set of operational requirements and the high-level processes required to support the environment are defined, both the requirements and the methods are available for the infrastructure teams to determine what configuration or provisioning may be needed beyond just delivering to the performance and availability requirements for the system.
By providing the infrastructure design teams with a well defined set of operational requirements we can enable these teams to justify configuration which may not be the “norm” in that environment. A common example is the de-facto method of using SSH port 22 for connectivity between servers – something that may be very inadequate for copying a database of several hundred gigabytes, a task better suited to an NFS share. With no visibility of operational requirements it can be difficult for Network and Security Architects to justify enabling the NFS protocol, which may not be the standard in that environment or could require a security exemption.
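To see why the transfer method matters at this scale, consider the copy time for a few hundred gigabytes at different sustained rates. The throughput figures below are illustrative assumptions, not measurements: a single encrypted scp stream is often CPU-bound well below link speed, while a tuned NFS mount over a dedicated link can run considerably faster.

```python
# Rough copy-time comparison for a 500 GB database at assumed sustained rates.
# 60 MB/s approximates a single encrypted scp/SSH stream; 400 MB/s approximates
# a well-tuned NFS mount over a dedicated link. Both figures are assumptions.
db_size_gb = 500

def copy_hours(rate_mb_s):
    """Hours to copy db_size_gb at a sustained rate in MB/s."""
    return (db_size_gb * 1024) / rate_mb_s / 3600

ssh_hours = copy_hours(60)
nfs_hours = copy_hours(400)
print(f"scp over SSH: {ssh_hours:.1f} h, NFS: {nfs_hours:.1f} h")
```

Under these assumptions the same copy that blows a 2-hour window over a single SSH stream completes comfortably within it over NFS – exactly the kind of gap a stated operational requirement lets the network and security teams design for up front.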
Defining such requirements should not take a huge amount of effort; however, the benefits can continue to be realised long after the system has gone live.
Mark Burgess has been helping organisations obtain the maximum value from their data management platforms for over 20 years. Mark is passionate about enabling secure, fast and reliable access to organisations' data assets.