This is the first in a series of articles describing the implementation of our “homegrown” Oracle Performance Warehouse, and the “aha effects” we experienced along the way.
In 2012 Mobiliar decided to migrate its entire Oracle landscape, running on two IBM P595 servers under AIX, to seven physical blades on Red Hat Linux. 450 instances on 45 LPARs had to be moved to the new platform. The question we soon faced was: “How should we distribute the instances across the different servers in order to make the best use of the CPU power (and, with that, of the licenses too), while at the same time avoiding peaks that could create a real traffic jam on the machine?”
We had to compare workloads from multiple instances. But how? Cloud Control can only compare two workloads at a time, via its AWR compare functionality. What we needed was to dynamically add and remove several databases and visualize the resource consumption of each selected instance side by side.
The first approach proposed by our managers was: “Well, just insert the performance data into an Excel file.”
I tried to follow the suggestion, but after a couple of hours I had nearly broken out in a rash!
We are database administrators; there had to be a better way to collect accurate data automatically and do some ad hoc visualization. The solution was the database itself. Performance data is already stored in every instance: the v$sysstat view contains several performance metrics, and this data is snapshotted hourly by AWR and made persistent in the dba_hist_sysstat view.
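As an illustration of what AWR already gives you, the sketch below computes hourly deltas for one statistic from the standard DBA_HIST_SYSSTAT and DBA_HIST_SNAPSHOT views (the statistic name is just an example; dba_hist_sysstat stores cumulative counters, so consecutive snapshots must be subtracted):

```sql
-- Hourly deltas of one v$sysstat counter from the AWR history.
-- Values in dba_hist_sysstat are cumulative since instance startup,
-- so LAG() is used to turn them into per-snapshot deltas.
SELECT sn.begin_interval_time,
       st.stat_name,
       st.value - LAG(st.value) OVER (ORDER BY sn.snap_id) AS delta
FROM   dba_hist_sysstat  st
JOIN   dba_hist_snapshot sn
       ON  st.snap_id         = sn.snap_id
       AND st.dbid            = sn.dbid
       AND st.instance_number = sn.instance_number
WHERE  st.stat_name = 'CPU used by this session'   -- example statistic
ORDER BY sn.snap_id;
```

Note that a delta spanning an instance restart comes out negative and has to be filtered out in practice.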
So we decided to collect all the historical data contained in each database in a new, separate instance, called the Mobiliar Performance Warehouse, and to run analytics on the aggregated data.
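A minimal sketch of such a collection step might look as follows; the database link name src_db, the staging table pwh_sysstat, and the source label 'SRCDB' are all hypothetical, not the actual warehouse schema:

```sql
-- Hypothetical collection step: pull new AWR rows from a monitored
-- instance over a database link into a local staging table.
-- src_db, pwh_sysstat and 'SRCDB' are illustrative names only.
INSERT INTO pwh_sysstat (db_name, snap_id, stat_name, value, snap_time)
SELECT 'SRCDB', st.snap_id, st.stat_name, st.value, sn.begin_interval_time
FROM   dba_hist_sysstat@src_db  st
JOIN   dba_hist_snapshot@src_db sn
       ON  st.snap_id         = sn.snap_id
       AND st.dbid            = sn.dbid
       AND st.instance_number = sn.instance_number
WHERE  sn.begin_interval_time >
       (SELECT NVL(MAX(snap_time), DATE '2000-01-01')
        FROM   pwh_sysstat
        WHERE  db_name = 'SRCDB');
```

Scheduling such a job (e.g. with DBMS_SCHEDULER) once per hour keeps the warehouse in step with the AWR snapshot interval.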
At the same time we decided to build a graphical interface with APEX to visualize the different performance metrics on an hourly and daily basis.
And so we began to dig into the AWR data and the data dictionary of our instances. The next articles will describe how we found more and more “golden nuggets”.