Hi All,
I'll assume that you are using some data warehouse (DWH) environment for your reports, rather than an OLTP system.
1. Under normal circumstances, bitmap indexes are best suited to a DWH because there is a huge amount of data to handle, the queries are ad hoc, and, more importantly, data changes (transactions) are slow.
2. A bitmap index also consumes very little space compared to the common B-tree index, and with tables containing huge amounts of data this becomes an important aspect.
3. The common rule of thumb is that bitmap indexes work best on columns with few distinct values (low cardinality). But it is best to also look at how much data is updated by concurrent systems; if that is low, bitmap is the way to go (a minimal sketch follows).
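As a minimal sketch (the table and column names are made up purely for illustration), a bitmap index on a low-cardinality column of a large, read-mostly fact table looks like this:

  -- Hypothetical names: SALES_FACT and REGION_CODE are illustrative.
  -- Bitmap indexes suit low-cardinality columns in read-mostly tables.
  CREATE BITMAP INDEX sales_region_bix
    ON sales_fact (region_code);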
But remember, indexes come into the picture when we are talking about performance tuning of the database or DWH itself.
In your case, you are probably looking to speed up a few reports that take an annoyingly long time to open. So the best suggestion for you would be to put a proper caching mechanism in place. There are various ways to manage the cache so that your reports run faster; you can easily find the methods on Google.
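As one example, BI Server query caching is controlled through the [CACHE] section of NQSConfig.ini. Below is an illustrative sketch only; the path and size values are placeholders you would tune for your own environment (and in 11g some of these settings are managed through Enterprise Manager rather than edited by hand):

  # [CACHE] section of NQSConfig.ini -- values are illustrative only
  [CACHE]
  ENABLE = YES;
  DATA_STORAGE_PATHS = "C:\OracleBIData\cache" 500 MB;  # placeholder path and size
  MAX_ROWS_PER_CACHE_ENTRY = 100000;  # 0 means unlimited
  MAX_CACHE_ENTRY_SIZE = 1 MB;
  MAX_CACHE_ENTRIES = 1000;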
Other best practices may include (illustrative sketches for several of these follow the list):
1. Using a star schema model wherever possible.
Snowflaking affects performance, so if you can convert a snowflake into a star by using alias tables or multiple logical table sources (LTS), go for it.
2. Using aggregate tables wherever possible.
3. Avoiding excessive use of functions like CAST, NVL, LPAD, RPAD, TRIM, etc., or pushing them to the ETL phase.
4. Remembering that functions applied to columns can stop the indexes from being used, and that every foreign key of your tables also acts as an index in the star or snowflake schema.
5. Avoiding unnecessary calculations, or again pushing them to the ETL phase if possible. Get a proper DWH built by the ETL team, in accordance with the reporting requirements.
6. Using the FILTER function instead of CASE WHEN statements.
FILTER applies an internal WHERE clause, which lets it pick up less data for the comparison and hence perform better.
7. Using UNION ALL instead of UNION wherever possible, since UNION has to sort and de-duplicate the combined result while UNION ALL skips that work.
8. Using materialized views where complex queries are needed.
9. Altering some of the NQSConfig.ini parameters for better tuning (see the cache example above).
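For item 1, the proper OBIEE route is alias tables or multiple LTS in the RPD; as a rough database-level sketch of the same idea (all names below are made up), a snowflaked dimension can also be flattened into one star-style view:

  -- Hypothetical names: PRODUCT and PRODUCT_CATEGORY are illustrative.
  -- The view presents one denormalized, star-style dimension.
  CREATE OR REPLACE VIEW product_dim_v AS
  SELECT p.product_id,
         p.product_name,
         c.category_name   -- pulled up from the snowflaked table
  FROM   product p
  JOIN   product_category c
         ON c.category_id = p.category_id;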
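For item 2, a minimal sketch of an aggregate table (names are assumptions) that pre-summarizes a fact at a coarser month/region grain; in the RPD you would map it as an additional logical table source so month-level queries hit it automatically:

  -- Hypothetical SALES_FACT rolled up to month/region grain.
  CREATE TABLE sales_agg_month AS
  SELECT region_code,
         TRUNC(sale_date, 'MM') AS sale_month,
         SUM(amount)            AS total_amount
  FROM   sales_fact
  GROUP  BY region_code, TRUNC(sale_date, 'MM');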
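For items 3 and 4, here is what "functions killing indexes" means in practice (illustrative names again):

  -- A plain B-tree index on CUST_NAME cannot be used here, because
  -- the indexed column is wrapped in a function:
  SELECT * FROM customers WHERE TRIM(cust_name) = 'SMITH';

  -- Better: trim the data once in the ETL phase so the predicate
  -- stays bare and the index remains usable:
  SELECT * FROM customers WHERE cust_name = 'SMITH';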
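For item 6, the contrast in OBIEE logical SQL looks roughly like this (the presentation columns are placeholders):

  -- CASE WHEN version: every row is fetched and compared.
  SUM(CASE WHEN "Time"."Year" = 2011 THEN "Sales"."Revenue" END)

  -- FILTER version: the condition is applied as an internal WHERE
  -- clause, so less data is picked up for the comparison.
  FILTER("Sales"."Revenue" USING "Time"."Year" = 2011)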
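For item 7 (table names are illustrative), when you know the two sets are disjoint, or duplicates are acceptable:

  -- UNION ALL avoids the sort/de-duplication step UNION performs.
  SELECT order_id FROM current_orders
  UNION ALL
  SELECT order_id FROM archived_orders;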
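For item 8, a hedged sketch of a materialized view (names are made up) that the optimizer can transparently rewrite matching queries against:

  -- ENABLE QUERY REWRITE lets the optimizer answer matching queries
  -- from the MV instead of re-aggregating the base table.
  CREATE MATERIALIZED VIEW sales_by_region_mv
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND
  ENABLE QUERY REWRITE AS
  SELECT region_code, SUM(amount) AS total_amount
  FROM   sales_fact
  GROUP  BY region_code;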
And always check the NQQuery.log file to see the final query that is executed for a report. That may also tell you what alterations are needed for better performance.
One thing I would like to bring to your attention is using the Oracle in-database OLAP engine to create cubes, if the source data is in an Oracle database.
Starting with version 11.1.1.5, OBIEE understands OLAP metadata. There is very minimal RPD work, and all the aggregations and calculations are stored in the database. So the benefits are:
(1). Greatly simplified aggregation strategy. One OLAP cube can replace dozens or even hundreds of relational MVs and aggregation tables.
(2). Complex calculated measures can be created very easily in OLAP cubes using the new analytic syntax. These calculations automatically work at all levels of all dimensions. All types of financial, statistical, and time-series calculations are very simple to create.
(3). OBIEE-generated queries are very simple SELECT...FROM...WHERE kinds of queries, not long, multi-page, complicated SQL.
(4). Cube loading is very quick due to the enhanced cube-compression features of version 11.2.0.2 of the Oracle database.
(5). Extremely fast query performance compared to querying from relational tables or MVs.
(6). All types of hierarchies (parent-child, ragged, skip-level, etc.) can be handled easily. No need for any kind of hierarchy changes for OBIEE.
(7). No new hardware, software, or DBAs required.
(8). Only relational skills required.
So the keywords are SIMPLICITY and EASE when using OBIEE with Oracle OLAP cubes. The power of the new OLAP engine (with the new 11.2.0.2 database features) is amazing. I am surprised that not many OBIEE consultants have started harnessing the power and simplicity of Oracle's in-database OLAP engine.
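To give a feel for point (3), here is a hedged sketch of querying a cube through the relational views that Oracle OLAP 11g generates over it; every view and column name below (TIME_VIEW, UNITS_CUBE_VIEW, and so on) is an assumption for illustration:

  -- Hypothetical auto-generated cube/dimension views; the cube has
  -- already aggregated the data, so the SQL stays a flat SELECT.
  SELECT t.long_description AS year,
         p.long_description AS department,
         f.units
  FROM   time_view       t,
         product_view    p,
         units_cube_view f
  WHERE  t.dim_key    = f.time
  AND    p.dim_key    = f.product
  AND    t.level_name = 'YEAR'
  AND    p.level_name = 'DEPARTMENT';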
Thanks,