By Sherman Wood and Julian Hyde; last updated November, 2007.
As with any data warehouse project, dealing with large data volumes is always the make-or-break issue. Mondrian has its own issues, stemming from its architecture and its goal of being cross-platform. Here are some experiences and comments.
From the Mondrian developers' mailing list in February 2005, an example of unoptimized performance:
When Mondrian initializes and starts to process the first queries, it makes SQL calls to get member lists and determine cardinality, and then to load segments into the cache. When Mondrian is closed and restarted, it has to do all of that work again. This can be a significant chunk of time, depending on the cube size. For example, in one test an 8GB cube (55M-row fact table) took 15 minutes (mostly doing a group by) before it returned results from its first query, and absent any caching on the database server it would take another 15 minutes if you closed and reopened the application. And this cube was just one month of data; imagine the time if there were 5 years' worth.
Since then, Mondrian has been extended to use aggregate tables and materialized views, which deliver substantial performance benefits and address the issue described above.
From Julian:
I'm surprised that people can run 10m+ row fact tables on Mondrian at all, without using aggregate tables or materialized views.
From Sherman:
Our largest site currently has a cube with ~6M facts on a single low-end Linux box running our application with Mondrian and Postgres (not an ideal configuration), without aggregate tables, and it gets sub-second response times in the user interface (JPivot). This was achieved by tuning the database to support the queries being executed, modifying the OS configuration to best support Postgres (thanks Josh!), and adding as much RAM as possible.
The process for addressing Mondrian performance is a combination of design, hardware, database and other configuration tuning. For really large cubes, performance is driven more by the hardware, operating system and database tuning than by anything Mondrian can do.
As part of the database tuning process, enable SQL tracing and tail the log file. Run some representative MDX queries and watch which SQL statements take a long time; tune the database to fix those statements, then rerun.
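For example, the statements that show up in the trace are typically segment-load queries of roughly the following shape; the table, column and index names below are illustrative FoodMart-style examples, not something prescribed by this document:

-- A segment-load query of the kind Mondrian generates (illustrative names):
SELECT
    time_by_day.the_year AS c0,
    time_by_day.quarter AS c1,
    SUM(sales_fact_1997.unit_sales) AS m0
FROM sales_fact_1997, time_by_day
WHERE sales_fact_1997.time_id = time_by_day.time_id
GROUP BY time_by_day.the_year, time_by_day.quarter;

-- If such a statement dominates the trace, an index on the fact table's join
-- column is a common first fix (aggregate tables, described below, are the bigger win):
CREATE INDEX i_sales_fact_time ON sales_fact_1997 (time_id);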
Mondrian currently uses 'count(distinct ...)' queries to determine the cardinality of dimensions and levels as it starts, and for measures that are counts, that is, aggregator="count". Indexes might speed up those queries, although performance is likely to vary between databases, because optimizing count-distinct queries is a tricky problem.
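As an illustration (the table and column names are hypothetical, not taken from this document), such a cardinality query and a possible supporting index might look like:

-- Cardinality query of the kind issued at startup for a level or a count measure:
SELECT count(DISTINCT customer_id) FROM sales_fact_1997;

-- An index on the counted column may help some optimizers, but, as noted above,
-- the benefit varies between databases:
CREATE INDEX i_sales_fact_customer ON sales_fact_1997 (customer_id);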
The best way to increase the performance of Mondrian is to build a set of aggregate (summary) tables that coexist with the base fact table. These aggregate tables contain pre-aggregated measures built from the fact table.
Some databases, particularly Oracle, can create these aggregations automatically through materialized views, which are tables created and synchronized from views. Otherwise, you will have to maintain the aggregate tables through your data warehouse load processes, usually by clearing them and rerunning aggregating INSERTs.
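As a sketch of what this looks like in practice (all names below are illustrative; Mondrian's actual naming and mapping rules are described in the Aggregate Tables document mentioned below), an aggregate rolled up to product and month could be maintained like this, with an Oracle materialized view shown as the alternative:

-- A hypothetical aggregate of a sales fact table, rolled up to product and month.
CREATE TABLE agg_sales_product_month (
    product_id    INTEGER NOT NULL,
    the_year      INTEGER NOT NULL,
    month_of_year INTEGER NOT NULL,
    unit_sales    NUMERIC(10,4),
    store_sales   NUMERIC(10,4),
    fact_count    INTEGER NOT NULL
);

-- Typical warehouse load step: clear the table, then rerun the aggregating INSERT.
DELETE FROM agg_sales_product_month;
INSERT INTO agg_sales_product_month
    (product_id, the_year, month_of_year, unit_sales, store_sales, fact_count)
SELECT f.product_id, t.the_year, t.month_of_year,
       SUM(f.unit_sales), SUM(f.store_sales), COUNT(*)
FROM sales_fact_1997 f, time_by_day t
WHERE f.time_id = t.time_id
GROUP BY f.product_id, t.the_year, t.month_of_year;

-- On Oracle, roughly the same rollup can be declared as a materialized view
-- and kept in sync by the database:
CREATE MATERIALIZED VIEW agg_sales_product_month_mv
    BUILD IMMEDIATE REFRESH COMPLETE ON DEMAND ENABLE QUERY REWRITE
AS
SELECT f.product_id, t.the_year, t.month_of_year,
       SUM(f.unit_sales) AS unit_sales,
       SUM(f.store_sales) AS store_sales,
       COUNT(*) AS fact_count
FROM sales_fact_1997 f, time_by_day t
WHERE f.time_id = t.time_id
GROUP BY f.product_id, t.the_year, t.month_of_year;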
Aggregate tables are introduced in the Schema Guide, and described in more detail in their own document, Aggregate Tables.
It isn't easy to choose the right aggregate tables. For one thing, there are so many to choose from: even a modest cube with six dimensions each with three levels has 6^4 = 1296 possible aggregate tables! And aggregate tables interfere with each other: if you add a new aggregate table, Mondrian may use an existing aggregate table less frequently.
Missing aggregate tables may not even be the problem. Choosing aggregate tables is part of a wider performance tuning process, where finding the problem is more than half of the battle. The real cause may be a missing index on your fact table, a cache that isn't large enough, or (if you're running Oracle) statistics that were never computed. (See the recommendations above.)
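On Oracle, for instance, computing those statistics is a one-liner (schema and table names below are illustrative):

-- Compute optimizer statistics on the fact table (older Oracle syntax):
ANALYZE TABLE sales_fact_1997 COMPUTE STATISTICS;
-- On newer Oracle releases, DBMS_STATS is preferred:
-- EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'FOODMART', tabname => 'SALES_FACT_1997');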
Performance tuning is an iterative process: run representative queries, trace the SQL they generate, tune the database or add aggregate tables to address the slowest statements, and measure again.
AggGen is a tool that generates SQL to support the creation and maintenance of aggregate tables, and would give a template for the creation of materialized views on databases that support them. Given an MDX query, the generated create/insert SQL is optimal for that query. The generated SQL covers both the "lost" and "collapsed" dimensions. For usage, see the documentation for CmdRunner.
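To make the "lost" and "collapsed" terminology concrete, here is a rough sketch (the table layouts are illustrative, not AggGen's literal output) of the two shapes of aggregate table:

-- "Lost" dimension: the time dimension is dropped entirely, so there is no time
-- foreign key at all; the remaining keys still join to their dimension tables.
CREATE TABLE agg_l_sales (
    product_id  INTEGER NOT NULL,
    customer_id INTEGER NOT NULL,
    unit_sales  NUMERIC(10,4),
    fact_count  INTEGER NOT NULL
);

-- "Collapsed" dimension: the time foreign key is replaced by the level columns
-- themselves, so queries against the aggregate need no join to the time table.
CREATE TABLE agg_c_sales (
    product_id    INTEGER NOT NULL,
    the_year      INTEGER NOT NULL,
    quarter       VARCHAR(30),
    month_of_year INTEGER NOT NULL,
    unit_sales    NUMERIC(10,4),
    fact_count    INTEGER NOT NULL
);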
Mondrian may have performance issues if your schema makes intensive use of calculations. Mondrian executes calculations very efficiently, so usually the time spent calculating expressions is insignificant compared to the time spent executing SQL, but if you have many layers of calculated members and sets, in particular set-oriented constructs like the Aggregate function, it is possible that many thousands of calculations will be required for each cell.
To see whether calculations are causing your performance problem, turn on SQL tracing and measure what proportion of the time is spent executing SQL. If SQL is less than 50% of the time, it is possible that excessive calculations are responsible for the rest. (If the result set is very large, and if you are using JPivot or XML/A, the cost of generating HTML or XML is also worth investigating.)
Mondrian caches cell values retrieved from the database, but it does not generally cache the results of calculations. (The sole case where Mondrian caches expression results automatically is the second argument of the Rank(<Member>, <Set>[, <Expression>]) function, since this function is typically evaluated many times for different members over the same set.)
Since calculations are very efficient, this is generally the best policy: it is better for Mondrian to use the available memory to cache values retrieved from the database, which are much slower to re-create.
The expression cache only caches expression results for the duration of a single statement. The results are not available for other statements. The expression cache also takes into account the evaluation context, and the known dependencies of particular functions and operators. For example, the expression
Filter([Store].[City].Members, ([Store].CurrentMember.Parent, [Time].[1997].[Q1]) > 100)
depends on all dimensions besides [Store] and [Time], because the expression overrides the value of the [Store] and [Time] dimensions inherited from the context, but the implicit evaluation of a cell pulls in all other dimensions. If the expression result has been cached for the contexts ([Store].[USA], [Time].[1997].[Q2], [Gender].[M]), the cache knows that it will return the same value for ([Store].[USA].[CA], [Time].[1997].[Q3], [Gender].[M]); however, ([Store].[USA], [Time].[1997].[Q2], [Gender].[F]) will require a new cache value, because the dependent dimension [Gender] has a different value.
However, if your application is very calculation intensive, you can use the Cache(<Expression>) function to tell Mondrian to store the results of the expression in the expression cache. The first time this function is called, it evaluates its argument and stores it in the expression cache; subsequent calls within an equivalent context will retrieve the value from the cache. We recommend that you use this function sparingly. If you have cached a frequently evaluated expression, then it will not be necessary to cache sub-expressions or super-expressions; the sub-expressions will be evaluated less frequently, and the super-expressions will evaluate more quickly because their expensive argument has been cached.
Copyright (C) 2005-2007 Julian Hyde and others