There are several ways to export data from Essbase on a large scale. Pulling it via Excel (Smart View or the Essbase Add-In) is not the best way to get large amounts of data when the goal is to move the data somewhere else, so this option will not be covered.

Database Export

The easiest method is to export all the data from a database by exporting the database itself.  This can be done in EAS.  This method is easy to automate with MaxL, but it offers little flexibility with formatting, and the only option is to export all the data.  The data can be exported in column format so it can easily be loaded into another data repository.  If the data needs to be queried or manipulated, this is a good option.
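As a sketch of what that automation might look like in MaxL (the login, application/database name, and file name below are placeholders to adapt to your environment):

login admin identified by 'password' on localhost;
export database Sample.Basic level0 data in columns to data_file 'sample_level0.txt';
logout;

The level0 and in columns options produce a column-format, level-0 export that loads easily into a relational table or another cube.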

 

The format of the data that is loaded to Essbase is often an afterthought.  But should it be?  When requesting a data file from a source system, it is more important than you may think to have it sorted to mirror your outline.

Assume an outline has the following dimensions.

  • Period [DENSE]
  • Account [DENSE]
  • Region [SPARSE]
  • Category [SPARSE]
  • Product [SPARSE]
  • Organization [SPARSE]

The most efficient way to receive a data file would be to have it sorted by Organization, Product, Category, Region, and then Account.  Data files load faster when the columns that hold the sparse members are sorted in reverse order of the sparse dimensions that exist in the outline.
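As a purely hypothetical illustration (the member names below are invented), a column-format file sorted this way keeps all of the rows for one sparse combination together; here the data columns are assumed to be Jan, Feb, and Mar, mapped by the load rule:

"Org_100" "Widget_A" "Retail" "East" "Beginning Balance" 100 110 120
"Org_100" "Widget_A" "Retail" "East" "Ending Balance"    110 120 130
"Org_100" "Widget_A" "Retail" "West" "Beginning Balance"  50  55  60
"Org_100" "Widget_A" "Retail" "West" "Ending Balance"     55  60  65
"Org_100" "Widget_B" "Retail" "East" "Beginning Balance"  80  85  90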

The reason the data loads faster is that each block of data is opened only one time.  If the data were sorted by the dense members first, every block would have to be opened multiple times.  If the same sparse member combination has 3,000 dense members with data, the block could be opened up to 3,000 times.

There are other important benefits as well.  When blocks are opened multiple times, the database becomes far more fragmented than it needs to be.  Fragmentation slows calculations and can also impact retrievals, which can lead to frustrated customers.

When the data is not sorted, every data load exacerbates any performance issues that may already exist.  So, whenever possible, sort the data load files by the last sparse dimension in the outline, then the second-to-last sparse dimension, and so on.  You may be pleasantly surprised at the benefits.

 

Everybody knows the quickest way from point A to point B is a straight line.  Everybody assumes that the path is traveled only one time – not back and forth, over and over again.  I see a lot of Essbase calculations and business rules, from experienced and novice developers, that go from point A to point B taking a straight line.  But, the calculation travels that line multiple times and is terribly inefficient.

Here is a simple example of a calculation.  Assume the Account dimension is dense, and the following members are all members in the Account dimension.  We will also assume there is a reason to store these values rather than making them dynamic calc member formulas.  Normally these would be embedded in a FIX statement so the calculation only executes on the appropriate blocks; to minimize confusion, the FIX is not included in the example.

"Average Balance" = ("Beginning Balance" + "Ending Balance") / 2;
"Average Headcount" = ("Beginning Headcount" + "Ending Headcount") / 2;
"Salaries" = "Average Headcount" * "Average Salaries";
"Taxes" = "Gross Income" * "Tax Rate";

One of the staples of writing an effective calculation is to minimize the number of times a single block is opened, updated, and closed.  Think of a block as a spreadsheet, with accounts in the rows, and the periods in the columns.  If 100 spreadsheets had to be updated, the most efficient way to update them would be to open one, update the four accounts above, then save and close the spreadsheet (rather than opening/editing/closing each spreadsheet 4 different times for each account).

I will preface this by stating that the following can behave differently in different versions.  The 11.1.x admin guide specifically states that the following is not accurate.  Due to the inconsistencies I have experienced, I always play it safe and assume the behavior below regardless of the version.

You might be surprised to know that the example above passes through every block four times.  First, it will pass through all the blocks and calculate Average Balance.  It will then go back and pass through the same blocks again, calculating Average Headcount.   This will occur two more times for Salaries and Taxes.  This is, theoretically, almost 4 times slower than passing through the blocks once.

The solution is very simple: place parentheses around the calculations.

(
"Average Balance" = ("Beginning Balance" + "Ending Balance") / 2;
"Average Headcount" = ("Beginning Headcount" + "Ending Headcount") / 2;
"Salaries" = "Average Headcount" * "Average Salaries";
"Taxes" = "Gross Income" * "Tax Rate";
)

This will force all four accounts to be calculated at the same time.  The block will be opened, all four accounts will be calculated and the block will be saved.

If you are new to this concept, you have probably done this without even knowing you were doing it.  When an IF statement is written, what follows the anchor member?  An open parenthesis.  And the ENDIF is followed by a close parenthesis.  There is your calculation block!

"East"
(IF(@ISMBR("East"))
    "East" = "East" * 1.1;
ENDIF)

I have seen this very simple change drastically improve calculations.  Go back to a calculation that can use blocks and test it.  I bet you will be very pleased with the improvement.

 

 

Almost every planning or forecasting application will have some type of allocation based on a driver or rate that is loaded at a global level.  Sometimes these rates are a textbook example of moving data from one department to another based on a driver, and sometimes they are far more complicated. Many times, whether it is an allocation or a calculation, rates are entered (or loaded) at a higher level than the data they are applied to.

A very simple example of this would be a tax rate.  In most situations, the tax rate is loaded globally and applied to all the departments and business units (as well as level 0 members of the other dimensions).  It may be loaded to “No Department”, “No Business Unit”, and a generic member in the other custom dimensions that exist.

If a user needs the tax rate, in the example above, they have to pull “No Department” and “No Business Unit.”  Typically, users don’t want to pull different members of the dimension to get a rate that corresponds to the data (Total Department for taxes, and No Department for the rate).  They want to see the tax rate at Total Department, Total Business Unit, and everywhere in-between.

There are a number of ways to improve the experience for the user.  An effective solution is to have two members for each rate.  One is stored and one is dynamic.  There is no adverse effect on the number of blocks, or the block size.  The input members can be grouped in a hierarchy that is rarely accessed, and the dynamic member can be housed in a statistics hierarchy.

Using tax rate in the example above, create a “Tax Rate Input” member.  Add this to a hierarchy called “Rate Input Members”.  Any time data is loaded for the tax rate, it is loaded to Tax Rate Input, No Department, No Business Unit, etc.  Under the statistics/memo hierarchy, create a dynamic member called “Tax Rate”.  “Tax Rate” would be the member referenced in reports.  The formula for this member includes a cross-dimensional reference to the “Tax Rate Input” member, and would look something like this.

"No Department"->"No Business Unit"->"Tax Rate Input";

When a user retrieves “Tax Rate”, it always returns the rate that is loaded to “No Department,” “No Business Unit,” and “Tax Rate Input,” no matter what department or business unit the report is set to.  The effort involved in creating reports in Financial Reporting or Smart View now becomes easier!

There is an added bonus for the system administrators.  Any calculation that uses the rate (you know, the ones with multi-line cross-dimensional references to the rates) is a whole lot easier to write, and a whole lot easier to read because the cross-dimensional references no longer exist.
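As a quick sketch of the difference (using the hypothetical member names from above), a rule that previously had to spell out the cross-dimensional reference every time can now reference the dynamic member directly:

/* Before: cross-dimensional reference to the stored input rate */
"Taxes" = "Gross Income" * "Tax Rate Input"->"No Department"->"No Business Unit";

/* After: the dynamic "Tax Rate" member carries the cross-dim reference in its own formula */
"Taxes" = "Gross Income" * "Tax Rate";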

Before you move the application to production, make sure to set the consolidation method for the input rate members to “Never.”  Don’t expect this change to make great improvements in performance, but it will cause aggregations to ignore these members when consolidating the hierarchies.  A more important benefit is that users won’t be confused if they ever do look at the input rates at a rolled-up level.  The ONLY place they would see the rate would be at level 0, and it would be an accurate reflection of the rate.

Note:  It is recommended to create member names without spaces.  The examples above ignored this rule in an effort to create an article that is more readable.

 

Working with people new to Essbase every three to six months, I am always looking for ways to show users their hierarchies effectively. Many of them don’t have access to Essbase Administration Services or EPMA, so I always fall back to Excel, both as a distribution method and as documentation, to show hierarchies.

Expanding hierarchies to all descendants is a great way to show small hierarchies, but I am always asked to make it a collapsible hierarchy using the Excel grouping feature. The challenge of doing this manually for a hierarchy with thousands of members is that it is extremely time consuming and very error prone.

The following script can be added to any workbook to automate this effort.

Sub CreateOutline()
    Dim cell As Range
    Dim iCount As Integer
    For Each cell In Selection
        'Count the spaces in front of the member name
        'and divide by 5 (one indent level)
        iCount = (Len(cell.Value) - Len(LTrim(cell.Value))) / 5
        'Only group indented rows; Excel supports a maximum of
        'eight outline levels, so anything deeper is ignored
        If iCount > 0 And iCount <= 8 Then cell.EntireRow.OutlineLevel = iCount
    Next cell
    MsgBox "Completed"
End Sub

Setup

First, this subroutine has to be added to a workbook.  Open the Visual Basic Editor, right-click on the workbook in the Project Explorer window, and add a new module. Paste the code above into the new module.  The editor is in different places in different versions of Excel.  In Excel 2007 and 2010, the Developer ribbon is not visible by default.  To make it visible, go to the navigator wheel and click Excel Options.  There is a checkbox named Show Developer Ribbon that will make the Developer ribbon viewable.

How To Use

First, open the member selection option in the Essbase add-in or Smart View, select the parent, and add all of its descendants.  Alternatively, change the drill type to all descendants and zoom in on the top member of the hierarchy.

Retrieve, or refresh, the data, and make sure the indentation option is set so that children are indented.  Now highlight the range of cells that holds the hierarchy/dimension to which the grouping should be applied; this should be cells in a single column of the worksheet.  Open the code editor, place the cursor inside the subroutine added above, and click the green play triangle in the toolbar to execute the script.  When it finishes, go back to the worksheet and the hierarchy will be grouped.

Excel limits the number of grouping levels to eight. If the hierarchy has more than eight levels, the deeper levels will be ignored. The hierarchy can now be expanded and collapsed for viewing.

Shortcut keys or toolbar buttons can be assigned to execute this function if it is used frequently. If you are interested in doing that, there are plenty of how-to articles on the topic, and a quick web search will get you started.
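As a minimal sketch, a shortcut key can be assigned in the ThisWorkbook module's Workbook_Open event (the Ctrl+Shift+O combination here is an arbitrary choice):

Private Sub Workbook_Open()
    'Assign Ctrl+Shift+O to run the CreateOutline macro
    Application.OnKey "^+o", "CreateOutline"
End Sub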

So, the next time you need to explain a hierarchy in Essbase, or distribute it in a common format, hopefully this script will help.

 

Introduction

Many companies have in-depth working knowledge of Hyperion Essbase and are looking to take their enterprise reporting capabilities to the next level. Companies typically have specific processes and calculations that set them apart in their industry. However, they are often limited to the basic reporting capabilities provided by the standard functions in Essbase. Additionally, complex operations can quickly become arduous using Calculation Scripts and Business Rules. This post will demonstrate how to easily build Custom Java Routines to extend Essbase and dramatically reduce development time.

Complete details will be provided on how to implement a simple customized logging function for use in Calculation Scripts and Business Rules. Essbase’s streamlined, parallel nature makes it difficult for application developers to trace execution line by line. By using Java to implement a custom logging routine, you can write personalized log entries from within your Essbase scripts. Consequently, developers can add tracing to their scripts and quickly determine how Essbase is approaching each calculation. Application developers can see exactly how the script is being executed – providing quicker debugging and faster development time. One particularly powerful use is to help determine whether blocks are being created within FIX statements.

The first step to integrating a custom Java routine into Essbase is to write some simple Java code. It is very easy – the code does not have to include any special APIs for Essbase.  During development, a few issues were encountered where Essbase was a bit picky about how the code is written.  Here are a few tips to help in getting started. These tips were gathered while doing real development; it is best to follow them at first, and you can revisit them later to find out what works for you.

  • Do not include the code in a package such as “com.company.product_name” – remove the “package” declaration at the top of the code
  • Do not use the keyword “this” to refer to variables
  • Do not overload methods
  • Set all methods and variables to static

With these provisions in mind, the following code can be written to implement a custom logging routine.

CustomLoggerV2.java

import java.io.FileWriter;
import java.util.Calendar;

public class CustomLoggerV2
{
    private static String logFile;

    public static int logFilterLevel;

    public static void setLogFilterLevel(int logFilterLevel2)
    {
        logFilterLevel = logFilterLevel2;
    }

    public static void setLogFilename(String logFilename)
    {
        logFilterLevel = 0;
        logFile = logFilename;
    }

    public static synchronized void customLog(int logLevel, String message)
    {
        log(logLevel, message);
    }

    private static synchronized void log(int logLevel, String message)
    {
        // Skip messages below the current filter level
        if (logLevel < logFilterLevel)
        {
            return;
        }

        try
        {
            // Append a timestamped entry to the log file
            Calendar c = Calendar.getInstance();
            FileWriter fw = new FileWriter(logFile, true);
            fw.write(c.getTime() + ": " + message + "\n");
            fw.close();
        }
        catch (Exception e)
        {
            System.out.println("Error, cannot open " + logFile);
            e.printStackTrace();
        }
    }
}

The code implements three public methods:

  • setLogFilterLevel(int logFilterLevel) – sets the minimum message level to log (think ERROR=100, WARN=90, INFO=70, DEBUG=0), so you can easily change the verbosity of the output
  • setLogFilename(String filename) – sets the full path to the log file you wish to use
  • customLog(int logLevel, String message) – writes the log message with the indicated priority
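Before wiring anything into Essbase, it can be worth exercising the class from a plain main method. The sketch below (the class name and log path are arbitrary) compiles alongside CustomLoggerV2.java and writes two entries to the test log:

public class CustomLoggerV2Test
{
    public static void main(String[] args)
    {
        // Point the logger at a test file; setLogFilename also resets the filter level to 0
        CustomLoggerV2.setLogFilename("C:\\Temp\\CustomLoggerTest.log");
        CustomLoggerV2.setLogFilterLevel(50);

        // Below the filter level of 50, so this one is skipped
        CustomLoggerV2.customLog(0, "This is a debug message");
        // At or above the filter level, so these are written
        CustomLoggerV2.customLog(50, "This is a normal message");
        CustomLoggerV2.customLog(100, "This is an important message");
    }
}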

The next step is to package up the code above. It is important to use the same version of Java that is running your Essbase instance. To find the version, look for the JRE being used within the environment, for instance, Hyperion\common\JRE\Sun\1.5.0\bin. To obtain the specific revision, open a command prompt, cd to the bin directory, and run "java -version".

E:\Hyperion\common\JRE\Sun\1.5.0\bin>java -version
java version "1.5.0_11"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_11-b03)
Java HotSpot(TM) Client VM (build 1.5.0_11-b03, mixed mode)

To compile the code, a JDK is required, which contains the javac command. Hyperion only packages the JRE, meaning you will have to download the correct JDK in order to compile the code. Older versions of the Java JDK can be found on Oracle (Sun)’s web site. Once you have obtained the correct version of the JDK, compile and package up the code:

javac CustomLoggerV2.java

jar -cf CustomLoggerV2.jar CustomLoggerV2.class

Next, copy the CustomLoggerV2.jar file into the Essbase file structure:

Copy CustomLoggerV2.jar into the E:\Hyperion\products\Essbase\EssbaseServer\java\udf folder. If the udf folder does not already exist, create it.

Now it is time to start including the Java class within Essbase. Essbase runs within its own JVM and therefore has its own Java security. In the example above, we are writing to a local log file, which will violate the default security policy set up in the udf.policy file. The file is usually found in Hyperion\products\Essbase\EssbaseServer\java. The simplest way to get around the security concerns for development purposes is to remove the comment from the last line of the file, which effectively includes the directive “permission java.security.AllPermission”:

permission java.util.PropertyPermission "java.vm.version", "read";
permission java.util.PropertyPermission "java.vm.vendor", "read";
permission java.util.PropertyPermission "java.vm.name", "read";

// Uncomment the following line if you want to remove all restrictions
permission java.security.AllPermission;
};

Now that the Essbase security policy has been updated and the jar file is in place, a restart of the Essbase process is required to register the changes. Please restart Essbase now.

The final step is to run some MaxL statements to register the public Java methods with Essbase.

CustomLoggerV2.mxl

create or replace function '@JCustomLoggerV2_setLogFilename'
as 'CustomLoggerV2.setLogFilename(String)'
spec '@JCustomLoggerV2.setLogFilename(absolute file name)'
comment 'Nicholas King'
with property runtime;

create or replace function '@JCustomLoggerV2_customLog'
as 'CustomLoggerV2.customLog(int, String)'
spec '@JCustomLoggerV2.customLog(log level, log message)'
comment 'Nicholas King'
with property runtime;

create or replace function '@JCustomLoggerV2_setLogFilterLevel'
as 'CustomLoggerV2.setLogFilterLevel(int)'
spec '@JCustomLoggerV2.setLogFilterLevel(filter level)'
comment 'Nicholas King'
with property runtime;

One final thing… In order to run a custom Java function, the result has to be stored in an Essbase member. This is true even if there is no use for the return value, as in this case, where the Java methods return nothing. To get around this, create a new Essbase member called “No Measure” somewhere within your Essbase outline. This acts as a dummy member intended only to receive the return value, if any, of the Java methods. An example is shown below.

Sample Calc Script or Business Rule to Invoke the Logger

//ESS_LOCALE English_UnitedStates.Latin1@Binary

/* Set up the logger */
/* Fix on something so it runs only once */
FIX (Actual, Texas, "100-10")
    "No Measure" = @JCustomLoggerV2_setLogFilename("E:\CustomEssbaseLog.log");
    "No Measure" = @JCustomLoggerV2_setLogFilterLevel(50);
ENDFIX;

/* In your script, do some actual logging */
FIX (Actual, Texas, "100-10")
    /* Won't be displayed */
    "No Measure" = @JCustomLoggerV2_customLog(0, "This is a debug message");
    /* Will be displayed */
    "No Measure" = @JCustomLoggerV2_customLog(50, "This is a normal message");
    "No Measure" = @JCustomLoggerV2_customLog(100, "This is an important message");
ENDFIX;

The result of running the script is that log entries are written to the log file E:\CustomEssbaseLog.log:

Mon Feb 21 01:30:25 EST 2011: This is a normal message

Mon Feb 21 01:30:25 EST 2011: This is an important message

Troubleshooting Tips

A very common error you may receive is:

Error: 1200324 Error compiling formula for [No Measure] (line 8): operator expected after [@JCustomLogger_customLog]

This is a generic error indicating that something in your custom function is not registered properly.  Unfortunately, there is not a lot of detailed log information at this point to help discover the problem. If you receive this message, a few things might help:

  • Retrace your steps – carefully review all instructions above
  • Check that the correct version of Java was used to compile the class file and package the jar
  • Check the jar is in the correct “udf” folder in Essbase
  • Check the syntax of the MAXL to register the functions is correct
  • Simplify your script as much as possible to reduce the possibility of syntax errors
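One additional sanity check: MaxL can list the functions that Essbase believes are registered, so something like the statement below (run after logging in) should show the three @JCustomLoggerV2 functions if the create statements succeeded:

display function;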

Conclusion

This example shows how to create a custom Java based logger integrated into Essbase. The possibilities are endless – anything that can be done in Java can be added to Essbase. You can create development aids, or even read/modify the values within the cube. For instance, this model has successfully been used to perform complex financial calculations within Hyperion Planning Forms using Business Rules.  It could also be used for integrating Web Services with your cube by reading or writing cube data and interacting with an enterprise Web Service.

 

It is possible for a database in Essbase to become corrupt.  This can be caused by server hangs, software glitches, and a variety of other reasons.  Although infrequent, if a database cannot be loaded for any reason and needs to be restored, the following actions can provide a quick resolution.  Keep in mind that this will remove the data, which will need to be re-imported from a backup export.

Before performing this, verify that the database is not attempting to recover.  To determine if this is occurring, open the application log file.  If it states that it is recovering free space, be patient, as the database may correct itself.

File Structure

Essbase has a simple file structure that it follows.  It can vary with each application depending on the options used.  The area to focus on for this process is below.  The application and database that is being restored would take the place of appname and dbname.

Hyperion\Products\Essbase\EssbaseServer\App\AppName\DbName

Restoring To A Usable State

In this directory, files with the following extensions will need to be removed.  This will delete all of the data  and temporary settings that are causing the application to function improperly.  It will NOT delete the database outline, calc scripts, load rules, or business rules.

  • .ind (index files)
  • .pag (data files)
  • .esm (Essbase kernel file that manages pointers to data blocks, and contains control information that is used for database recovery)
  • .tct (Essbase database transaction control file that manages all commits of data and follows and maintains all transactions)

After these files are removed, verify that the application and database are functioning.  This can be done in Essbase Administration Services by starting the application.  If the application doesn’t start, more research will have to be performed. If the application loads, import the most recent data backup and run an aggregation.
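If MaxL is available, the reload and aggregation might look something like this sketch (the application, database, file, and calc script names are placeholders):

import database Sample.Basic data from data_file 'sample_level0.txt' on error write to 'sample_load.err';
execute calculation Sample.Basic.aggall;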

There are a number of other possible file types in this directory.  Below is some information that may be helpful.

Audit Logs

  • .alg:  Spreadsheet audit historical information
  • .atx:  Spreadsheet audit transaction

Temporary Files

  • .ddm:  Temporary partitioning file
  • .ddn:  Temporary partitioning file
  • .esn:  Temporary Essbase kernel file
  • .esr:  Temporary database root file
  • .inn:  Temporary Essbase index file
  • .otm:  Temporary Essbase outline file
  • .otn:  Temporary Essbase outline file
  • .oto:  Temporary Essbase outline file
  • .pan:  Temporary Essbase database data (page) file
  • .tcu:  Temporary database transaction control file

Objects

  • .csc:  Essbase calculation script
  • .mxl:  MaxL script file (saved in Administration Services)
  • .otl:  Essbase outline file
  • .rep:  Essbase report script
  • .rul:  Essbase rules file
  • .scr:  Essbase ESSCMD script

Other

  • .apb:  Backup of application file
  • .app:  Application file, defining the name and location of the application and other application settings
  • .arc:  Archive file
  • .chg:  Outline synchronization change file
  • .db:  Database file, defining the name, location, and other database settings
  • .dbb:  Backup of database file
  • .ddb:  Partitioning definition file
  • .log:  Server or application log
  • .lro:  LRO file that is linked to a data cell
  • .lst:  Cascade table of contents or list of files to back up
  • .ocl:  Database change log
  • .ocn:  Incremental restructuring file
  • .oco:  Incremental restructuring file
  • .olb:  Backup of outline change log
  • .olg:  Outline change log
  • .sel:  Saved member select file
  • .trg:  Trigger definition file, in XML (Extensible Markup Language) format
  • .txt:  Text file, such as a data file to load or a text document to link as an LRO
  • .xcp:  Exception error log
  • .xls:  Microsoft Excel file
 

If you have users who rely on Smart View to pull data from your Essbase and/or Planning applications, many of them may have large spreadsheets.  One way to improve the perceived performance of Essbase is to change the way Smart View (client side) communicates with the server.

APS, Planning, and HFM have the ability to take advantage of compression during this communication.  For large queries, both retrieving and submitting data, the performance improvement can be significant.

Compression is not turned on by default for APS and Planning.  The good news is that turning it on is relatively simple.

Find the essbase.properties file on the APS server and change the following property to false.  The path to this file is different in versions 9 and 11; in version 11, the path is \Products\Essbase\aps\bin.

smartview.webservice.gzip.compression.disable=false

Open the Hyperion Planning application in question and change SMARTVIEW_COMPRESSION_THRESHOLD in the System Properties (Administration/Manage Properties – System Properties tab) to a value no less than 1.  This threshold is the minimum size of a query for which compression will be used, so a value of 1000 means compression is used for anything greater than 1,000 bytes.

For smaller queries, compression may not be necessary.  It may even decrease performance because of the overhead to compress and uncompress the data.  Every environment is different so there is no “right” answer as to what this value should be.

If you have used compression, please share your experiences.

 

Changes to an Essbase outline cause changes to the Essbase index and data files, regardless of the method (Essbase Administration Services, Hyperion Planning database refreshes, or from a script).

Changes that require restructuring the database are time-consuming (unless data is discarded before restructuring).  Understanding the types of restructures and what causes them can help database owners more effectively manage the impacts to users.

TYPES OF RESTRUCTURES

Essbase initiates an implicit restructure after an outline is changed, whether the change is made in the outline editor, through an automated build, or in some other fashion, such as a Hyperion Planning database refresh.  The type of restructure that is performed depends on the type of changes made to the outline.

DENSE RESTRUCTURE:  If a member of a dense dimension is moved, deleted, or added, Essbase restructures the blocks in the data files and creates new data files. When Essbase restructures the data blocks, it regenerates the index automatically so that index entries point to the new data blocks. Empty blocks are not removed. Essbase marks all restructured blocks as dirty, so after a dense restructure you must recalculate the database. Dense restructuring, the most time-consuming of the restructures, can take a long time to complete for large databases.

SPARSE RESTRUCTURE:  If a member of a sparse dimension is moved, deleted, or added, Essbase restructures the index and creates new index files. Restructuring the index is relatively fast; the time required depends on the index size.

Sparse restructures are typically fast, but the time depends on the size of the index file(s); they are much faster than dense restructures.

OUTLINE ONLY:  If a change affects only the database outline, Essbase does not restructure the index or data files. Member name changes, creation of aliases, and dynamic calculation formula changes are examples of changes that affect only the database outline.

Outline restructures are very quick and typically take seconds.

Explicit restructures occur when a user requests one.  This can be done in Essbase Administration Services or via MaxL (and ESSCMD for those of you who still use it) and forces a full restructure (see dense restructure above).  It is worth noting that this also removes empty blocks.
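In MaxL, an explicit restructure looks something like this (the application and database names are placeholders):

alter database Sample.Basic force restructure;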

CALCULATING IMPLICATIONS AFTER RESTRUCTURES

When a restructure occurs, every block that is impacted is tagged as dirty.  If Intelligent Calculations are used in the environment, they don’t provide any value when a dense restructure occurs as all blocks will be calculated.  When member names or formulas are changed, the block is not tagged as dirty.

WHAT DICTATES THE RESTRUCTURE TYPE

The following lists show which types of outline changes force which type of restructure.  A dense restructure is the most time-consuming.

DENSE AND SPARSE

  • Defining a regular dense dimension member as dynamic calc
  • Defining a sparse dimension regular member as dynamic calc or dynamic calc and store
  • Defining a dense dimension dynamic calc member as regular member
  • Adding, deleting, or moving dense dimension dynamic calc and store members
  • Changing dense-sparse properties [Calc Required]
  • Changing a label only property [Calc Required]
  • Changing a shared member property [Calc Required]
  • Changing the order of dimensions [Calc Required]

DENSE (DATA FILES)

  • Deleting members from a dense dimension  [Calc Required]
  • Adding members to a dense dimension
  • Defining a dense dynamic calc member as dynamic calc and store member

SPARSE (INDEX)

  • Adding members to a sparse dimension
  • Moving members (excluding shared members) in a sparse dimension
  • Defining a dense dynamic calc member as dynamic calc and store
  • Adding, deleting, or moving a sparse dimension dynamic calc member
  • Adding, deleting, or moving a sparse dimension dynamic calc and store member
  • Adding, deleting, or moving a dense dimension dynamic calc member
  • Changing the order of two sparse dimensions

NO RESTRUCTURE OCCURS

  • Deleting members of a sparse dimension [Calc Required]
  • Deleting members of an attribute dimension
  • Deleting shared members from a sparse or dense dimension [Calc Required]
  • Adding members to an attribute dimension
  • Adding shared members to a sparse or dense dimension
  • Moving a member in an attribute dimension
  • Renaming a member
  • Changing a member formula [Calc Required]
  • Defining a sparse dynamic calc member as dynamic calc and store member
  • Defining a dense or sparse dynamic calc and store member as dynamic calc
  • Defining a regular dense dimension member as dynamic calc and store
  • Defining a sparse dimension dynamic calc and store member or dynamic calc member as regular member
  • Defining a dense dimension dynamic calc and store member as regular member
  • Changing properties other than dense-sparse, label, or shared [Calc Required]
  • Changing the order of an attribute dimension
  • Creating, deleting, clearing, renaming, or copying an alias table
  • Importing an alias table
  • Setting a member alias
  • Changing the case-sensitive setting
  • Naming a level or generation
  • Creating, changing, or deleting a UDA

WHAT DOES THIS MEAN

Understanding this can help users and administrators manage applications to better meet the needs of all those involved.  When designing an application, knowledge of this topic can be instrumental in the success of the application.  Here are some things to keep in mind.

  • When updating an outline or refreshing a Planning application, it may be faster to export level 0 (or input level) data, clear the data, perform the update, and then reload and aggregate the export when the changes cause a dense restructure.
  • For dimensions that are updated frequently, it may be beneficial to define those dimensions as sparse.  Changes to sparse dimensions typically require only restructures to the index file(s), which are much faster.
  • If frequent changes are required, enabling incremental restructuring may make sense.  This defers dense restructures: the restructure happens block by block, the first time each data block is used.  The cost is that calculations will trigger restructures for all the blocks involved, and calculation performance will degrade.
  • Setting the isolation level to committed access may increase memory and time requirements for database restructure.  Consider setting the isolation level to uncommitted access before a database restructure.
  • If multiple people have access to change the outline, outline logging may be useful.  This can be turned on by adding OUTLINECHANGELOG = TRUE in the essbase.cfg.
  • Monitoring the progress of a restructure is possible when access to the server is granted.  Both sparse and dense restructures create temporary files that mirror the index and data files.  Data exists in the .pag files and indexes are stored in the .ind files; as the restructure occurs, equivalent temporary files are created for each (.pan for data files and .inn for index files).  In total, the restructure should decrease the size of the .ind and .pag files, but the .pan and .inn files can be used to get a general idea of the percentage of completion.

 

 

Audit logs, or SSAUDIT, are a crucial component of backing up Hyperion Essbase applications in many environments.  It is the equivalent of a transaction log in a relational database.  To use this effectively, the audit log has to consistently log database changes.

If the audit feature in Hyperion Essbase is used, the following information is absolutely critical to know to effectively manage this feature.  If the application is on a shared environment where multiple groups/people are administering the applications, it is critical that everybody understands this, and plays nicely together!

The audit logs are turned off without any notification when the following actions occur on an Essbase server.  To turn the audit feature back on, the Essbase application in question has to be stopped and started.  It is not required to cycle the Essbase service.

  • Any operation that causes a database restructure.
  • The creation of a new application
  • The creation of a new database
  • Copying a database
  • Renaming a database

After any of these operations occurs on the server, stop and start all applications that use the audit feature.
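A MaxL sketch of that stop/start for a single application (the application name is a placeholder):

alter system unload application Sample;
alter system load application Sample;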