
Creating Hierarchies & Groupings In Excel – One Click Solution

A lot of users like to see hierarchies in Excel and build groupings around these hierarchies so they can be collapsed and expanded easily.  It is not a huge deal to do this for things that don’t change a lot, like months rolling to a quarter, but it can be extremely cumbersome to maintain for organizational or account hierarchies that are large or change frequently.

By adding some VBA code (a macro) to your workbook, managing groupings can be completely automated.  This can be customized for a plethora of different scenarios.  Below are two examples that Hyperion users will encounter.  One caveat is that Excel limits the number of grouping levels to 8; if the worksheet has more than 8 levels, the following logic will not provide the expected result.

Creating a Hierarchy Based On Excel Indents

If a spreadsheet exists where the hierarchy is created with the indent feature of Excel (not multiple columns), select the range the groupings should be applied to and execute the following script.  Basically, this loops through the cells you have selected and creates the groupings based on the number of indents in each cell.

Sub CreateGroupingsOnIndents()

Dim cell As Range
For Each cell In Selection
    If cell.IndentLevel <> 0 Then
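        ' Excel supports a maximum of 8 outline levels, so indents deeper than 8 will not group as expected (see the caveat above)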
        cell.EntireRow.OutlineLevel = cell.IndentLevel
    Else
        cell.EntireRow.ClearOutline
    End If
Next

End Sub

Creating a Hierarchy Based On SmartView/Excel Add-In Indents

When retrieving from Essbase, cells are indented by adding 5 spaces to the member name for each level.  By taking the length of the cell, subtracting the length of the member name without its leading spaces (which gives the number of leading spaces), and dividing the result by 5, the level of the indent is identified.  Select the cells with the member names and execute the following.

Sub CreateGroupingsOnSpaces()

Dim cell As Range
Dim iLength1 As Integer
Dim iLength2 As Integer
Dim iIndent As Integer

For Each cell In Selection
    iLength1 = Len(cell.Value)
    iLength2 = Len(LTrim(cell.Value))
    iIndent = (iLength1 - iLength2) / 5
    If iIndent <> 0 Then
        cell.EntireRow.OutlineLevel = iIndent
    Else
        cell.EntireRow.ClearOutline
    End If
Next

End Sub

Setup a Module

If you are unfamiliar with adding custom code to an Excel workbook, follow the steps below.

Excel 2003 and earlier

  1. Select Tools/Macro/Visual Basic Editor
  2. Right click on the workbook in the Project window, and select Insert/Module
  3. Expand the module folder and open the new module (likely module1)
  4. Paste the example above in this window to the right
  5. Execute it by pressing F5 or clicking the green play triangle in the toolbar

Excel 2007 and later

  1. Click the Office Button (the navigation wheel in the upper left), select Excel Options, and check the “Show Developer tab in the Ribbon” checkbox on the Popular tab
  2. Select the Developer Ribbon and click Visual Basic
  3. Right click on the workbook in the Project window, and select Insert/Module
  4. Expand the module folder and open the new module (likely module1)
  5. Paste the example above in this window to the right
  6. Execute it by pressing F5 or clicking the green play triangle in the toolbar

These can also be associated to a custom menu or toolbar if you choose to take the extra step!
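For example, a short macro along the following lines can add the grouping routine to a custom menu so it is always one click away. This is only a sketch; the button caption and the OnAction macro name are placeholders to adjust for your workbook, and in Excel 2007 and later the button appears under the Add-Ins ribbon tab.

Sub AddGroupingMenuButton()

    ' Adds a temporary button that runs the grouping macro above
    Dim btn As CommandBarButton
    Set btn = Application.CommandBars("Worksheet Menu Bar").Controls.Add( _
        Type:=msoControlButton, Temporary:=True)
    With btn
        .Caption = "Create Groupings"
        .Style = msoButtonCaption
        .OnAction = "CreateGroupingsOnIndents"
    End With

End Sub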




Learning Life Cycle Management (LCM): Command Line Security Synchronization

The purpose of this article is to introduce the command line Life Cycle Management (LCM) utility in Oracle EPM. The LCM tool can be used to export and import objects found within the Oracle EPM environment.  This includes Security, Essbase, Hyperion Planning, Financial Management, etc.  As one gets more familiar with LCM, one comes to realize how powerful the tool is and how empty life without LCM was. Without LCM, some of the more detailed artifacts within an application were difficult to move between environments.  LCM provides a centralized mechanism for exporting and importing nearly all of the objects within an Oracle EPM application or module. The table below gives an idea of all the facets of LCM.

 

Application Artifacts by Module

Shared Services: User and Group Provisioning; Projects/Application Metadata
Essbase: Files (.csc, .rpt, .otl, .rul); Data; Filters; Partitions; Index and Page files (drive letters); Application and Database properties; Security
EAS/Business Rules: Rules; Locations; Sequences; Projects; Security
Hyperion Planning: Forms; Dimensions; Application Properties; Security
Hyperion Financial Management: Metadata; Data; Journals; Forms/Grids; Rules; Lists; Security
Financial Data Quality Management: Maps; Security; Data; Metadata; Scripts
Reporting and Analysis (Workspace): Reports; Files; Database Connections; Security

 

The LCM tool is integrated into the Shared Services web interface.  It can be found under the Application Groups tab. Within the application groups there are three main areas of interest:

  1. Foundation – includes Shared Services security such as Users/Groups and Provisioning.
  2. File System – This is where the exported files will go by default. They are stored server side, on the Shared Services server, in the location E:\Hyperion\common\import_export.
    Under this main folder, the contents are broken out by the user account that performed the export. Within the export folder, there is an “info” folder and a “resource” folder. The info folder provides an XML listing of the artifacts contained within the export. The resource folder contains the actual objects that were exported.

    The LCM command line tool provides more flexibility because it can be installed on any machine and its results can be directed to any local folder. This is very useful when the Shared Services node is a Unix machine and the LCM users are unfamiliar with Unix. Simply install the LCM Command Line Utility on a Windows machine and redirect its output to a local Windows folder using the -local command line option.

  3. Products and Applications – Each registered product will be listed and provides a mechanism to export and import the respective objects for the associated applications (Essbase, Planning, etc.).

 

Going Command Line

The Shared Services LCM GUI is a great way to become familiar with the LCM tool. However, when it is time to start automating LCM tasks and debugging issues, the command line LCM utility is very helpful. To get started, the LCM command line tool requires a single argument: an XML file that contains the migration definition. The quickest way to obtain the XML file is to use the Shared Services LCM web interface to select the objects you wish to migrate, select Define Migration to pull up the LCM Migration Wizard, and follow the prompts until the last step. Two options are presented, “Execute Migration” or “Save Migration Definition”. Choose “Save Migration Definition” to save the migration definition to a local file.

 

That is pretty much all there is to it… move the XML migration definition file to the location where LCM is installed (for instance, \Hyperion\common\utilities\LCM\9.5.0.0\bin), open a command prompt, and run Utility.bat as indicated:

E:\Hyperion\common\utilities\LCM\9.5.0.0\bin>Utility.bat SampleExport.xml
Attempting to load Log Config File:../conf/log.xml
2011-03-20 11:50:49,015 INFO  - Executing package file - E:\Hyperion\common\util
ities\LCM\9.5.0.0\bin\SampleExport.xml
>>> Enter username - admin
>>> Enter Password----------
--2011-03-20 11:50:57,968 INFO  - Audit Client has been created for the server h
ttp://hyp13:58080/interop/Audit
2011-03-20 11:50:58,421 WARN  - Going to buffer response body of large or unknow
n size. Using getResponseBodyAsStream instead is recommended.
2011-03-20 11:51:03,421 INFO  - Audit Client has been created for the server htt
p://hyp13:58080/interop/Audit
2011-03-20 11:51:03,437 INFO  - MIGRATING ARTIFACTS FROM "Foundation/Shared Serv
ices" TO "/SampleExport"
2011-03-20 11:51:32,281 INFO  - Message after RemoteMigration execution - Succes
s. HSS log file is in - E:\Hyperion\common\utilities\LCM\9.5.0.0\logs\LCM_2011_0
3_20_11_50_48_0.log
2011-03-20 11:51:32,687 INFO  - Migration Status - Success

E:\Hyperion\common\utilities\LCM\9.5.0.0\bin>


LCM Example: Synchronizing Shared Services Security between Environments

LCM is often used to move objects and security between environments, such as from a development environment to a production environment. While LCM makes this easy, it is not as straightforward as simply running an export from one environment and importing into another. The reason is that LCM imports work in a “create/update” mode; in other words, the operations performed in LCM are typically additive in nature. While the typical LCM export/import would capture new users and new application provisioning, it will not handle removing user provisioning, removing or changing groups, or removing users from the system. This is an easy oversight, but over time it will cause security to drift out of sync between environments, which can lead to functional issues as well as security implications. At a high level, the steps to sync provisioning using LCM are:

  1. Export Users/Groups/Provisioning from Source Environment
  2. Export Users/Groups from Target Environment
  3. Delete Using Step 2 Results the Users/Groups in Target Environment
  4. Import Users/Groups/Provisioning into Target Environment

Essentially, Steps 1 and 4 are the typical export/import operations, where security is exported from one environment and imported into another. However, two additional steps are necessary. In Step 3, the users and groups in the target environment are deleted, removing their provisioning too. This leaves an empty, clean environment into which security can then be imported, ensuring no residual artifacts remain. To use the LCM delete operation, a list of items to be deleted must be supplied. This is where Step 2 comes in: a simple export of the users and groups in the target environment provides the list needed in Step 3 to delete the respective users and groups.

Below are some sample XML migration definitions for each step:

 

Step 1 – Export Users/Groups/Provisioning from Source Environment

Note: By default the results will be sent to the source Shared Services server in the “import_export” directory. You can use LCM to redirect the output to keep the results all in the same environment (the target system) by using the command line option [-local/-l] (run utility.bat without any command line options to see help for your version of LCM). Simply redirect the results into the local folder, \Hyperion\common\import_export, in the Target system.

<?xml version="1.0" encoding="UTF-8"?>
<Package name="web-migration" description="Migrating Shared Services to File System ">
    <LOCALE>en_US</LOCALE>
    <Connections>
        <ConnectionInfo name="MyHSS-Connection1" type="HSS" description="Hyperion Shared Service connection" url="http://sourceSvr:58080/interop" user="" password=""/>
        <ConnectionInfo name="FileSystem-Connection1" type="FileSystem" description="File system connection" HSSConnection="MyHSS-Connection1" filePath="/Step1UsersGroupsSource"/>
        <ConnectionInfo name="AppConnection2" type="Application" product="HUB" project="Foundation" application="Shared Services" HSSConnection="MyHSS-Connection1" description="Source Application"/>
    </Connections>
    <Tasks>
        <Task seqID="1">
            <Source connection="AppConnection2">
                <Options>
                    <optionInfo name="userFilter" value="*"/>
                    <optionInfo name="groupFilter" value="*"/>
                    <optionInfo name="roleFilter" value="*"/>
                </Options>
                <Artifact recursive="false" parentPath="/Native Directory" pattern="Users"/>
                <Artifact recursive="true" parentPath="/Native Directory/Assigned Roles" pattern="*"/>
                <Artifact recursive="false" parentPath="/Native Directory" pattern="Groups"/>
            </Source>
            <Target connection="FileSystem-Connection1">
                <Options/>
            </Target>
        </Task>
    </Tasks>
</Package>

Step 2 – Export Users / Groups from Target Environment

<?xml version="1.0" encoding="UTF-8"?>
<Package name="web-migration" description="Migrating Shared Services to File System ">
    <LOCALE>en_US</LOCALE>
    <Connections>
        <ConnectionInfo name="MyHSS-Connection1" type="HSS" description="Hyperion Shared Service connection" url="http://targetSvr:58080/interop" user="" password=""/>
        <ConnectionInfo name="FileSystem-Connection1" type="FileSystem" description="File system connection" HSSConnection="MyHSS-Connection1" filePath="/Step2UsersGroupsTarget"/>
        <ConnectionInfo name="AppConnection2" type="Application" product="HUB" project="Foundation" application="Shared Services" HSSConnection="MyHSS-Connection1" description="Source Application"/>
    </Connections>
    <Tasks>
        <Task seqID="1">
            <Source connection="AppConnection2">
                <Options>
                    <optionInfo name="userFilter" value="*"/>
                    <optionInfo name="groupFilter" value="*"/>
                </Options>
                <Artifact recursive="false" parentPath="/Native Directory" pattern="Users"/>
                <Artifact recursive="false" parentPath="/Native Directory" pattern="Groups"/>
            </Source>
            <Target connection="FileSystem-Connection1">
                <Options/>
            </Target>
        </Task>
    </Tasks>
</Package>

Step 3 – Delete Users/Groups in Target Environment

<?xml version="1.0" encoding="UTF-8"?>
<Package name="web-migration" description="Migrating File System to Shared Services">
    <LOCALE>en_US</LOCALE>
    <Connections>
        <ConnectionInfo name="MyHSS-Connection1" type="HSS" description="Hyperion Shared Service connection" url="http://targetSvr:58080/interop" user="" password=""/>
        <ConnectionInfo name="AppConnection1" type="Application" product="HUB" description="Destination Application" HSSConnection="MyHSS-Connection1" project="Foundation" application="Shared Services"/>
        <ConnectionInfo name="FileSystem-Connection2" type="FileSystem" HSSConnection="MyHSS-Connection1" filePath="/Step2UsersGroupsTarget" description="Source Application"/>
    </Connections>
    <Tasks>
        <Task seqID="1">
            <Source connection="FileSystem-Connection2">
                <Options/>
                <Artifact recursive="false" parentPath="/Native Directory" pattern="Users"/>
                <Artifact recursive="false" parentPath="/Native Directory" pattern="Groups"/>
            </Source>
            <Target connection="AppConnection1">
                <Options>
                    <optionInfo name="operation" value="delete"/>
                    <optionInfo name="maxerrors" value="100"/>
                </Options>
            </Target>
        </Task>
    </Tasks>
</Package>

Step 4 – Import Users and Groups into Clean Target Environment

This step assumes that Step 1 was redirected onto the target environment within the import_export directory. The respective folder, Step1UsersGroupsSource, can also be manually copied from the source to the target environment without using the redirection to a local folder technique.

<?xml version="1.0" encoding="UTF-8"?>
<Package name="web-migration" description="Migrating File System to Shared Services">
    <LOCALE>en_US</LOCALE>
    <Connections>
        <ConnectionInfo name="MyHSS-Connection1" type="HSS" description="Hyperion Shared Service connection" url="http://targetSvr:58080/interop" user="" password=""/>
        <ConnectionInfo name="AppConnection1" type="Application" product="HUB" description="Destination Application" HSSConnection="MyHSS-Connection1" project="Foundation" application="Shared Services"/>
        <ConnectionInfo name="FileSystem-Connection2" type="FileSystem" HSSConnection="MyHSS-Connection1" filePath="/Step1UsersGroupsSource" description="Source Application"/>
    </Connections>
    <Tasks>
        <Task seqID="1">
            <Source connection="FileSystem-Connection2">
                <Options/>
                <Artifact recursive="true" parentPath="/Native Directory" pattern="*"/>
            </Source>
            <Target connection="AppConnection1">
                <Options>
                    <optionInfo name="operation" value="create/update"/>
                    <optionInfo name="maxerrors" value="100"/>
                </Options>
            </Target>
        </Task>
    </Tasks>
</Package>

Troubleshooting with Command Line LCM

LCM can be a great tool when it works flawlessly. However, it can quickly become part of mission critical activities like promoting artifacts from development to production. Consequently, it is necessary to learn some troubleshooting skills to maintain business continuity using LCM.

  1. Review the output of the LCM operation. Usually it will provide some detail about the error that was received.
  2. Review the server side Shared Services LCM log: ORACLE_HOME\logs\SharedServices\SharedServices_LCM.log
  3. Turn on debugging for the command line LCM tool. In log.xml and hss-log.xml under
    E:\Hyperion\common\utilities\LCM\9.5.0.0\conf, change "info" to "debug" in the line
    <param name="Threshold" value="info" />
  4. Use Google and the Oracle Knowledge Base to search for more information.
  5. Try only a subset of the initial objects. For instance, Essbase can export a number of objects: Outline, Calc Scripts, Rule Files, Report Scripts, Substitution Variables, Location Aliases, and Security. Try one at a time to determine which part of the whole is failing.
  6. Restart the environment. LCM is an emerging technology and can sometimes just be in a bad state. I’ve seen countless LCM issues where bouncing the environment clears the issue up.
  7. Look for special characters that might be present in your data. LCM is a Java tool and uses XML and text files to transmit data. There are instances where special characters can mess up the parsing.
  8. Look for patches – as mentioned previously, LCM is an emerging technology and is still somewhat buggy (especially older versions). Check release notes in patches for enhancements/bug fixes in LCM.



Hyperion Troubleshooting and Debugging Guide Part 2 of 2

This section will talk about how to dive into debugging critical issues with Oracle EPM.

Start a Problem Log

The most useful habit to develop during issue resolution is to start a detailed log about the issue. Some problems can take days or weeks to resolve and require trying hundreds of prospective resolutions. It is easy for a “small” problem to become a long-winded issue, and it is hard to foresee when an issue will resemble the proverbial onion: keep peeling off layers and you find more and more to fix. If the problem log is created initially, all the important details can be captured. Additionally, it is much easier to bring others up to speed (such as management) and create support tickets when all of the information is documented. This log should include the error as the end user sees it, the error from any logs you are able to capture, screenshots, timestamps, and the things you have tried along with their results.

Reproduce the Issue

The first thing to find out is whether the issue is reproducible. It is very difficult to solve an issue that is not reproducible. Many errors are simply ‘glitches’ and may have been caused by a very improbable event, such as a database hiccup. For instance, a database problem propagates into the Oracle EPM system, forcing it into a bad state. Such a problem may never present itself again. Consequently, an initial step toward resolution is to restart the Oracle EPM services to bring them back into a ‘known state’. If the problem is not immediately reproducible after the restart, go back to the problem log and record everything you can. This type of issue will need to be profiled over a period of time to try to discover patterns if it occurs again.

The Numerous Logs

Once it is discovered that the issue is not a simple glitch, it is time to start digging. As mentioned previously, the first place to track down the cause of an issue is in the logs. The logs come in various forms. Here is a general breakdown of the log types:

General

Windows Event Viewer: This is helpful for general system related messages. Also, some modules built on Windows technology (DCOM) will log messages here, for example Financial Management (HFM) and Financial Data Quality Management (FDQM).
Application Logs: The application logs are generated by the Hyperion code itself. These often contain the most useful information.
Application Server Logs: This type of log pertains to a Java based web application. Most of the Hyperion modules with a web based front end have application server logs, which run within the WebLogic, Tomcat, or WebSphere container.
Web Server Logs: The web server controls the handoff of web requests between the Hyperion modules. The best way to use this log is to look for error codes (404, 401, etc.) and review the corresponding URL that was used to ensure it is correct. Sometimes it might be obvious that the URL in the web log has the wrong domain, points to the wrong server, or cannot resolve the context.

 

Start by reviewing the log for the product where the error is occurring. The Application Logs and Application Server Logs will be most useful at first. The goal is to find a useful error message that can be used in the next process to find a resolution to the problem.

Common Log Locations:

Unfortunately, the actual log locations change drastically between recent versions of Oracle/Hyperion products. As stated before, searching for *.log might be useful.

Example Application Server Logs:

Essbase Admin Services (Svr2): /Oracle/Middleware/user_projects/domains/EPMSystem/servers/EssbaseAdminServices0/logs
Workspace (Svr1): /Oracle/Middleware/user_projects/domains/EPMSystem/servers/FoundationServices0/logs
Financial Reporting (Svr1): /Oracle/Middleware/user_projects/domains/EPMSystem/servers/FinancialReporting0/logs
Analytic Provider Services (Svr2): /Oracle/Middleware/user_projects/domains/EPMSystem/servers/AnalyticProviderServices0/logs
Web Analysis (Svr1): /Oracle/Middleware/user_projects/domains/EPMSystem/servers/WebAnalysis0/logs

 

Example Application Logs

Reporting and Analysis Core (Svr3): /Oracle/Middleware/user_projects/epmsystem1/diagnostics/logs/ReportingAnalysis/
(for example, /Oracle/Middleware/user_projects/epmsystem1/diagnostics/logs/ReportingAnalysis/stdout_console_default.log)

Essbase (Svr4): /Oracle/Middleware/user_projects/epmsystem1/diagnostics/logs/essbase/

 

 

Sifting Through the Logs:

It helps to know which modules depend on each other in order to quickly pick out the respective log files to analyze. The basic idea is to determine which products are interacting and to review each log in detail for messages. It is important to review the logs of the product not only during runtime (as the issue is happening), but also during startup. Sometimes the fastest way to cut out the fluff is to stop the services, move or delete all the existing logs, and start the environment back up. This ensures any log messages are relevant to the issue. Otherwise, one has to sift through potentially large logs looking for timestamps to ensure relevance, which can be daunting.

Shared Services: Relational Database, MSAD/LDAP
Lifecycle Management (LCM): Shared Services, LCM Source/Target applications
Essbase: Shared Services
Hyperion Planning: Shared Services, Essbase, Business Rules, Relational Database per App
Business Rules: Shared Services, Hyperion Planning, Essbase, Relational Database (single database)
Hyperion Financial Management: Shared Services, Relational Database (single database), DCOM (Event Viewer)
Financial Data Quality Management (FDM): Shared Services, Relational Database per App, Adapters for Essbase, Planning, HFM, etc., DCOM (Event Viewer)
Strategic Finance: Shared Services, Relational Database (optional)
Data Relationship Management: Shared Services, Database Client, Adapters, DCOM (Event Viewer)

 

Found an Error Message!

After discovering the error message, the first thing to ask is: does this message make any sense? Try to use it within the context of your problem to solve the issue. Often, it is necessary to use external resources. Use resources like Google, the Oracle Support Knowledge Base, and the Oracle Forums to research further. Most often there will be information regarding your issue.

Note: If possible, do not search using the end user messages, i.e., what the user sees when encountering the error.  Rather, find a detailed message in the logs. The end user messages are usually very generalized and can provide misleading information because of the vast number of issues that might match the general error message.

If a useful error message still cannot be found, most of the Hyperion modules use a logging mechanism that can be switched into debug mode. The actual method differs by product; for instance, most modules use log4j, and there is often a .properties file where the logging level can be changed from “ERROR” or “WARN” to “DEBUG”. To enable debugging in Hyperion Planning: log into a Planning application, go to Administration -> Manage Properties, select the System tab, and add the property DEBUG_ENABLED with a value of True. After changing the logging level, the service will need to be restarted to reflect the changes. Turning on application debugging should provide more context clues around what the product is doing at the time and help pinpoint the error.

 

Nothing Found…

If these resources do not help, an Oracle support ticket may be required. Additionally, the Oracle Forums can be a good place to post a question.  When creating a support ticket or posting to a forum, include as much information as possible. This is where the problem log will come in handy.

This is a good time to look for updates and patches to the product. Check for patches and updates on http://support.oracle.com and read the release notes for anything matching your problem. Even if nothing comes up, some obscure errors can be solved by simply applying a patch; not all bugs will be in the release notes for the patch. Oracle’s hpatch process is pretty straightforward, but older environments might take some time to patch. Always read the entire release notes and installation instructions before applying a patch. Also, sometimes patches are not as proven as the initial installers, because some patches may have just been released and only tested with a handful of clients. So ensure there is a good backup process in case the patch causes unintended problems. The Oracle hpatch process has a back-out feature, but it is not always useful if the patch failed halfway through the installation.

Finally, the last part of troubleshooting is intuition. As more problems are encountered and resolved, one can become more confident in resolving upcoming issues. There is no way to have encountered every issue and know the resolution, so the best that you can do is arm yourself with a good knowledge of the architecture, have a set of best practices, and lots of patience for problem solving.




Oracle EPM Troubleshooting and Debugging Guide (Part 1 of 2)

This article will discuss some best practices around troubleshooting and debugging your Oracle EPM environment.

Oracle EPM (Hyperion) is a complex system composed of many modules that work together to perform different functions. Typically, in design, a modular approach is a best practice. However, the Oracle EPM modules were created with disparate architectures because they were acquired over time through the purchase of technology from many smaller companies. As a result, the modules interact only at a high level, through network APIs. The result is a complex system with many breaking points and error messages that can be less than informative. Consequently, troubleshooting and debugging Hyperion is an art form.

The fastest way to debug a critical problem is to have a good understanding of what components represent a known, “good” state.  Only by identifying the abnormal element in the system can one start to resolve a critical issue.

Below Are Best Practices to Follow to Document a Known, Good State

What Should Be Running?

The first question you should ask is, “Is everything up?” Typically you would start by checking the URLs you use for each product, ensuring you can log in. These URLs would typically include the load balancing name.

Example of End User/Administrative URLs

Shared Services http://hyperion.svr.com/interop
Hyperion Planning http://hyperion.svr.com/HyperionPlanning
Essbase Admin Services  http://hyperion.svr.com/easconsole/console.html
Workspace http://hyperion.svr.com/workspace
Web Analysis http://hyperion.svr.com/WebAnalysis
FDM http://hyperion.svr.com/HyperionFDM

Set Up Port Monitoring Software

You can also get a ton of info at a quick glance by setting up port monitoring software. Remember to include the relational database in your monitoring. Sometimes the DBA will take the database down unexpectedly, or the database user IDs will expire. Additionally, it is easy to have 20-30 Hyperion related services or processes running per environment, distributed among multiple hosts. It is too time consuming and error prone to check them all manually.

An Example of a Custom Java Based Port Monitor

To gather a list of ports within the environment there are a few aids to use for reference. Oracle EPM Version 11 includes a diagnostics web form that will show the status of some of the products. This can be found under the Windows Start Menu, under Programs -> Foundation. This might be a good place to get started.

Also, refer to the Oracle EPM Documentation, Install Start Here Document for typical port reference.

http://download.oracle.com/docs/cd/E17236_01/epm.1112/epm_install_start_here.pdf

An Example of Some Common JVM Ports

Application Server Cluster/Node(s) Port
Workspace svr01 45000
Web Analysis svr01 16000
Financial Reports svr01 8200
Shared Services svr01 58080
Analytic Provider Services svr04 13080
Hyperion Planning svr02 8300
Strategic Finance svr04 7750
Essbase Admin Services svr04 10080
EPMA svr02 19091

Stopping and Starting the Environment

Operational procedures are important too. This means having a reliable start and stop procedure for the environment. Given the complexity of the Hyperion environment, there is no excuse for problems occurring while bringing an environment back up after routine maintenance. One common issue is a service that did not start completely, which can be found by using the port monitoring method above. Also, sometimes the services do not fully stop, leaving residual processes that disrupt the restart procedure. This might include extra ESSSVR processes indicating an Essbase application did not stop properly, or a JVM process that is hung. As a precaution, check the port monitoring software after stopping the environment to ensure all the components are indeed down. It is time consuming when this type of issue is encountered. In a Windows environment, you might take the environment down through your normal process, then kill any remaining “Java” and “Hyperion” related processes using Task Manager. In a Unix environment, after stopping the environment, “kill -9 -1” can be used as a last resort. A reboot will always solve this type of issue; though not often actually needed, it might be faster as a last resort for the unskilled admin needing to fix unexplainable issues.

Functional Validation Script

Functional validation is a set of very simple actions that can be performed within a few minutes to validate the functionality of the environment. This is absolutely necessary to perform before handing the system back over to end users after routine maintenance.  This task can either be performed by the Hyperion admin or handed over to the help desk. It should, at a minimum, include running a few types of reports (HFM, FR, WA) and logging into Essbase, Planning, HFM, etc. As you discover other recurring issues, you may want to include more specific checks to ensure common “gotchas” are ironed out before handing the system over to the end users.

Knowledge of the Architecture

The Hyperion administrator has to be technologically savvy. However, many times Hyperion will tell you exactly what the problem is. The first place to look when encountering an issue is the logs! Take time to find and document the log files for every product in your environment.  Most administrators are a bit intimidated by the number of logs generated by the system, but this is the first place to look. Familiarize yourself with the various logs in the environment. This includes WebLogic logs, JVM logs, Hyperion logs, and operating system logs. If you are lost, a good place to start is to search for *.log, sort by date, and look at the directory for the product name. You might even keep a copy of each log after starting the services successfully so that you can compare against a good state if there is a problem.

Additionally, it is necessary to have context around how the environment works. This can be achieved by reading the product manuals, looking at architecture diagrams, and attending training sessions. Get involved in the environment. Try to understand both the technical details (ports, logs, different components, etc.) and the functional basics (create a Planning form, edit an outline, load data into Essbase, run a consolidation) of each Oracle module.

A Good State: Create a Knowledge Base

Hyperion issues commonly come up more than once. Create a shared, searchable knowledge base to track issues and their resolutions. This will help you train your team as it expands and changes over time.

Final Comments

The focus of this article was to provide the Oracle EPM Administrator with best practices on how to document the current state of the Oracle EPM System. This is the most important part of being able to understand complex issues as they come up. More technical detail will be provided in the Oracle EPM Troubleshooting and Debugging Guide (Part 2 of 2).




Navigating Misaligned Menus in Workspace with IE8

 

If you have recently upgraded your web browser to IE8 and attempted to use Hyperion Workspace, you’ve likely encountered difficulty in navigating the interface. When a user clicks the wheel in the upper left-hand corner and selects “Applications,” they can’t see the menu because it appears to be condensed.

 

This issue can be quickly resolved by modifying the IE security settings. It is best to run this settings change past your IT department to ensure it will not open holes in your company’s security profile.

 

To correct this issue:

  1. Open IE8.
  2. Select “Tools” -> “Internet Options”.
  3. Select the “Security” tab and click “Custom Level”.
  4. There are 4 zones where security changes can be performed (Internet, Local intranet, Trusted sites, Restricted sites). The change can be made to each of these zones if necessary, but it is possible only one zone needs to be modified. Test the combination that works for your environment and fits your company’s IT/security profile.
  5. Select the “Internet” zone and scroll down the list of options to find “Allow script-initiated windows without size or position constraints”. This will likely be set to “Disable”; select “Enable” and click “OK”. (If prompted, accept the change just made to the zone security.)
  6. Select “OK” from the Security tab to finalize the modification.
  7. Test the modification to verify the change worked as intended. As noted above, you might need to enable this setting in multiple zones for it to take effect.




Unique Essbase Customizations Using Java – Custom Logging

Introduction

Many companies have in-depth working knowledge of Hyperion Essbase and are looking to take their enterprise reporting capabilities to the next level. Companies typically have specific processes and calculations that set them apart in their industry. However, they are often limited to the basic reporting capabilities provided by the standard functions in Essbase. Additionally, complex operations can quickly become arduous using Calculation Scripts and Business Rules. This post will demonstrate how to easily build custom Java routines to extend Essbase and dramatically reduce development time.

Complete details will be provided on how to implement a simple customized logging function for use in Calculation Scripts and Business Rules. Essbase’s streamlined, parallel nature makes it difficult for application developers to trace execution line by line. By using Java to implement a custom logging routine, one may use personalized log entries within Essbase scripts. Consequently, developers can add tracing to their scripts and quickly determine how Essbase is approaching each calculation. Accordingly, application developers are able to see exactly how the script is being executed, providing quicker debugging and faster development time. One powerful use is to help determine block creation within FIX statements.

The first step in integrating a custom Java routine into Essbase is to write some simple Java code. It is very easy; the code does not have to include any special Essbase APIs.  During development, a few issues were encountered where Essbase was a bit picky about how the code is written.  Here are a few tips to help in getting started. These tips were gathered while doing real development; it is best to follow them at first, though you may revisit them later and find what works for you.

  • Do not include the code in a package such as “com.company.product_name” – remove the “package” declarative at the top of the code
  • Do not use the keyword “this” to refer to variables
  • Do not overload methods
  • Set all methods and variables to static

With these provisions in mind, the following code can be written to implement a custom logging routine.

CustomLoggerV2.java

import java.io.FileWriter;
import java.io.Writer;
import java.io.PrintWriter;
import java.io.StringWriter;
import java.util.Calendar;
import java.util.ArrayList;

public class CustomLoggerV2
{

    private static String logFile;
   
    public static int logFilterLevel;
   
    public static void setLogFilterLevel(int logFilterLevel2)
    {
        logFilterLevel = logFilterLevel2;
    }
   
    public static void setLogFilename(String logFilename)
    {
        logFilterLevel = 0;
        logFile = logFilename ;
    }

    public static synchronized void customLog (int logLevel, String message)
    {
        log(logLevel, message);
    }

    private static synchronized void log (int logLevel, String message)
    {
       
       
        // do not log
        if (logLevel < logFilterLevel)
            {
                return ;
            }
       
        try {
       
            Calendar c = Calendar.getInstance();
           
            FileWriter fw = new FileWriter(logFile, true);
               
            fw.write(c.getTime() + ": " + message + "\n");
            fw.close();
             
        } catch (Exception e)
            {
                System.out.println("Error, cannot open, " + logFile);
                e.printStackTrace();
            }
        }

 
}

The code implements three public methods:

  • setLogFilterLevel(int logFilterLevel) – sets the minimum message level to log (think about ERROR=100, WARN=90, INFO=70, DEBUG=0) – so you can easily change the verbosity of the output.
  • setLogFilename(String filename) – The full path to the log file you wish to use
  • customLog(logLevel, String message) – The log message, with its indicated priority

The next step is to package up the code above. It is important to use the same version of Java which is running your Essbase instance. To find the version, look for the JRE being used within the environment, for instance, Hyperion\common\JRE\Sun\1.5.0\bin. To obtain the specific revision, open a cmd prompt, cd to the bin directory, and run "java -version".

E:\Hyperion\common\JRE\Sun\1.5.0\bin>java -version
java version "1.5.0_11"

Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_11-b03)

Java HotSpot(TM) Client VM (build 1.5.0_11-b03, mixed mode)

To compile the code, a JDK is required, which contains the javac command. Hyperion only packages the JRE, meaning you will have to download the correct JDK in order to compile the code. You can find older versions of the Java JDK on Oracle (Sun)’s web site. Once you have obtained the correct version of the JDK, compile and package up the code:

javac CustomLoggerV2.java

jar -cf CustomLoggerV2.jar CustomLoggerV2.class

Next, copy the CustomLoggerV2.jar file into the Essbase file structure:

Copy CustomLoggerV2.jar into the E:\Hyperion\products\Essbase\EssbaseServer\java\udf folder. If the udf folder does not already exist, create it.

Now it is time to start including the Java class within Essbase. Essbase runs within its own JVM and therefore has its own Java security. In the example above, we are writing to a local log file, which will violate the default security policy set up in the udf.policy file. The file is usually found in Hyperion\products\Essbase\EssbaseServer\java. The simplest way to get around the security concerns for development purposes is to remove the comment from the last line in the file, which effectively includes the directive "permission java.security.AllPermission".

permission java.util.PropertyPermission "java.vm.version", "read";

permission java.util.PropertyPermission "java.vm.vendor", "read";

permission java.util.PropertyPermission "java.vm.name", "read";

// Uncomment the following line if you want to remove all restrictions

permission java.security.AllPermission;

};

Now that the Essbase security and jar file have been put in place, a restart of the Essbase process is required to register the changes. Please restart Essbase now.

The final step is to run some MaxL statements to register the public Java methods with Essbase.

CustomLoggerV2.mxl

create or replace function '@JCustomLoggerV2_setLogFilename'

as 'CustomLoggerV2.setLogFilename(String)'

spec '@JCustomLoggerV2.setLogFilename(absolute file name)'

comment 'Nicholas King'

with property runtime;

create or replace function '@JCustomLoggerV2_customLog'

as 'CustomLoggerV2.customLog(int, String)'

spec '@JCustomLoggerV2.customLog(log level, log message)'

comment 'Nicholas King'

with property runtime;


create or replace function '@JCustomLoggerV2_setLogFilterLevel'

as 'CustomLoggerV2.setLogFilterLevel(int)'

spec '@JCustomLoggerV2.setLogFilterLevel(filter level)'

comment 'Nicholas King'

with property runtime;

One final thing… In order to run a custom Java function, the result of the function has to be stored in an Essbase member. This is true even if there is no use for the return value, as in this case where the Java methods return nothing. To get around this, create a new Essbase member called “No Measure” somewhere within your Essbase outline. This will act as a dummy member intended only to receive the return value, if any, of the Java methods. An example is shown below.

Sample Calc Script or Business Rule to Invoke the Logger

//ESS_LOCALE English_UnitedStates.Latin1@Binary

/* SETUP The Logger */

/* Fix on something so it runs only once */

FIX (Actual, Texas, "100-10")

"No Measure" = @JCustomLoggerV2_setLogFilename("E:\CustomEssbaseLog.log");

"No Measure" = @JCustomLoggerV2_setLogFilterLevel(50);

ENDFIX;

/* In your script, do some actual logging */

FIX (Actual, Texas, "100-10")

/* Won’t be displayed */

"No Measure" = @JCustomLoggerV2_customLog(0, "This is a debug message");

/* Will be displayed */

"No Measure" = @JCustomLoggerV2_customLog(50, "This is a normal message");

"No Measure" = @JCustomLoggerV2_customLog(100, "This is an important message");

ENDFIX;

The result of running the script is that the log entries are added to the log file E:\CustomEssbaseLog.log:

Mon Feb 21 01:30:25 EST 2011: This is a normal message

Mon Feb 21 01:30:25 EST 2011: This is an important message

Troubleshooting Tips

A very common error you may receive is:

Error: 1200324 Error compiling formula for [No Measure] (line 8): operator expected after [@JCustomLogger_customLog]

This is a generic error that indicates something in your custom function is not registered properly.  Unfortunately, there is not a lot of detailed log information at this point to help discover the problem. If you receive this message, a few things might help:

  • Retrace your steps – carefully review all instructions above
  • Check that the correct version of Java was used to compile the class file and package the jar
  • Check the jar is in the correct “udf” folder in Essbase
  • Check the syntax of the MAXL to register the functions is correct
  • Simplify your script as much as possible to reduce the possibility of syntax errors

Conclusion

This example shows how to create a custom Java based logger integrated into Essbase. The possibilities are endless – anything that can be done in Java can be added to Essbase. You can create development aids, or even read/modify the values within the cube. For instance, this model has successfully been used to perform complex financial calculations within Hyperion Planning Forms using Business Rules.  It could also be used for integrating Web Services with your cube by reading or writing cube data and interacting with an enterprise Web Service.




Initial Release of In2Hyp Services Manager – Restarting Services Just Got Easier!

As Hyperion applications have become more integrated, multiple servers are now required to support the environment.  Although the recent releases of version 11 are less dependent on the order in which the services start, Hyperion still recommends a specific order to start the services so they perform correctly.  As these services are typically spread across multiple servers, starting and stopping them in the recommended order is time consuming and prone to error.  Many organizations have no automated way to start and stop the services when services are interrupted or operating systems are updated.

In2Hyperion is now making available HypServicesManager, an application that will start or stop the services in a predefined order.  An XML file holds the server/service order, so the services to be included, and the order in which they start and stop, are completely customizable.  It will impersonate a domain account that has permissions to the respective servers, so starting or stopping the services on multiple servers can be performed quickly and remotely.  With command line parameters, it can even be automated.

Keep in mind, HypServicesManager is completely independent of Hyperion services.  Although it was developed to fill the need to manage Hyperion services, it can be used for any Windows service, as it uses the Windows APIs.

I have used this application in multiple organizations and it is often used in a production environment.  With that said, this has not been tested on thousands of environments in every Windows OS.  If it doesn’t work for you, we certainly welcome feedback and will make every effort to fix any bugs that are discovered.

Details on the use of the application are available here.




Empower Users To Improve Calculation Times

As an Essbase user, you have more power to improve performance than you think.  How many times do you lock and send data through Excel, SmartView, or web forms that includes zeros?  How many times do you allocate data to a finite level out of convenience?  Understanding what this does to Essbase is critical to understanding how a user can negatively impact performance without adding any value to the analysis or the results the database produces.

I analyzed a planning database used in one of the largest financial institutions in the world.  Over 60% of the values entered were zero.  Another 20% of the values were less than 1 dollar.  By eliminating the zeros, the total calculation time of the planning application was under 20 minutes.  With the zeros, it was nearly 2 hours.

There are two reasons for this.  First, there is a difference between empty and zero.  Empty consumes no space to store, whereas a zero consumes the same space as 1 billion.  Think of this as a grocery bag.  If you fill a grocery bag with nothing, it takes up no space.  If you fill it with empty cans (a zero), it consumes the same amount of space as if those cans were full (1 billion).

The following example is very common.  Assume that a forecast needs to be done for the last 3 months of the year.  Frequently, a spreadsheet would hold zeros for the first 9 months: 18 cells hold zeros and only 6 cells hold a positive value.  That means 75% of the data could be eliminated by not loading the zeros.

The same load with #Missing in place of the zeros is far more efficient.

I highly recommend reading the article explaining dense and sparse to understand what a block is and what it represents before you continue this article.

There is also another very significant factor in loading zeros.  Loading a zero that creates a block just to hold a value of zero can explode the size of the database, as well as the time it takes to consolidate and execute business rules.  The more blocks that have to be loaded and consolidated, the longer it takes to finish.  If each block was a spreadsheet and you had to do this manually, you would have to open each spreadsheet and enter the number into a calculator to consolidate.  If 75% of the blocks you opened were zero, it wouldn’t change your total, but it would drastically increase the time it takes because you still have to open each spreadsheet.   If an Essbase database has 1,000 blocks, and 75% of them only hold zeros, it will likely take 2 or 3 times longer to calc the zeros because it still has to open the block and add the zero.  Remember, a zero acts no differently than a value of 100.

As an example of the above, loading zeros to intersections like South and West that hold no other data would create blocks just to store those zeros, inflating the database size.

 

Users can significantly reduce this unnecessary explosion in size by loading a blank as opposed to a zero.  If zeros are already in the database, leaving the cell blank will NOT overwrite the zeros.  If zeros were loaded inadvertently, #Missing has to be used to remove them.

For all you users loading data, it can be a hassle removing the zeros.  Being responsible can significantly improve your experience with Essbase.  To make it easier, take a look at the function in the In2Hyperion Excel Ribbon that replaces all zeros with #Missing.
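If the ribbon is not available, a macro along these lines (a rough sketch only, not the ribbon’s actual code) can convert the zeros in a selected range to #Missing before the data is submitted:

Sub ReplaceZerosWithMissing()

    ' Converts hard zeros in the selected range to #Missing so Essbase
    ' stores nothing instead of populating a block with zeros
    Dim cell As Range
    For Each cell In Selection
        If Not IsEmpty(cell.Value) And IsNumeric(cell.Value) Then
            If cell.Value = 0 Then cell.Value = "#Missing"
        End If
    Next

End Sub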




Altering Large Numbers Of Cells In Excel A Hundred Times Quicker

Many processes need to write large volumes of data in Excel.  The typical method is to loop through each cell and perform the action.

   Dim CellsDown As Long, CellsAcross As Long

   Dim CurrRow As Long, CurrCol As Long
   Dim CurrVal As Long

   '  This can be replaced with the selected range and is just used to illustrate this example.
   CellsDown = 1000
   CellsAcross = 36

   '   Loop through cells and insert values
   CurrVal = 1
   Application.ScreenUpdating = False
   For CurrRow = 1 To CellsDown
       For CurrCol = 1 To CellsAcross
           Range("A1").Offset(CurrRow - 1, CurrCol - 1).Value = CurrVal
            CurrVal = CurrVal + 1
       Next CurrCol
   Next CurrRow

Rather than writing the values out cell by cell, it is quicker to store the value in an array and write the array to a range of cells at one time.

   Dim CellsDown As Long, CellsAcross As Long

   Dim CurrRow As Long, CurrCol As Long
   Dim CurrVal As Long
   Dim TheRange As Range

   Dim TempArray() As Double

   '  This can be replaced with the selected range and is just used to illustrate this example.
   CellsDown = 1000
   CellsAcross = 36

   '   Update the array
   ReDim TempArray(1 To CellsDown, 1 To CellsAcross)
   Set TheRange = Range(Cells(1, 1), Cells(CellsDown, CellsAcross))

   '   Fill the temporary array
   CurrVal = 0
   Application.ScreenUpdating = False
   For i = 1 To CellsDown
       For j = 1 To CellsAcross
           TempArray(i, j) = CurrVal
            CurrVal = CurrVal + 1
       Next j
   Next i

   '   Transfer temporary array to worksheet
   TheRange.Value = TempArray

This same method can be used when altering data.  By changing the following line

            TempArray(i, j) = CurrVal

To this

            TempArray(i, j) = TheRange(i, j) * 3

By using TheRange(i, j), the existing values already on the worksheet can be read and altered.
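Note that reading TheRange(i, j) inside the loop still touches the worksheet once per cell. For the fastest results, the existing values can also be pulled into an array with a single read and written back with a single write. The snippet below is a sketch using the same variable names as the example above:

   '   Read the existing values into an array, alter them in memory, and write them back in one operation
   Dim Vals As Variant
   Vals = TheRange.Value
   For i = 1 To UBound(Vals, 1)
       For j = 1 To UBound(Vals, 2)
           Vals(i, j) = Vals(i, j) * 3
       Next j
   Next i
   TheRange.Value = Vals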

 

The process of writing values cell by cell took 3.16 seconds.  Using the array method, it took 0.08 seconds, nearly 40 times faster.




Getting large numbers in FR Reports?

When using Workspace to view reports, some users have seen excessively large numbers that don’t belong. If you are having this issue, it could very well be because the default Essbase query engine in 11.1.1.x is the MDX query engine, which can cause documented bug 9062413. Essentially, this bug will cause users to see the same astronomical number in every cell that sits on an intersection to which the user does not have security access. Understandably, this can cause some concern. This issue is expected to be fixed in a future release, but until then, the query engine will need to be manually changed.

The first option is to fix the issue at a report level. This is a relatively quick process and is a good idea if you only have a handful of reports.  To change the query engine setting for a particular report, follow the steps below:

1. Open the report.
2. For each grid, select the entire grid.
3. Right-click and select Data Query Optimization Settings.
4. Deselect the option “Essbase Queries Use MDX.”
5. Save the report.

Repeat the above process for each report.

For users that have a larger number of reports, a better option may be to change the query engine in the properties file on the server.  The benefit of fixing the issue in the properties file is that the change only needs to be made once, and all reports will reflect it.

The file that needs to be edited is located on the Financial Reporting (app) server, typically in the path D:\<Hyperion Home>\products\biplus\lib, where <Hyperion Home> represents the root location of the Hyperion install. The file that needs to be adjusted is the fr_global.properties file.

Open the properties file and add these lines:

# MDX Query Engine has been set as the default in Essbase 11.1.1.x. This can cause bug 9062413 
# which may cause unauthorized users to see a long series of numbers in each cell when running  
# reports. To solve this issue, the below line was added, which switches the query engine.
EssbaseUseMDX=false

Any line preceded by “#” is commented out. Therefore, these lines can say whatever you prefer, but they should give anyone who views this file a good indication of why this text is in the file.

Once the properties file has been updated, the following services must all be stopped in the following order, then started in the same order for the changes to take effect.

1. Hyperion Financial Reporting – Print Server
2. Hyperion Financial Reporting – Report Server
3. Hyperion Financial Reporting – Scheduler Server
4. Hyperion Financial Reporting – Web Application (Note – This service may be on the FR (Web) server, not the FR(App) server like the other three services.)

Note – This modification will apply to everyone using the server on which it is made, so be careful when making changes to a shared server.