
Bug Report: GroovyDynamicLocalizedException Validation Error

The GroovyDynamicLocalizedException Groovy method should return the message in the user's selected locale (language) from a messageBundle, which makes it possible to share the same bundles in other situations.  Currently, when used, it returns a validation error.

A validation error was received from the Planning server.

Error: The Groovy script failed to compile with internal error: Compile Error: [Static type checking] – Cannot find matching method Script29#GroovyDynamicLocalizedException(oracle.epm.api.GroovyResourceBundleLoader, java.lang.String). Please check if the declared type is right and if the method exists.

Oracle has created an internal bug for this issue.

Bug 27656951 – EPBCS – GROOVY FUNCTION ERRORING

An example of using this function that produces the error is below.

// Define a message bundle with two validation messages
def mbUs = messageBundle(["validation.InvalidVendorDesc":"The Product name cannot exceed 20 characters in length. The description that you entered is characters too long.", "validation.InvalidCharacter":"You cannot use commas [ , ] or pipes [ | ] in Product names"])
// Associate the bundle with the English locale
def mbl = messageBundleLoader(["en" : mbUs])
// This line fails to compile with the error above
println GroovyDynamicLocalizedException(mbl,"validation.InvalidVendorDesc").getMessage()
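
Until Oracle resolves the bug, the same bundles can still be used with throwVetoException, which stops the rule and surfaces a localized message to the user.  Below is a minimal sketch of that workaround, reusing a bundle like the one above; it is an alternative, not a fix for GroovyDynamicLocalizedException itself.

def mbUs2 = messageBundle(["validation.InvalidVendorDesc":"The Product name cannot exceed 20 characters in length."])
def mbl2 = messageBundleLoader(["en" : mbUs2])
// Throws a veto exception that displays the localized message and stops execution
throwVetoException(mbl2, "validation.InvalidVendorDesc")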

There is currently no timetable for a resolution.




Adventures In Groovy – Part 11: Accessing Metadata Properties

Introduction

Groovy opens up a lot of things above and beyond performance improvements and a better user experience.  One example is the ability to interact with metadata.  Dimensions and members can be queried for all types of things, which can be useful in many situations.  Is the POV at level 0?  What is the parent of the current POV member?  Does the member exist in another application?  What about pushing data for specific UDAs by dynamically generating the Data Map?  How about dynamically generating the Data Map to ignore dynamically calculated members?  These are just some examples to get you thinking about where this could be useful.

Code Example

This article won’t get into the logic to accomplish the above examples once a property is identified, but it will explain how to extract the properties for use.  Below is an example of retrieving every property of an account named Regular_Cases.  It iterates through every metadata property and writes it to the log.

// Get the dimension of the member in question
Dimension AccountDim = operation.application.getDimension("Account")
// Get the member 
Member AccountMbr = AccountDim.getMember("Regular_Cases")
// Print the map to the log
println AccountMbr.toMap()
def memberProps = AccountMbr.toMap()
// Print the member name
println AccountMbr.toString()
// Print every property and corresponding property value
for ( e in memberProps ) {
  println "${e.key} = ${e.value}"
}

When this is executed, the following is sent to the log.

println AccountMbr.toMap() produces

{Formula (rGP)=<none>, Plan Type (GP)=true, Solve Order (rGP)=0, Formula (Fin)=<none>, Data Storage (OEP_WFSC)=never share, Time Balance=flow, Formula=<none>, UDA=HSP_NOLINK, Skip Value=none, Variance Reporting=non-expense, Data Storage (GP)=never share, Essbase Name=Regular_Cases, UUID=c842d186-6d83-4b90-8d1e-49474a6a8a1d, Member=Regular_Cases, Data Storage=never share, Data Storage (rFin)=never share, Formula (rFin)=<none>, Aggregation (rWFP)=+, Formula (GP)=<none>, Data Storage (rWFP)=never share, Data Storage (OEP_REP)=never share, Data Storage (rGP)=never share, Data Type=currency, Formula (OEP_WFP)=<none>, Plan Type (rFin)=true, Aggregation (OEP_WFP)=+, Data Storage (OEP_WFP)=never share, Parent=GP_Accts, Two Pass Calculation=false, Aggregation (GP)=+, Plan Type (rGP)=true, Process Management Enabled=true, Plan Type (rWFP)=false, Source Plan Type=GP, Aggregation (OEP_WFSC)=+, Exchange Rate Type=none, Plan Type (Fin)=true, Alias: English=Regular Cases, Plan Type (OEP_WFP)=false, Aggregation (OEP_REP)=+, Solve Order (rWFP)=0, Data Storage (Fin)=never share, Hierarchy Type=dynamic, Allow Upper Level Entity Input=false, Account Type=revenue, Formula (OEP_REP)=<none>, Aggregation (Fin)=+, Aggregation (rGP)=+, Plan Type (OEP_WFSC)=false, Formula (rWFP)=<none>, Formula Description=<none>, Aggregation (rFin)=+, Solve Order (rFin)=0, Formula (OEP_WFSC)=<none>, Solve Order (OEP_REP)=0, Valid For Consolidations=false, Plan Type (OEP_REP)=false}

println AccountMbr.toString() and for ( e in memberProps ) {println "${e.key} = ${e.value}"} produce

Regular_Cases
Formula (rGP) = <none>
Plan Type (GP) = true
Solve Order (rGP) = 0
Formula (Fin) = <none>
Data Storage (OEP_WFSC) = never share
Time Balance = flow
Formula = <none>
UDA = HSP_NOLINK
Skip Value = none
Variance Reporting = non-expense
Data Storage (GP) = never share
Essbase Name = Regular_Cases
UUID = c842d186-6d83-4b90-8d1e-49474a6a8a1d
Member = Regular_Cases
Data Storage = never share
Data Storage (rFin) = never share
Formula (rFin) = <none>
Aggregation (rWFP) = +
Formula (GP) = <none>
Data Storage (rWFP) = never share
Data Storage (OEP_REP) = never share
Data Storage (rGP) = never share
Data Type = currency
Formula (OEP_WFP) = <none>
Plan Type (rFin) = true
Aggregation (OEP_WFP) = +
Data Storage (OEP_WFP) = never share
Parent = GP_Accts
Two Pass Calculation = false
Aggregation (GP) = +
Plan Type (rGP) = true
Process Management Enabled = true
Plan Type (rWFP) = false
Source Plan Type = GP
Aggregation (OEP_WFSC) = +
Exchange Rate Type = none
Plan Type (Fin) = true
Alias: English = Regular Cases
Plan Type (OEP_WFP) = false
Aggregation (OEP_REP) = +
Solve Order (rWFP) = 0
Data Storage (Fin) = never share
Hierarchy Type = dynamic
Allow Upper Level Entity Input = false
Account Type = revenue
Formula (OEP_REP) = <none>
Aggregation (Fin) = +
Aggregation (rGP) = +
Plan Type (OEP_WFSC) = false
Formula (rWFP) = <none>
Formula Description = <none>
Aggregation (rFin) = +
Solve Order (rFin) = 0
Formula (OEP_WFSC) = <none>
Solve Order (OEP_REP) = 0
Valid For Consolidations = false
Plan Type (OEP_REP) = false

Getting A Specific Property

Typically, there is no need to pull every property, but there are times when having access to one of them is useful in calculations.  If a currency calculation is being executed, for example, the rate applied is different if the member is a balance sheet account.  A single value can be retrieved by building on the above script.

def keyProp = "Account Type"
if(memberProps[keyProp] == "Revenue") {
  // do something for revenue accounts
} else if(memberProps[keyProp] == "Balance Sheet") {
  // do something for balance sheet accounts
}
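
Building on the variables already defined above (memberProps and keyProp), the currency example could apply a different rate to balance sheet accounts.  The rates below are hypothetical placeholders; this is a sketch, not a definitive implementation.

// Hypothetical FX rates; in a real rule these would be read from rate members
def avgRate = 1.18
def eomRate = 1.21
// Balance sheet accounts translate at the end-of-month rate; everything else uses the average rate
def rate = (memberProps[keyProp] == "Balance Sheet") ? eomRate : avgRate
println "Applying rate $rate to ${AccountMbr}"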

Wrap Up

This may seem a little worthless at first, but if you think about all the BSO functions (getting UDAs, account types for VAR functions, and member relation functions) that require this information, mimicking them in Groovy requires access to the metadata properties.  So, don’t underestimate its use for things like variance, currency, and other calculations that are done outside of Essbase/Planning calculations and member formulas.




Adventures in Groovy – Part 10: Validating Form Data

Introduction

One of the huge frustrations I have with Planning is that there has been no way to stop a user from saving data that doesn’t validate since Smart View was released and Data Forms could be opened in Excel.  Prior to that, forms could be customized with JavaScript, and the form save could be interrupted and cells flagged.  Well, thanks to Groovy Calculations, that capability is back, and it is every bit as flexible.

Example

The following Data Form allows users to add new products and enter financial information.  In this form, 3 rules exist.

  1. The GP Level 2 % has to be between -10% and 30%.
  2. The Regular Cases CANNOT have one month that is more than 50% of the total cases.
  3. If Regular Cases is entered, a corresponding Average Price per Case is required.

When a user fills out the form and violates these rules, the cell background is changed and a tool tip is added.  If violations exist, the form does NOT save any changes to the database.  Before any changes can be committed, all errors have to be corrected.  In this example, all 3 validation rules are violated and noted.  If the user hovers the mouse over a cell with a violation, the tool tip displays the row and column members, along with an error message explaining the issue with the entered data.

The Code

The significance of this is huge, but the implementation is rather simple.  It is probably one of the more basic things that can be created with a Groovy Calculation.  Quite simply, to stop the form from saving, all that has to be done is add a validation error to the cell.

cell.addValidationError(0xFF0000, "Custom Error Message", false)

This method accepts 3 parameters.

  1. The first is the color you want the background to change to.  This is the integer value of any color.  Most people are familiar with RGB codes, which can be retrieved in almost any image editor (even Windows Paint).  There are free converters, like the one at https://www.shodor.org/, to turn an RGB code into an integer that Groovy can interpret; a minimal sketch of the conversion follows this list.
  2. The second parameter is the error message to be displayed in the tool tip.
  3. The third is optional and defaults to false, which flags the cell as an error and stops the form from saving.
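
The conversion is simple arithmetic, so it can also be done directly in the script.  A minimal sketch; the helper name is mine, not part of the API:

// Convert an RGB triple to the integer addValidationError expects
int rgbToInt(int r, int g, int b) {
  return (r << 16) + (g << 8) + b
}
println rgbToInt(255, 0, 0)     // 16711680 (0xFF0000), pure red
println rgbToInt(255, 170, 170) // 16755370, the light red used in the example below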

This will likely be used in a grid iterator, which is how this example was constructed to get the screenshot above.  If the grid iterator object is foreign to you, read Adventures in Groovy – Part 3: Acting On Edited Cells.  The one function missing from that article is the crossDimCell method.  This acts like a cross dim (->) in a calculation: it references the POV of the cell and overrides the dimension of the member specified as a parameter.  If multiple overrides are needed, separate the member names with commas, as shown below.
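
For example, inside the dataCellIterator closure used below, where "it" is the current cell, a single override and a two-dimension override look like this (member names are from the form above):

// Same POV, different month
def janCases = it.crossDimCell('Jan').data
// Override two dimensions at once: the account and the month
def janPrice = it.crossDimCell('Avg_Price/Case_Inp', 'Jan').data

The full validation example built on these pieces follows.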

def BackErrColor = 16755370 // Light Red
//Loop through the cells on the Regular Cases row
operation.grid.dataCellIterator('Regular_Cases','Jan','Feb','Mar','Apr','May','Jun','Jul','Aug','Sep','Oct','Nov','Dec').each { 
  // Set a variable equal to all 12 months
  def CaseTotal = it.crossDimCell('Jan').data + it.crossDimCell('Feb').data + it.crossDimCell('Mar').data + it.crossDimCell('Apr').data + it.crossDimCell('May').data + it.crossDimCell('Jun').data + it.crossDimCell('Jul').data + it.crossDimCell('Aug').data + it.crossDimCell('Sep').data + it.crossDimCell('Oct').data + it.crossDimCell('Nov').data + it.crossDimCell('Dec').data 
  // Check the cell value to see if it is larger than 50% of the total year (guard against a zero total)
  if(CaseTotal != 0 && it.data / CaseTotal > 0.5 ) 
    it.addValidationError(BackErrColor, "Cases for a single month can't be more than 50% of the total year cases.", false) 
  // If cases are entered, make sure there is a corresponding price
  if(it.data != 0 && (it.crossDimCell("Avg_Price/Case_Inp").data == 0 || it.crossDimCell("Avg_Price/Case_Inp").data == '#Missing'))
    it.crossDimCell("Avg_Price/Case_Inp").addValidationError(BackErrColor, "A price is required when cases are entered.", false) 
}

// Loop through the GP input cells and validate the % is in the valid range
operation.grid.dataCellIterator('GP_2_%_Inp','Jan','Feb','Mar','Apr','May','Jun','Jul','Aug','Sep','Oct','Nov','Dec').each { 
  println "$it.MemberNames $it.data"
  if(it.data > 0.3 || it.data < -0.1 ) {
    it.addValidationError(BackErrColor, "GP2 has to be between -10% and 30%.", false) 
 }
}

Form Settings

The one gotcha is that this needs to run BEFORE SAVE.  It makes sense, but I was expecting a result similar to validating an RTP when the Business Rule runs on save, so it took a good night’s sleep to recognize that error in judgment.

Why This Is Significant

You may not think this is a big deal because you can check this in a Business Rule after the data is saved and return an error message requesting the user change it.  However, the users are as busy as, if not busier than, you are.  Last minute changes get slammed in at the end of a forecast or budget cycle.  There is no design doc to go back to and say it is going to take longer and we need a change order.  The CFO won’t accept that as an answer, so things get forgotten or missed.  This example forces valid (not necessarily accurate) data to be entered, and all kinds of things can be checked to make sure human errors don’t turn into huge issues for financial reporting.  Think if you had a product and forgot to put a price on it.  You could be missing millions, and this type of proactive validation can prevent such things from happening.  Little things like this reduce, or eliminate, fire drills later in the budget cycle.

Conclusion

There is an infinite number of things that can be accomplished, from simple checks like the above to extremely complex validation.  Think about ensuring allocated dollars are fully allocated (100%), forcing salaries to be within predefined pay bands for titles, and forcing the results of driver-based planning to be within a logical margin.

If you have some examples, please share with the community by posting a comment below.




Groovy, Game Changing Technology at Breakthru Beverage Group – Webinar Invitation

Join Us On March 22, 2018

On March 22, 2018, I am hosting a webinar featuring the work delivered at Breakthru Beverage Group in Chicago, Illinois.  Breakthru had the traditional challenges, but it also had some obstacles others don’t.  With requirements of entering budget at any level, complex allocation and seeding logic, and the need for consolidated reporting in real time, we had to get creative.  Welcome, Groovy Calculations!  Groovy calculations were released in June of 2017, just in time to be a key resource in solving the previously stated problems.  The webinar will highlight solutions like changing a product price at a consolidated level and seeing it allocated down to delivery channel, material group, and company.  It will show how we consolidated and pushed the results to the P&L applications in seconds.

It is easy to participate; you can RSVP TODAY.

I will discuss the architecture, the challenges, and how we used Groovy to do things never before possible in Hyperion Planning.

Although we will be discussing the technology used, this is not a technical discussion on how to write Groovy calculations.  If you are an administrator, user, or owner of PBCS, we will highlight challenges you are likely facing, and how to overcome them using Groovy at a more functional level.  If you are looking to purchase or move to the cloud, this presentation will educate you on the possibilities now available with the new functionality of Groovy calculations.

Agenda

  • Introduction: Setting the expectations and introducing the speakers
  • Application Overview: Application, purpose, top down planning, and seeding
  • Performance Challenges: Product updates, allocations, long wait times for consolidated reporting
  • Real Time Reporting: How Groovy allowed us to overcome performance issues and enable real time consolidated reporting
  • The Groovy 411: Live demo showing how Groovy Calculations solved performance issues
  • More Than Performance: Live demo showing other enhancements Groovy provides, like user input validation
  • Finishing Up: Q/A, review, and opportunities for next steps to setup an optimization assessment

The Official Invitation

Top Down and Bottom Up Planning at Breakthru Beverage Group

Planners are always looking for real time reporting and faster feedback.  They are looking to make the forecasting and planning process faster by using historical trends and the ability to enter data at any level, enter growth factors, and drive the results down to the lowest level of the business.  They want instant feedback on consolidated results.

Join this webcast and hear from the VP of Financial Planning & Analysis at Breakthru Beverage Group on how they are using Oracle Planning and Budgeting Cloud Service (PBCS) integrated with game changing technology, Groovy, to improve speed and performance across planning processes.

Leave this session with an understanding on how Breakthru Beverage:

  1. Attained strategic benefits of building a driver-based budget and forecasting application, with the ability to seed product level data and apply growth rates at consolidated levels to effectively build a bottom-up plan.
  2. Leveraged work force planning to include the ability to allocate people over multiple cost centers and companies.
  3. Developed a technical architecture and strategy to allow this to happen and integrate with the higher level P&L in real time.

RSVP today and learn how you can take advantage of Groovy.




Exporting Data in PBCS With Business Rules

Introduction

If your environment is a cloud product, whether PBCS or ePBCS, one thing that is critical to understand is that the backups produced in the Migration area may not be what you think.  Learning this after the fact may have negative consequences on your ability to restore data.  In the migration, the Essbase Data section is a copy of the pag, ind, and otl files.  When this is used to restore data, it restores the entire database, including data and metadata.  This may be OK for many situations, but it won’t help you if

  • only specific data is required to be restored
  • specific data has changed and needs to be excluded from the restore
  • corruption exists in the database and all data is required to be restored
  • the pag files that hold the data are not readable
  • the size of the backup is a concern, as it includes all data, and upper level data is normally exponentially larger than just level 0 data

Text Data Export

Business Rules can be written to export data to the Inbox/Outbox in a delimited file with a few formatting options.  The entire database can be included, or, with FIX statements, specific data can be isolated.  So, forecast could be exported to one file, plan to another, and actuals to a third.  Specific accounts, entities, and/or products can be isolated when specific data was inadvertently changed or deleted.  The result is a text file that can be opened in any text editor, Microsoft Excel, a database, or any other application used to view or manipulate text files.

Example Business Rule

/* Set the export options */
 SET DATAEXPORTOPTIONS
 {
 DataExportLevel LEVEL0;
 DataExportDynamicCalc OFF;
 DataExportNonExistingBlocks OFF;
 DataExportDecimal 4;
 DataExportPrecision 16;
 DataExportColFormat ON;
 DataExportColHeader Period;
 DataExportDimHeader ON;
 DataExportRelationalFile ON;
 DataExportOverwriteFile ON;
 DataExportDryRun OFF;
 };
 
 FIX(@Relative("Account", 0),
     @Relative("Years", 0),
     @Relative("Scenario", 0),
     @Relative("Version", 0),
     @Relative("Entity", 0),
     @Relative("Period", 0), 
     @Relative("custom_dim_name_1", 0),
     @Relative("custom_dim_name_1", 0),
     @Relative("custom_dim_name_1", 0))

  DATAEXPORT "File" "," "/u03/lcm/filename_xyz.txt" "";

 ENDFIX

Some Hints

There are a few things that you may encounter and be a little confused about, so the following are a few things that might help.

  1. To see the data export, it must be exported to /u03/lcm/, which is the equivalent of your inbox.  Any file name can be used.
  2. Setting DataExportLevel to LEVEL0 will export the level 0 blocks, not just the level 0 members.  If there are any stored upper level members in your dense dimensions, they will be exported unless the dimension is also in the FIX to include ONLY level 0 members.
  3. The fix statement works the same as a fix statement in any business rule, so the data to be exported can be easily defined.
  4. In my experience, exporting dynamically calculated members drastically increases the time of the export.
  5. The export options are all pretty logical.  Some work in conjunction with each other and others are ignored depending on dependent setting values.  These are documented for version 11.1.2.4 here.
  6. This process can be automated with EPM Automate, including the download and time stamp of the backup for later use; a sketch of that automation follows this list.
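
As a sketch of that automation, the following EPM Automate session runs a rule and downloads the export.  The rule name ExportData, the credentials, and the URL are placeholders; the file name matches the example above.

epmautomate login serviceAdmin MyPassword https://planning-test-mydomain.pbcs.us2.oraclecloud.com mydomain
epmautomate runbusinessrule ExportData
epmautomate downloadfile filename_xyz.txt
epmautomate logout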

Conclusion

There are benefits to both types of backups.  My preference is to either run both nightly, or run just the Business Rule.  By having both, the administrator has the option of restoring the data as needed, in the way that is most effective.  Having both provides the ultimate flexibility.  If space is an issue, exclude the data option in the Migration and just run the business rule.


From Oracle’s Documentation

DataExportLevel ALL | LEVEL0 | INPUT

  • ALL—(Default) All data, including consolidation and calculation results.
  • LEVEL0—Data from level 0 data blocks only (blocks containing only level 0 sparse member combinations).
  • INPUT—Input blocks only (blocks containing data from a previous data load or grid client data-update operation). This option excludes dynamically calculated data. See also the DataExportDynamicCalc option.

In specifying the value for the DataExportLevel option, use these guidelines:

  • The values are case-insensitive. For example, you can specify LEVEL0 or level0.
  • Enclosing the value in quotation marks is optional. For example, you can specify LEVEL0 or “LEVEL0”.
  • If the value is not specified, Essbase uses the default value of ALL.
  • If the value is incorrectly expressed (for example, LEVEL 0 or LEVEL2), Essbase uses the default value of ALL.

Description

Specifies the amount of data to export.

DataExportDynamicCalc ON | OFF

  • ON—(Default) Dynamically calculated values are included in the export.
  • OFF—No dynamically calculated values are included in the report.

Description

Specifies whether a text data export excludes dynamically calculated data.

Notes:

  • Text data exports only. If DataExportDynamicCalc ON is encountered with a binary export (DATAEXPORT BINFILE …) it is ignored. No dynamically calculated data is exported.
  • The DataExportDynamicCalc option does not apply to attribute values.
  • If DataExportLevel INPUT is also specified and the FIX statement range includes sparse Dynamic Calc members, the FIX statement is ignored.

DataExportNonExistingBlocks ON | OFF

  • ON—Data from all possible data blocks, including all combinations in sparse dimensions, are exported.
  • OFF—(Default) Only data from existing data blocks is exported.

Description

Specifies whether to export data from all possible data blocks. For large outlines with a large number of members in sparse dimensions, the number of potential data blocks can be very high. Exporting Dynamic Calc members from all possible blocks can significantly impact performance.

DataExportPrecision n

n (Optional; default 16)—A value that specifies the number of positions in exported numeric data. If n < 0, 16-position precision is used.

Description

Specifies that the DATAEXPORT calculation command will output numeric data with emphasis on precision (accuracy). Depending on the size of a data value and number of decimal positions, some numeric fields may be written in exponential format; for example, 678123e+008. You may consider using DataExportPrecision for export files intended as backup or when data ranges from very large to very small values. The output files typically are smaller and data values more accurate. For output data to be read by people or some external programs, you may consider specifying the DataExportDecimal option instead.

Notes:

  • By default, Essbase supports 16 positions for numeric data, including decimal positions.
  • The DataExportDecimal option has precedence over the DataExportPrecision option.

Example

SET DATAEXPORTOPTIONS
    { 
    DataExportPrecision 6;
    DataExportLevel ALL;
    DataExportColHeader "Measures";
    DataExportDynamicCalc ON;
    };
    DATAEXPORT "File" "," "output1.out";

Initial Data Load Values

"Sales" "COGS" "Margin" "Marketing" "Payroll" "Misc" "Total Expenses" "Profit" "Opening Inventory" "Additions" "Ending Inventory" "Margin %" "Profit %" 
"100-10" "New York"
"Jan" "Actual" 678123456.0 271123456.0 407123456.0 941234567890123456.0 51123456.0 0 145123456.0 262123456.0 2101123456.0 644123456.0 2067123456.0 60123456.029 38123456.6430
"Feb" "Actual" 645123 258123 3871234 9012345 5112345 112345678 14212345 24512345 2067123456 61912345 20411234 601234 37123456.98 
"Mar" "Actual" 675 270 405 94 51 1 146 259 2041 742 2108 60 38.37037037037037 
"Qtr1" "Actual" 1998 799 1199 278 153 2 433 766 2101 2005 2108 60.01001001001001 38.33833833833834

Exported Data Format

"Sales","COGS","Margin","Marketing","Payroll","Misc","Total Expenses","Profit","Opening Inventory","Additions","Ending Inventory","Margin %","Profit %","Profit per Ounce","100-10","New York"
"Jan","Actual",6.78123e+008,2.71123e+008,4.07e+008,9.41235e+017,5.11235e+007,0,9.41235e+017,-9.41235e+017,2.10112e+009,6.44123e+008,2.06712e+009,60.0186,-1.388e+011,-7.84362e+016
"Feb","Actual",645123,258123,387000,9.01235e+006,5.11235e+006,1.12346e+008,1.2647e+008,-1.26083e+008,2.06712e+009,6.19123e+007,2.04112e+007,59.9886,-19544.1,-1.05069e+007
"Mar","Actual",675,270,405,94,51,1,146,259,2041,742,2108,60,38.3704,21.5833

DataExportDecimal n

Where n is a value between 0 and 16.

If no value is provided, the number of decimal positions of the data to be exported is used, up to 16 positions, or a value determined by the DataExportPrecision option if that is specified.

Description

Specifies that the DATAEXPORT calculation command will output numeric data with emphasis on legibility; output data is in straight text format. Regardless of the number of decimal positions in the data, the specified number is output. It is possible the data can lose accuracy, particularly if the data ranges from very large values to very small values, above and below the decimal point.

Notes:

  • By default, Essbase supports 16 positions for numeric data, including decimal positions.
  • If both the DataExportDecimal option and the DataExportPrecision option are specified, the DataExportPrecision option is ignored.

Example

SET DATAEXPORTOPTIONS
    {DataExportDecimal 4;
    DataExportLevel "ALL";
    DataExportColHeader "Measures";
    DataExportDynamicCalc ON;
    };
    DATAEXPORT "File" "," "output1.out";

Initial Data Load Values

"Sales" "COGS" "Margin" "Marketing" "Payroll" "Misc" "Total Expenses" "Profit" "Opening Inventory" "Additions" "Ending Inventory" "Margin %" "Profit %" 
"100-10" "New York"
"Jan" "Actual" 678123456.0 271123456.0 407123456.0 941234567890123456.0 51123456.0 0 145123456.0 262123456.0 2101123456.0 644123456.0 2067123456.0 60123456.029 38123456.6430
"Feb" "Actual" 645123 258123 3871234 9012345 5112345 112345678 14212345 24512345 2067123456 61912345 20411234 601234 37123456.98 
"Mar" "Actual" 675 270 405 94 51 1 146 259 2041 742 2108 60 38.37037037037037 
"Qtr1" "Actual" 1998 799 1199 278 153 2 433 766 2101 2005 2108 60.01001001001001 38.33833833833834

Exported Data Format

"Sales","COGS","Margin","Marketing","Payroll","Misc","Total Expenses","Profit","Opening Inventory","Additions","Ending Inventory","Margin %","Profit %","Profit per Ounce"
"100-10","New York"
"Jan","Actual",678123456.0000,271123456.0000,407000000.0000,941234567890123520.0000,51123456.0000,0.0000,941234567941246980.0000,-941234567534246910.0000,2101123456.0000,644123456.0000,2067123456.0000,60.0186,-138799883591.4395,-78436213961187248.0000
"Feb","Actual",645123.0000,258123.0000,387000.0000,9012345.0000,5112345.0000,112345678.0000,126470368.0000,-126083368.0000,2067123456.0000,61912345.0000,20411234.0000,59.9886,-19544.0820,-10506947.3333
"Mar","Actual",675.0000,270.0000,405.0000,94.0000,51.0000,1.0000,146.0000,259.0000,2041.0000,742.0000,2108.0000,60.0000,38.3704,21.5833

Output Format Options

DataExportColFormat ON | OFF

  • ON—The data is output in columnar format.
  • OFF—Default. The data is output in non-columnar format.

Description

Specifies if data is output in columnar format. Columnar format displays a member name from every dimension; names can be repeated from row to row, enabling use by applications other than Essbase tools. In non-columnar format, sparse members identifying a data block are included only once for the block. Non-columnar export files are smaller, enabling faster loading to an Essbase database.

Notes

Do not use the DataExportColFormat option in combination with the DataExportRelationalFile option, which already assumes columnar format for files destined as input files to relational databases.

Example

SET DATAEXPORTOPTIONS
 {
 DATAEXPORTCOLFORMAT ON;
 };
 FIX("100-10", Sales, COGS, Jan, Feb, Mar, Actual, Budget)
 DATAEXPORT "File" "," "d:\temp\test2.txt" ;
ENDFIX;

DataExportColHeader dimensionName

Description

Specifies the name of the dense dimension that is the column header (the focus) around which other data is referenced in the export file. Use the DataExportColHeader option only when you export data to a text file. For example, if from Sample Basic the Year dimension is specified, the output data starts with data associated with the first member of the Year dimension: Year. After all data for Year is output, it continues with the second member: Qtr1, and so on.

Notes

MaxL, ESSCMD, and Essbase exports do not provide a similar capability. With these methods, Essbase determines the focal point of the output data.

Exporting through Report Writer enables you to specify the header in the report script.

Example

SET DATAEXPORTOPTIONS {DATAEXPORTCOLHEADER Scenario;};

Specifies Scenario as the page header in the export file. The Scenario dimension contains three members: Scenario, Actual, and Budget. All Scenario data is shown first, followed by all Actual data, then all Budget data.

DataExportDimHeader ON | OFF

  • ON—The header record is included.
  • OFF—Default. The header record is not included.

Description

Use the DataExportDimHeader option to insert the optional header record at the beginning of the export data file. The header record contains all dimension names in the order as they are used in the file. Specifying this command always writes the data in “column format”.

Example

SET DATAEXPORTOPTIONS 
 {
 DATAEXPORTLEVEL "ALL"; 
 DATAEXPORTDIMHEADER ON; 
 };
FIX("100-10", "New York", "Actual")
 DATAEXPORT "File" "," "E:\temp\2222.txt" ;
ENDFIX;

Specifying the DataExportDimHeader ON option while exporting Sample Basic writes the data in column format, with common members repeated in each row. The data begins with a dimension header, as shown in the first two rows of the example file below:

"Product","Market","Year","Scenario","Measures"
"Sales","COGS","Marketing","Payroll","Misc","Opening Inventory","Additions","Ending Inventory"
"100-10","New York","Jan","Actual",678,271,94,51,0,2101,644,2067
"100-10","New York","Feb","Actual",645,258,90,51,1,2067,619,2041
"100-10","New York","Mar","Actual",675,270,94,51,1,2041,742,2108
"100-10","New York","Apr","Actual",712,284,99,53,0,2108,854,2250
"100-10","New York","May","Actual",756,302,105,53,1,2250,982,2476
"100-10","New York","Jun","Actual",890,356,124,53,0,2476,1068,2654
"100-10","New York","Jul","Actual",912,364,127,51,0,2654,875,2617
"100-10","New York","Aug","Actual",910,364,127,51,0,2617,873,2580
"100-10","New York","Sep","Actual",790,316,110,51,1,2580,758,2548
"100-10","New York","Oct","Actual",650,260,91,51,1,2548,682,2580
"100-10","New York","Nov","Actual",623,249,87,51,0,2580,685,2642
"100-10","New York","Dec","Actual",699,279,97,51,1,2642,671,2614

DataExportRelationalFile ON | OFF

  • ON—The output text export file is formatted for import to a relational database.
    • Data is in column format; sparse member names are repeated. (The DataExportColFormat option is ignored.)
    • The first record in the export file is data; no column heading or dimension header is included, even if specified. (The DataExportColHeader and DataExportDimHeader options are ignored.)
    • Missing and invalid data is skipped, resulting in consecutive delimiters (commas) in the output. The optional “missing_char” parameter for DATAEXPORT is ignored.
  • OFF—Default. The data is not explicitly formatted for use as input to a relational database.

Description

Using the DataExportRelationalFile option with DATAEXPORT enables you to format the text export file to be used directly as an input file for a relational database.

Example

SET DATAEXPORTOPTIONS {
 DataExportLevel "ALL";
 DataExportRelationalFile ON;
};

FIX (Jan)
 DATAEXPORT "File" "," c:\monthly\jan.txt
ENDFIX;

Processing Options

DataExportOverwriteFile ON | OFF

  • ON—The existing file with the same name and location is replaced.
  • OFF—Default. If a file with the same name and location already exists, no file is output.

Description

Manages whether an existing file with the same name and location is replaced.

DataExportDryRun ON | OFF

  • ON—DATAEXPORT and associated commands are run, without exporting data.
  • OFF—Default. Data is exported.

Description

Enables running the calculation script data export commands to see information about the coded export, without exporting the data. When the DataExportDryRun option value is ON, the following information is written to the output file specified in the DATAEXPORT command:

  • Summary of data export settings
  • Info, Warning, and Error messages
  • Exact number of blocks to be exported
  • Estimated time, excluding I/O time.

Notes

  • The DataExportDryRun option does not work with exports to relational databases.
  • If you modify the script for reuse for the actual export, besides removing the DataExportDryRun option from the script you may want to change the name of the export file.

Example

SET DATAEXPORTOPTIONS 
 {
 DataExportLevel "ALL";
 DataExportColHeader "Measures";
 DataExportColFormat ON;
 DataExportDimHeader ON;
 DataExportDynamicCalc OFF;
 DataExportDecimal 0;
 DataExportDryRun ON;
 DataExportOverwriteFile ON;
 };

FIX("Qtr1")
 DATAEXPORT "File" "," "E:\temp\log.txt" ;
ENDFIX;



Adventures in Groovy – Part 9: Ohio Valley OAUG Presentation

I was lucky enough to be invited to talk about the new Groovy Calculations in PBCS and ePBCS at the Ohio Valley OAUG meeting today.  If you have read the Groovy series, you know how strongly I feel about the advancements Groovy Calculations bring to Hyperion Planning.  I want to share the presentation with a wider audience.  This is a functional overview for those who are new to the concepts.  It also introduces readers to developing their first Groovy Calculation and provides some examples.




Adventures in Groovy – Part 8: Customizing Data Maps and Smart Pushes

Introduction

Groovy has the ability to execute Data Maps and Smart Pushes.  Data Maps are objects that define the data movement between applications and are accessible globally.  Smart Pushes are Data Maps that are attached to a Data Form and can override the dimension settings.  They are largely the same thing, but defined separately inside of PBCS.

So, why execute these in Groovy rather than assign to a form and call it a day?

  1. The data push can be customized to push only data that has changed, improving performance.
  2. When a form is saved, all calculations are executed first, and the Smart Pushes are executed after all calculations are finished.  If Groovy is used and the data push is done inside a Business Rule, the order of operations can be:
    1. calculation
    2. data push
    3. calculation
    4. data push
    5. etc
  3. Since the Smart Push has a global limit of memory that can be used, and multiple people are running them through form saves, it is critical to make each push as small as possible to reduce the probability of hitting that limit and decrease the odds of the Smart Pushes failing.

To date, the advantage I see in running a Smart Push (a Data Map on a form) is that most of the dimensions are already overridden, so fewer overrides are required in the Groovy logic.  There is no difference in performance, or in the size of what can be pushed, between the two when executed from a Groovy calculation.  The advantage of using a generic Data Map is that there is less effort in defining form level Smart Pushes, and once one Groovy calculation is written to override all dimensions from a Data Map, it can be replicated easily to all required forms.

To understand the memory issues and explanation of how it differs from Data Maps and Smart Pushes, see PBCS Data Map / Smart Push Has Data volume Limits.

Data Map

Executing a Data Map is very simple and can be done in one line of code.

operation.application.getDataMap("Data Map Name").execute(true)

Calling execute(true) runs the named Data Map with no customization and clears the target location before pushing data.  Changing the true to false, or removing it, removes the clear and leaves the existing data as is.
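
Both variants side by side; the Data Map name is a placeholder:

// Clear the target region, then push the data
operation.application.getDataMap("Data Map Name").execute(true)
// Push the data without clearing the target first
operation.application.getDataMap("Data Map Name").execute(false)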

To override the dimension settings and increase or decrease the scope of what is pushed, the member names need to be further defined.  Every dimension’s scope can be changed, or only the ones that require a change.  The following changes the scope for the Account, Scenario, Version, and Entity dimensions.

operation.application.getDataMap("Data Map Name").execute(["Account":"Net Income, Income, Expense", "Scenario":"Budget", "Version":"Working", "Entity":"East Region"], true)

Smart Push

The Smart Push works exactly the same way, except the object referenced is a Smart Push and is obtained through the grid object, not the application.  The likelihood that the Smart Push is already scoped at the form is high, so fewer dimensional parameters are typically needed, as many of them change based on the POV selected.

operation.grid.getSmartPush("Smart Push Name").execute(["Account":"Min Bonus, Min Salary"])

One additional option is to define a Smart Push from a Data Map in the Groovy script.

operation.application.getDataMap("Data Map Name").createSmartPush().execute(["Account":"Min Bonus, Min Salary"])

Error Trapping

When these are written, the Smart Pushes and Data Maps likely exist, but one extra step to ensure the calculation doesn’t fail is to verify their existence.  For Smart Pushes, this also verifies that the push is attached to the form.

//Data Map Example
if(operation.application.hasDataMap("Data Map Name"))
  operation.application.getDataMap("Data Map Name").execute(true)
//Smart Push Example
if(operation.grid.hasSmartPush("Smart Push Name"))
  operation.grid.getSmartPush("Smart Push Name").execute()

Conclusion

By creating variables that change the scope to only the rows and columns that have been edited, the calculation would look like the following.  This is where I see the biggest bang for your buck.  To understand more about using the grid iterator, read Adventures in Groovy – Part 3: Acting On Edited Cells.  When a grid has hundreds of rows, pushing only the data that has been edited can make a huge difference in performance.

// Setup the variables to store the list of edited vendors and periods
def lstVendors = []
def lstPeriods = []
// Iterate through only the cells that were changed and create a list of changed vendors and periods
operation.grid.dataCellIterator({DataCell cell -> cell.edited}).each{ 
 lstVendors.add(it.getMemberName("Vendor"))
 lstPeriods.add(it.getMemberName("Period"))
}
// Convert the lists to a comma delimited string with quotes surrounding the members
String strVendors = """\"${lstVendors.unique().join('","')}\""""
String strPeriods = """\"${lstPeriods.unique().join('","')}\""""
// Execute the Smart Push with a change in scope to Vendor and Period
if(operation.grid.hasSmartPush("GP_SmartPush") && lstVendors)
 operation.grid.getSmartPush("GP_SmartPush").execute(["Vendor":strVendors,"Period":strPeriods])

With any luck, you will see significant improvement, unless they change every month of every row!




Updating EPM Automate Just Got Easier

Introduction

One of the challenges with EPM Automate has been eliminated this month.  Although it was a minor issue, the need to update EPM Automate regularly was something that had to be considered monthly.  Administrators of PBCS do not always have access to the on-premise footprint, like a Windows VM, that runs the automation.  Even more frequently, access to the production VM is limited to IT staff, so updates to that environment are more strictly controlled and have to be scheduled.  That schedule doesn’t always sync up with the changes in PBCS.

Update Command

As of the 02.18 release, a new command is available.  The “upgrade” command will automatically download, and silently install, the newest version of the EPM Automate utility.  Once logged in, execute the following command.

epmautomate upgrade

If you are a frequent visitor, you know I am a fan of PowerShell.  All the automation I do with EPM Automate in the Windows environment utilizes this free scripting tool.  This command has been added to all my new projects so there is no manual effort in keeping the utility current.  It also eliminates any problems that pop up due to incompatibility with PBCS.

In my reusable scripts, this new function has been added.

function EPMA_Upgrade{
  $CmdLine = "upgrade"
  # $EPMAutomate_Path, $ShowDosWindow, and LogResult are defined elsewhere in the shared scripts
  $ReturnCode = Start-Process "$EPMAutomate_Path\epmautomate.bat" $CmdLine -Wait -PassThru -WindowStyle $ShowDosWindow
  LogResult "EPM Automate has been updated" $ReturnCode.ExitCode
  return $ReturnCode.ExitCode
}

During a nightly process, the function is referenced.  If the request fails, the administrators are emailed.

$ReturnResult = EPMA_Upgrade
if( $ReturnResult -eq 1){Send_Email_Error;Exit}

Summary

This is a welcome addition.  Now, administrators and application owners don’t have to worry about using new features or keeping EPM Automate in sync with the active version of PBCS.  As the great Forrest Gump would say, “One less thing.”




Supercharge PBCS with PowerShell

Last year I presented an in-depth overview of PowerShell and how it can be utilized in the Hyperion environment.  I have been asked many times to share it.  The presentation is technical and is meant to provide a strong introductory foundation for anybody who wants to start using PowerShell to automate repetitive tasks.  I have built a large library of shared functions that can be used to automate PBCS and ePBCS, and I plan to share pieces of this in future posts.

For now, for anybody interested in learning PowerShell, or who has used it and doesn’t know why some things work and others don’t, this might prove to be a valuable resource.





Ohio Valley OAUG – Getting Groovy in Louisville

I have been selected to speak at the OVOAUG on February 16, 2018.  I have been there before, and it is a very nice group of people to engage with.  If you are in the area, or would like to hear more about how Groovy in PBCS can change the landscape of performance, user interaction, and data input quality while reducing user frustration, please go to http://ohio.communities.oaug.org/ and register.  I would love to see as many of you there as possible.

Here is the agenda.  It is going to be a functional overview, but we will touch on how to start writing Groovy, and if you show up, I will be more than happy to talk before and/or after the session.