
Managing Smart View Shared Connections

If you use Smart View, you are familiar with the Smart View Shared Connection URL, which is unique to the environment Smart View connects to.  That property is saved in a file on your computer and holds the default URL, as well as all the saved URLs in the drop-down list.

There are times users want this drop-down cleaned up.  If the URL changes due to environment changes, or a wrong URL is entered and saved, cleaning up this setting can reduce confusion and user frustration.  Often, IT departments want to deploy these settings and need to understand how to make every user's Smart View configuration the same.  Understanding where these settings are stored, and how the file that stores them is structured, will assist with either of these requests.

The user's computer holds this file (or requires it, if deploying a standard set of settings is the path IT takes) in the following folder.

<drive>:\Users\<username>\AppData\Roaming\Oracle\SmartView

The drive is almost always C, and the username follows whatever naming convention your organization uses.  If the user has already set up Smart View, a file named properties.xml exists.  This file can be edited in any text editor.  If you are familiar with XML files, you will see a typical XML structure.

The entire file is enclosed in a cfg tag, so it is opened with <cfg> and closed with </cfg>.  The current provider URL is within a provider tag.  All saved provider URLs are inside a previousURLList tag and separated by a pipe (|) delimiter.  So, the file is laid out like this.

<?xml version="1.0"?>
<cfg>
<provider></provider>
<previousURLList>provider 1|provider 2|provider 3|…</previousURLList>
</cfg>

As a consultant, I have many providers saved, so the properties.xml file on my system looks a little busy.  The file will likely be smashed together and not easily readable.  Opened in a text editor:

<?xml version="1.0"?><cfg><provider><overrideWorkspaceUrl>https://planning-test-a499161.pbcs.us6.oraclecloud.com/workspace/SmartViewProviders</overrideWorkspaceUrl><previousURLList>http://mp1epm01:19000/SmartViewProviders|http://mp1epm01:19000/workspace/|https://planning-a499161.pbcs.us6.oraclecloud.com/workspace/SmartViewProviders|https://planning-test-a499161.pbcs.us6.oraclecloud.com/workspace/SmartViewProviders|https://planning-test-a499161.pbcs.us2.oraclecloud.com/workspace/SmartViewProviders|https://planning-test-shiloh.pbcs.us2.oraclecloud.com/workspace/SmartViewProvidersx|http://mp1epm01.huronconsultinggroup.com:19000/workspace/SmartViewProviders|https://mp1epm01:19000/SmartViewProviders|https://mp1epm01.huronconsultinggroup.com:19000/aps/SmartView|https://mp1epm01.huronconsultinggroup.com:13080/SmartViewProviders</previousURLList><overrideUrl/></provider></cfg>

If this file is opened in an XML editor, or viewed in Internet Explorer, it will be more readable.

If old providers need to be removed, they can be deleted from this file and the user will see the change the next time Excel is started. If there is a need to distribute a pre-configured setup, build this file manually, or use an existing user’s file, and deploy it to all new users.
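
As a minimal sketch, a pre-configured properties.xml for deployment might follow the simplified layout shown above; the URL below is only a placeholder and should be replaced with the environment's actual Shared Connection URL.

<?xml version="1.0"?>
<cfg>
<provider></provider>
<previousURLList>https://your-environment.example.com/workspace/SmartViewProviders</previousURLList>
</cfg>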




My Adventures in Groovy Calculations – Part 1

What Is Groovy

Recently, Groovy scripting was added to ePBCS business rules as an option instead of the GUI, or the go-to scripting for you old-timers who still refuse to change.  These are defined in the Business Rule editor as Groovy calculations.  So, what is Groovy?

“Apache Groovy is an object-oriented programming language for the Java platform. It is a dynamic language with features similar to those of Python, Ruby, Perl, and Smalltalk. It can be used as a scripting language for the Java Platform, is dynamically compiled to Java virtual machine (JVM) bytecode, and interoperates with other Java code and libraries. Groovy uses a Java-like curly-bracket syntax. Most Java code is also syntactically valid Groovy, although semantics may be different.”

If you haven't heard of Groovy, you may want to do some research.  Oracle is using more and more Groovy in its applications, both as an administrative option and as a way for applications to communicate with each other.  Groovy is widely adopted and can interact with countless applications and websites through REST APIs.

What Groovy Script/Calculations Are Not

Groovy calculations are not Java-based calculations, and Groovy is not a new calculation language.  It does provide a way to interact with a Data Form in ePBCS and build a calculation script dynamically.  So, Groovy, in the context of Groovy Calculation Scripts, does not connect to Essbase via Groovy Business Rules.  It simply builds a string that is sent to Essbase as a calculation.  It does, however, interact with Planning, and that is where the power starts.  With all of Groovy's string-manipulation functionality, plus the ability to interact with the data form, dynamic calculations can be built.  The calculation script sent to Essbase is no different, but the script can now be dynamically generated based on things like the POV, the text value of a Smart List, whether the values in the grid were updated, whether the data entered meets validation criteria, and other similar things.
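
As a minimal sketch of that idea (the FIX below is only a placeholder, and a real rule would build the string dynamically from the form), a Groovy calculation simply assembles the calc script as text and returns it so it can be sent to Essbase, just as the example later in this post does:

// Build the Essbase calculation as a plain string and hand it back
def calcText = 'FIX("Jan":"Dec") "Regular_Cases" = 0; ENDFIX'
return calcText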

If you are experienced with Hyperion Planning, you may have dabbled with JavaScript to do data validation, calculate data prior to the user submitting it, or prevent users from submitting data.  It was a great option for providing feedback to users, but it became basically useless once Smart View allowed users to open Data Forms in Excel, because the JavaScript did nothing unless the form was opened in an internet browser.

Getting Started

The first step in creating a Groovy Calculation Script is to, well, create one.  To do that, create a new business rule and change the view from Designer to Edit Script.  If you haven't noticed this before, it provides a way to toggle the GUI to a script view.

Next, find the drop-down box in the toolbar to the far right named Script Type.  This option will read Calc Script.  Change it to Groovy Script.

At this point, the script window is set to validate Groovy script, not Essbase syntax.  Even though it doesn't do anything yet, you have just created your first Groovy Business Rule!

Use Cases

There is a lot of potential in this functionality.  To get you thinking, here are some examples:

  1. Execute calculations on large sparse dimensions on ONLY the members that changed on the form.
  2. Access the Smart List text to do validation, use in calculations, and store for later use in Essbase (maybe save a member name in a member that is numeric, like employee ID, Cost Center, or account).
  3. Perform validation before the calculation is built and sent to Essbase. For example, if the sum of a column used to allocate dollars doesn’t sum to 100, send a calculation that ONLY returns a message and doesn’t perform the allocation.
  4. Perform text manipulation previously done in Essbase with functions.  Concatenating member names and truncating member name prefixes and date formats are a few that I use regularly.  Many of these functions are extremely slow and force the calculation to execute in serial mode, so being able to do them outside the script is a welcome option.

Real World Example

The Problem

I am working with a client who wants to override the result of driver-based calculations based on historical trends.  In this example, the volume of cases can be changed and the profit rate can be adjusted.  Once the form is saved, the overrides need to be removed.

Herein lies the challenge.  If the overrides are removed and the calculation runs on all members in the form, the results would revert to what they were prior to the override because the override values no longer exist in the database or Data Form.  So, rather than performing the calculation on the override, it would use #missing or zero and take the results right back to what the drivers dictated.  The most obvious way around this issue is to execute the calculation on ONLY the rows (vendors in this example) that were edited.  In other words, dynamically generate the FIX statement on the vendors that were updated.

The Non-Groovy FIX Statement

Without Groovy, the FIX statement would include @RELATIVE("Vendor",0) to run the calculation on all vendors on the Data Form.  This has two issues.  One, it calculates all the vendors and will change the vendors back to the pre-override values.  Two, every time the user saves the form, the FIX is traversing through 30,000 possible vendors.  Although most companies have fewer than 8,000 active vendors, it still poses a performance issue calculating 8,000 blocks when only a few typically change.

The only aspect of the calculation that is going to change in this situation is the FIX statement, so that will be the only piece shown in the comparison between a Groovy script and a non-Groovy script.

FIX(&vScenario,
    &vVersion,
    &vCompany,
    &vYear,
    "Local",
    "Input",
    @RELATIVE("Vendor",0),
    "Jan":"Dec",
    "Regular_Cases")

The Groovy FIX Statement

Since Groovy can dynamically create the calculation script, it looks more like the example below.  sPOV is a Groovy string variable that holds all the members in the data form's POV.  The sVendors Groovy variable holds the list of vendors that have been edited.

FIX($sPOV
    $sVendors,
    "Jan":"Dec",
    "Regular_Cases")

@RELATIVE("Vendor",0), which would produce a list of every vendor in the hierarchy, is replaced with "V300000300040003", "V300000300060001", "V300000300070002".

The issue of running the calculation on vendors that have not been edited has now been solved.  An added benefit is that the calculation runs on 3 of the 8,000 blocks, so what took 30 seconds now completes in under a second.

Now, The Interesting Part

Let’s dissect the Groovy calculation script piece by piece.

Setting The Stage

For Groovy to perform operations, there are a few housekeeping items that need to be addressed.  First, a few string builders need to be created to hold strings that grow through the process and are concatenated into the Essbase calculation before it is submitted for processing.

There are some variables used to interact with the form's data grid.  For easy reference to the grid throughout the script, the grid object is stored in a variable (curgrid).  Next, a variable is created to hold an iterator over the cells that have been edited (itr).  The likelihood that these variables would exist in most of these scripts is high, so it makes sense to get familiar with these objects and their parameters.

//Get current Data Form
DataGrid curgrid = operation.getGrid()

// Construct a string builder
// Holds the calculation script sent to Essbase
StringBuilder scriptBldr = StringBuilder.newInstance()

// Holds the list of vendors that have changed
StringBuilder vendorList = StringBuilder.newInstance()
String sVendors

// Iterator which gives you only the edited cells
GridIterator itr = curgrid.getDataCellIterator(PredicateUtils.invokerPredicate("isEdited"))

// Holds the list of members from the POV – the function returns an array, so this
// parses the array, places quotes around each member, and separates them with a comma
String sPOV = '"' + curgrid.getPov().essbaseMbrName.join(',').replaceAll(',','","') + '"'

At this point, the variables hold the form's POV and an iterator over the edited cells.
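
As a hypothetical illustration (the actual members depend entirely on the form), sPOV would now hold a quoted, comma-delimited member list similar to the line below, and itr would iterate over only the cells the user edited.

sPOV = "Budget","OEP_Working","Company_123","FY16"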

Find the Vendors That Have Changed

We know the users will enter overrides in this Data Form (Case Growth and Average Price).  The following piece of the Groovy script builds a delimited list of those vendors based on the rows that have been edited.  It will include quotes around the member names to account for any member names that are numeric or have special characters, and the names will be separated by a comma.  Groovy provides the ability to append to a string with <<""", closing the appended text with """.  The if statement ensures that a vendor will not be appended to the string more than once when multiple columns on the same row are changed.

// Loop through each cell that was edited and build the vendor list
// If multiple cells on the same row are edited, only add vendor once
itr.each{ DataCell cell ->
  sVendors = cell.getMemberName("Vendor")
  if(vendorList.indexOf(sVendors) < 0){
    vendorList <<"""
   ,"$sVendors"
   """
  }
}

At this point, only a few variables have changed. The bulk of the Groovy functionality is finished.  We now have the POV and the list of vendors that need to be in the FIX statement.

The Essbase Calculation

The next section appends text to the scriptBldr string.  This string will ultimately be sent to Essbase as the calculation to be performed.  Groovy variables are embedded and replaced with the values they were set to previously.  The two used in this calculation are $vendorList and $sPOV.  Other than those two pieces, everything else is pulled straight from the original Business Rule.

// Add the calculation defined in a business rule to the string variable
// the POV and Vendor List will be used to dynamically set the FIX statement
scriptBldr <<"""
VAR v_Price;

FIX($sPOV
    $vendorList,
    "Jan":"Dec",
    "Regular_Cases")

  /* Calculate Overrides */
  "OEP_Working"(
  v_Price = "Avg_Price/Case"->"YearTotal";

  "Regular_Cases" = (1 + "Case_Growth_Rate"->"BegBalance") * 
                    ("Regular_Cases"->"FY16"->"Final");
  IF("Avg_Price/Case_Inp"->"BegBalance" == #Missing)
    "Net_Sales" = (v_Price) * (1 + "Case_Growth_Rate"->"BegBalance") * 
                  "Regular_Cases"->"FY16"->"Final";
  ELSE
    "Net_Sales" = ("Avg_Price/Case_Inp"->"BegBalance") * 
                  (1 + "Case_Growth_Rate"->"BegBalance") *
                  "Regular_Cases"->"FY16"->"Final" ;
  ENDIF

  IF("GP_2_%_Inp"->"BegBalance" == #Missing)
    "GP_Level_2" = ("GP_Level_2_%"->"YearTotal"->"FY16"->"Final") * "Net_Sales" ;
  ELSE
    "GP_Level_2" = ("GP_2_%_Inp"->"BegBalance") * "Net_Sales" ;
  ENDIF
  )
ENDFIX

FIX($sPOV
    $vendorList)

  CLEARDATA "Avg_Price/Case_Inp"->"BegBalance";
  CLEARDATA "GP_2_%_Inp"->"BegBalance";
  CLEARDATA "Case_Growth_Rate"->"BegBalance";
ENDFIX
"""

At this point, the scriptBldr variable is a complete Essbase calculation that can be validated in any Business Rule.

Finishing Up

The last thing required is to send the calculation text built above to Essbase.

println scriptBldr // Sends the script to the log
return scriptBldr // Sends the script to Essbase

Verifying What Was Sent To Essbase

When the Data Form is saved, the results in the form can be validated against the logic to verify that the calculation worked as expected.  Regardless of whether the calculation executes successfully or fails, the value of scriptBldr (the calculation sent to Essbase) is captured in the Job console.

In the Job console, click the Job Status link.  This includes the value of the scriptBldr variable.  The text can be copied from this window, and if it failed to execute, can be copied into a Business Rule and validated there to find the issue.

Wrapping Up

I will admit that I am not a Java programmer, so I am still educating myself on the potential this affords developers.  I am struggling to digest the API documentation and to truly understand the depth of the possibilities. I do know this opens up a whole world we didn’t have with Hyperion Planning. I plan on learning and using Groovy calculations more and more because of the possibilities it provides.  Look for more examples and knowledge sharing as I get my hands around the API and integrate this into more delivery solutions.  To get future publications, sign up to be notified about new posts and articles at www.in2hyperion.com.




Using a Shared Connection with HSGetValue/HSSetValue with Planning or PBCS

If you are a fan of HSGetValue and HSSetValue, you are probably using a private connection.  As you know, anybody who uses the template has to either change the connection string to their own predefined private connection, or set up a private connection with the same name.  When dealing with inexperienced users, both methods can be problematic.

You may be surprised to know that the Get and Set Value functions can use a shared connection.  Rather than referencing a private connection name, the following string format can be specified in its place to use a shared connection.

Private connection syntax:
HsGetValue("PrivateConnectionName","POV")
HsSetValue (dollar amount,"PrivateConnectionName","POV")

Shared connection syntax:

HsGetValue("WSFN|ProviderType|Server|Application|Database","POV")
HsSetValue (dollar amount,"WSFN|ProviderType|Server|Application|Database","POV")

Parameter Summary

  • “WSFN” is a static string and never changes
  • The provider type for Planning is "HP", regardless of whether the server is a cloud server or an on-premises server
  • The server specifies the location of the server housing the application. For PBCS, use the URL provided by Oracle (planning-test-domain.pbcs.us2.oraclecloud.com)
  • The application is the application name
  • The database is the plan type, or database name

Put that all together and the string looks like this.
WSFN|HP|planning-test-A12345.pbcs.us2.oraclecloud.com|Finance|Revenue
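
To illustrate usage, the examples below plug that string into both functions; the connection details and the POV (written as Dimension#Member pairs) are placeholders and would need to match your application.

HsGetValue("WSFN|HP|planning-test-A12345.pbcs.us2.oraclecloud.com|Finance|Revenue","Entity#BU1005;Scenario#Actual;Year#FY16;Period#Feb")
HsSetValue(1000,"WSFN|HP|planning-test-A12345.pbcs.us2.oraclecloud.com|Finance|Revenue","Entity#BU1005;Scenario#Plan;Year#FY16;Period#Feb")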

Conclusion

Although there are a few drawbacks to using a shared connection (users could use the wrong connection and not get the expected result), my experience has been that the pros (no setup of private connections, can be used in multiple environments without changing anything, etc.) far outweigh the cons.




Remove Dimensions From Planning LCM Extracts

Problem

I am currently working with a client that is updating a Planning application, and one of the changes is to remove a dimension.  After the new application was set up and the hierarchies were modified to meet the objectives, migrating artifacts was the next step.  As many of you know, if you try to migrate web forms and composite forms, they will error during the migration due to the additional dimension in the LCM file.  It wouldn't be a huge deal to edit a few XML files, but when there are hundreds of them, it is extremely time consuming (and boring, which is what drove me to create this solution).

Assumptions

To fully understand this article, a basic understanding of XML is recommended.  The example below assumes an LCM extract was run on a Planning application and it will be used to migrate the forms to the same application without a CustomerSegment dimension.  It is also assumed that the LCM extract has been downloaded and decompressed.

Solution

I have been learning and implementing PowerShell scripts for the last 6 months and am amazed at how easy it is to complete complex tasks.  So, PowerShell was my choice to modify these XML files in bulk.

It would be great to write some long article on how smart this solution is and overwhelm you with my wit, but there is not much to it.  A few lines of PowerShell will loop through all the files and remove the XML tags related to a predefined dimension.  So, let's get to it.

Step 1 – Understand The XML

There are two folders of files we will look at.  Forms are under the plan type folder, and composite forms are under the Global Artifacts folder.  Both of these are located inside the resource folder.  If there are composite forms that hold the dimension in question as a shared dimension, both areas will need to be updated.  Scripts are included below to update both of these areas.

Inside each of the web form files will be a tag for each dimension, and it will vary in location based on whether the dimension is in the POV, page, column, or row.  In this particular example, the CustomerSegment dimension is in the POV section.  What we want to accomplish is removing the <dimension/> tag where the name attribute is equal to CustomerSegment.

For the composite forms, the XML tag is slightly different, although the concept is the same.  The tag in composite form XML files is <sharedDimension/> and the attribute is dimension, rather than name.

Step 2 – Breaking Down the PowerShell

The first piece of the script just sets a couple of variables so the script can be changed quickly and reused wherever and whenever it is needed.  The first variable is the path of the Data Forms folder to be processed.  The second is the dimension to be removed.

# Identify the source of the Data Forms folder and the dimension to be removed
$lcmSourceDir = "<path to the Data Forms folder in the LCM extract>"
$dimName = "CustomerSegment"

The next piece of the script is recursing through the folder and storing the files in an array.  There is a where statement to exclude directories so the code only executes on files.

# List all files, recursively, that exist in the path above
$files = Get-ChildItem $lcmSourceDir -Recurse | 
where {$_.Attributes -notmatch 'Directory'} |

Step 3 – Removing The Unwanted Dimension

The last section of the script does most of the work.  It loops through each file in the $files array and:

  1. Opens the file
  2. Loops through all tags and deletes any <dimension/> tag with a name attribute equal to the $dimName variable
  3. Saves the file

# Loop through the files and find an XML tag equal to the dimension to be removed
Foreach-Object {

  [xml]$xml = Get-Content $_.FullName
  $node = $xml.SelectNodes("//dimension") |
  Where-Object {$_.name -eq $dimName} | ForEach-Object {
    # Remove each node from its parent
    [void]$_.ParentNode.RemoveChild($_)
  }
  $xml.save($_.FullName)
  Write-Host "$($_.FullName) updated."
}

Executing The Logic On Composite Forms

The same concepts apply when running this logic on the composite form files in the LCM extract.  Compared to the script applied to the web form files, there are three differences.

  1. The node, or XML tag, that needs to be removed is called sharedDimension, not dimension.
  2. The attribute is not name in this instance, but is called dimension.
  3. A counter has been added to identify whether the file contains the dimension to be removed, so the file is only saved if it was altered.

The Script
$lcmSourceDir = "Z:\Downloads\KG04\HP-SanPlan\resource\Global Artifacts\Composite Forms"
$dimName = "CustomerSegment"
# List all files
$files = Get-ChildItem $lcmSourceDir -Recurse | where {$_.Attributes -notmatch 'Directory'} |
# Remove CustomerSegment
Foreach-Object {
  # Reset a counter to 0 - used later when the file is saved
  $fileCount = 0

  [xml]$xml = Get-Content $_.FullName
  $node = $xml.SelectNodes("//sharedDimension") | Where-Object {$_.dimension -eq $dimName} | ForEach-Object {
    # Increase the counter for each node that matches the criteria
    $fileCount++
    # Remove each node from its parent
    [void]$_.ParentNode.RemoveChild($_)
  }
  # If the dimension was found in the file, save the updated contents.
  if($fileCount -ge 1) {
    $xml.save($_.FullName)
    Write-Host "$($_.FullName) updated."
  }
}

Summary

The first script may need to be run on multiple plan types, but the result is an identical folder structure with altered files that have the identified dimension removed.  This can be zipped, uploaded to Shared Services, and used to migrate the forms to the application that has the dimension removed.

The scripts above can be copied and pasted into PowerShell, or the code can be downloaded.




Use PowerShell to split large files by month/year for data loads into FDMEE on PBCS

If you are using PBCS, you may run into some challenges with large files being passed through FDMEE.  Whether performance is an issue or you just want to parse a file by month/year, this script might save you some time.

The Challenge

I recently had the need to break apart a file.  The source provided one large text file that included 2 years of data that was needed to populate the history of an employee metrics application.  The current process loaded files by month, and we wanted to piggyback off the existing scripts to load and process data in FDMEE and the monthly Planning data pushes to the ASO reporting cube.  So, the data file needed to be broken into separate files by month and year.  The file was delimited and formatted like the following.

Entity,Year,Scenario,Period,Account,Date,Employee,Pay Code,JobNumber,Data
BU1005,FY15,Actual,Feb,Pay Amount,02/02/2015,V1398950,P105,,108.10
BU1005,FY15,Actual,Feb,Pay Amount,02/03/2015,V1398950,P105,,108.92

The goal was to have a file for every unique month and year combination that included only the lines of the relevant time periods.  The header of the file also had to exist in each of the smaller files.  Since we were working on a Windows machine, we used PowerShell to script the solution.

PowerShell Script Directions

The script is pretty simple to use and understand.  Update the script as follows.

  1. Create a new text file with a ps1 extension and paste the following into that file.
  2. Update the srcFile variable to point to the file to be parsed.
  3. Update the startYear to the first year in the file to be extracted.
  4. Update the currentYear variable to the last year in the file to be extracted.
  5. Update the ProcessName to a meaningful word or phrase that will be used to create the file name.
  6. Save the file and execute it like any other PowerShell script.

This will produce 12 files for each year, each containing the header line and the data for the month and year reflected in the file name.
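
As a quick usage sketch, assuming the script was saved as SplitFileByMonth.ps1 (the file name is arbitrary), it can be run from a PowerShell prompt like any other script, or launched through powershell.exe if script execution is restricted on the machine:

PS C:\Scripts> .\SplitFileByMonth.ps1
C:\> powershell.exe -ExecutionPolicy Bypass -File C:\Scripts\SplitFileByMonth.ps1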

Disclaimer

I welcome feedback on improving performance and will give credit to anybody that can improve on this.  I am NOT an expert in PowerShell and I am sure there are faster ways to accomplish this.  This created 12 files (1 year / 12 months) from a file that includes 7.8 million records and completed in 24 minutes.  So, this is pretty reasonable for one-off requests, but might need attention if it was a repeatable need.

This was developed using PowerShell 5, and some functions do not work in earlier versions.

PowerShell Script

#######################################################################
# Set the file to parse
# 
# Set the start year and end year
# 
# Change the counter if you want the files produced to start at
# something other than 1
#######################################################################
# Write a status to the screen to monitor progress
write-host "Processing started at $($(Get-Date).ToShortTimeString())"

# Update to point to the source file
$srcFile = "C:\Oracle\GCA\Data\Files\2015 Time Data\Time_DataPayAmount2015.csv" 

# Set to the first year you want to process
$startYear = 2015 
# Set to the last year you want to process
$currentYear = 2016 

# Used in the naming, is the starting number in name and increments by 1
$counter = 1 

# Get the first line (the header line) of the file 
$Header =  Get-Content $srcFile -First 1 

# Set the process name used in the file name 
$ProcessName = "Test Process" 

# Loop through each year in the range 
ForEach ($Years in $startYear..$currentYear )
 {
   # Loop through each month of the year
   ForEach ($months in 1..12 )
   {
     # Get the 3 month abbreviation of the month being processed
     $ShortMonth = (Get-Culture).DateTimeFormat.GetAbbreviatedMonthName($months)

     # Format year to FYxx (This is typically required on a Planning application)
     $FormattedYear = "FY" + 
     $Years.ToString().substring($Years.ToString().length - 2, 2)

     # Set the file name to a number starting with 1, the Month, and the year
     # Example: 01_ProcessName_Jan_2015.txt
     $FileOut = "{0:00}" -f $counter++ + "_" + $ProcessName + "_" + 
     $ShortMonth + "_" + $Years + '.txt'

     # Write out the header to the newly created file
     $Header | out-file -filepath $FileOut -Encoding utf8  

     # Write out all the lines that match the month and year. The pattern
     # includes a ".*", so a line must contain the processing year followed
     # somewhere later on the line by the processing month for it to be
     # included.
     select-string $srcFile -pattern "${FormattedYear}.*($($ShortMonth))" | 
     foreach {$_.Line} | out-file -filepath $FileOut -Encoding utf8 -Append

     # Write a status to the screen - this is not required but provides a level
     # of the current progress by communicating the Month/Year completed and the
     # time it completed  
     write-host $fileout "Completed at $($(Get-Date).ToShortTimeString())"
   }
 }

Conclusion

Hopefully this will benefit the community.  As I create more scripts like this, I plan to share them.




PBCS Pro Tip: Manage Multiple Test Accounts with One Gmail Address

Working with Jake Turrell always benefits me in many ways.  Jake found a fantastic way to minimize the effort it takes to create test accounts for testing and training Planning users.  You no longer have to create multiple new accounts.

“During the testing phase of most Planning implementations, developers need to create test user accounts.  I typically create at least one test user for each security group so I can verify that the correct access has been assigned.  With an on-premises Hyperion Planning implementation, this is easy – simply create user ID’s in the Shared Services native directory.  With PBCS, creating bulk test ID’s can be difficult, as each user ID requires a unique e-mail address.  If you need 50 test users, should you create 50 fake/temporary e-mail accounts?  Luckily the answer is no.”

Check out how here.

About Jake

Jake Turrell is a Hyperion Architect and Oracle Ace Associate with over 20 years of experience implementing Enterprise Performance Management solutions. Jake’s technology career began in the early 90’s as a Financial Systems intern at Dell in Austin, Texas, administering IMRS Micro Control (the DOS-based predecessor to Hyperion Enterprise). After working at Dell, Jake joined Ernst & Young’s Management Consulting practice where he worked with a variety of technologies. He later returned to the Hyperion world and joined a boutique Hyperion consulting firm in Dallas, Texas.
Jake has spent the last 17 years implementing Hyperion Planning and Essbase solutions for a variety of clients across multiple industries. Certified in both Hyperion Planning and Essbase, Jake holds a BBA from the University of Texas at Austin.




PBCS Release 16.06 Overview

PBCS is about to release a major upgrade (one of the two scheduled every year).  Oracle released a 29-page document laying out everything that should be expected.  Want the abbreviated version?

 

  • Want your users to see the simplified user interface? You will be able to make it the default.
  • Welcome to EPBCS. This enhanced version will include modules for Financials, Workforce, Projects, and Capital.
  • Users can now create dashboards that include editable forms and ad hoc grids, and include new chart types.
  • Forms, task lists, and reports can be viewed in either list view or hierarchical view.
  • You can now use an attribute dimension as a dimension, as a filter in forms and reports, and within ad hoc grids. Using attribute dimensions enables administrators and end users to perform tasks such as:
    1. Filtering data using attribute members, such as by products with a certain color
    2. Performing cross-dimensional rollups across attribute members
    3. Reporting and analysis with attribute dimension members using Smart View, or financial reports
    4. Using attribute dimensions in dynamic user variables 
Attribute dimensions are optional and are listed separately on the Layout tab of the Form Designer. Drag the Attribute dimension to a Point of View or to a row or column to add it to the form grid.
  • Administrators can create aliases for artifacts similar to alias tables where things like forms can be viewed in native languages.
  • For new applications, administrators can optionally choose a simplified multicurrency option during application creation. Using simplified currency avoids the use of the Hsp_Rates dimension and adds a Currency dimension with exchange rates stored in the Account dimension
  • You can now create smart lists based on dimension hierarchies. This dynamically updates smart list values based on member updates.
  • Form grid display can be tied to the start and end period for the respective scenarios on display
  • A new action menu in the console allows customers to clear specific areas within both input and reporting cubes
  • Users can now drill on shared members to get to the children of the base member.
  • Form designers can now prevent the form save confirmation message from being displayed to users by specifying an option in form design.
  • The usability and readability of forms is increased with duplicate aliases. Aliases can now contain the same name within an alias table and across alias tables.
  • Import Metadata functionality is extended to Microsoft Word.
  • You can now quickly add attribute dimensions to an ad hoc grid at any time during the ad hoc session.
  • In the Planning Admin Extension, you can now work with attribute dimensions and the Time Period dimension. Just as with regular dimensions, you can use the Planning Admin Extension in the Smart View application to quickly import and edit attribute and time dimension application metadata.
  • System Templates are now displayed under New Objects.
  • You can now add a warning or an error to a step using validation conditions. Errors prevent the next step. Warnings allow the next step after you click OK on the warning message. You can use a design-time prompt or function on the validation condition. This allows you to use functions on design-time prompts without having to create non-promptable design-time prompts.
  • When you are debugging business rules, a Condition Builder is now available to help you build conditions.
  • You can use the Member Selector dialog box to create MDX syntax and validation before running a partial clear.
  • The following new design-time prompt types are available:
    1. Percent
    2. Integer
    3. StringAsNumber
    4. DateAsNumber
    5. Smart List
    6. UDA

New Design-Time Prompt Functions

  • @AVAILDIMCOUNT – Returns the number of available dimensions.
  • @DEPENDENCY – “Inclusive” returns member(s) from Input 1 for which Input 2 has member(s) specified from the same dimensions. “Exclusive” returns members from Input 1 for which Input 2 has no specified members in the same dimensions.
  • @DIMATTRIBUTE – Returns the attribute name if the specified attribute is associated with a dimension.
  • @DIMNAME – Returns the name of a dimension if it is valid for the database.
  • @DIMUDA – Returns the UDA name if the specified UDA is valid for the dimension.
  • @EVALUATE – Returns the result of an expression.
  • @FINDFIRST – Finds the first substring of a string that matches the given regular expression.
  • @FINDLAST – Finds the last substring of a string that matches the given regular expression.
  • @GETDATA – Returns the value of the slice.
  • @INTEGER – Returns an integer.
  • @ISDATAMISSING – Returns true if the value of the slice is missing.
  • @ISANDBOXED – Determines if the current application is sandboxed.
  • @ISVARIABLE – Determines if the argument is a variable.
  • @MATCHES – Returns “true” if the first substring of a string matches the given regular expression.
  • @MEMBERGENERATION – Returns the generation number of a member.
  • @MEMBERLEVEL – Returns the level number of a member.
  • @MSGFORMAT – Takes a set of objects, formats them, and then inserts the formatted strings into the pattern at the appropriate places.
  • @OPENDIMCOUNT – Returns the number of dimensions for which a member was not specified.
  • @VALUEDIMCOUNT – Returns the number of dimensions for which a member was specified.
  • @TOMDX – Returns an MDX expression.

New Custom Defined Functions

  • @CalcMgrBitAnd – Performs a bitwise AND operation, which compares each bit of the first operand to the corresponding bit of the second operand. If both bits are 1, the corresponding result bit is set to 1; otherwise, the corresponding result bit is set to 0.
  • @CalcMgrBitOR – Performs a bitwise OR operation, which compares each bit of the first operand to the corresponding bit of the second operand. If either bit is 1, the corresponding result bit is set to 1; otherwise, the corresponding result bit is set to 0.
  • @CalcMgrBitExOR – Performs an exclusive bitwise OR operation, which compares each bit of the first operand to the corresponding bit of the second operand. If exactly one of the two bits is 1, the corresponding result bit is set to 1; otherwise, the corresponding result bit is set to 0.
  • @CalcMgrBitExBoolOR – Performs an exclusive boolean bitwise OR operation.
  • @CalcMgrBitCompliment – Performs a unary bitwise complement, which reverses each bit.
  • @CalcMgrBitShiftLeft – Performs a signed left shift.
  • @CalcMgrBitShiftRight – Performs a signed right shift.
  • @CalcMgrBitUnsignedShiftRight – Performs an unsigned right shift.
  • @CalcMgrCounterClearAll – Removes all keys and values from the counter
  • @CalcMgrCounterClearKey – Removes the value from the counter associated with the key
  • @CalcMgrCounterDecrement – Decrements the value in the counter based on the key. If the key is not found, a value of zero is set for the key
  • @CalcMgrCounterDecrementKey – Decrements the value in the counter based on the key. If the key is not found, a value of zero is set for the key
  • @CalcMgrCounterGetKeyNumber – Returns the number found in the counter based on the key. If the key is not found, missing value is returned.
  • @CalcMgrCounterGetKeyText – Returns the text found in the counter based on the key. If the key is not found, missing value is returned.
  • @CalcMgrCounterGetNumber – Returns the number from the counter specified by the key. If the key is not found or the value is not a number, missing value is returned.
  • @CalcMgrCounterGetText – Returns the text found in the counter based on the key. If the key is not found, missing value is returned.
  • @CalcMgrCounterIncrement – Increment the value in the counter specified by the key
  • @CalcMgrCounterIncrementKey – Increments the value in the counter based on the key. If the key is not found, a value of zero is set for the key.
  • @CalcMgrExcelToDate – Converts an Excel date to YYYYMMDD format.
  • @CalcMgrExcelToDateTime – Converts an Excel date to YYYYMMDDHHMMSS format.
  • @CalcMgrGetStringFormattedDateTime – Converts the date defined by format to date in the YYYYMMddHHmmss format.
  • @CalcMgrDateToExcel – Converts a date in YYYYMMDD format to an Excel date
  • @CalcMgrDateTimeToExcel – Converts a date in YYYYMMDDHHMMSS format to an Excel date
  • @CalcMgrRollDay – Roll the day up or down to the date which is in the YYYYMMDD format
  • @CalcMgrRollDate – Adds or subtracts (up or down) a single unit of time on the given date field without changing larger fields. Possible values of date_part are: day, month, week, and year.
  • @CalcMgrRollMonth – Roll the month up or down to the date which is in the YYYYMMDD format.
  • @CalcMgrRollYear – Roll the year up or down to the date which is in the YYYYMMDD format.
  • @CalcMgrExcelACCRINT – Returns the accrued interest for a security that pays periodic interest
  • @CalcMgrExcelACCRINTM – Returns the accrued interest for a security that pays interest at maturity
  • @CalcMgrExcelAMORDEGRC – Returns the depreciation for each accounting period by using a depreciation coefficient
  • @CalcMgrExcelAMORLINC – Returns the depreciation for each accounting period
  • @CalcMgrExcelCOUPDAYBS – Returns the number of days from the beginning of the coupon period to the settlement date
  • @CalcMgrExcelCOUPDAYS – Returns the number of days in the coupon period that contains the settlement date
  • @CalcMgrExcelCOUPDAYSNC – Returns the number of days from the settlement date to the next coupon date
  • @CalcMgrExcelCOUPNCD – Returns a number that represents the next coupon date after the settlement date
  • @CalcMgrExcelCOUPNUM – Returns the number of coupons payable between the settlement date and maturity date, rounded up to the nearest whole coupon
  • @CalcMgrExcelCOUPPCD – Returns a number that represents the previous coupon date before the settlement date
  • @CalcMgrExcelCUMIPMT – Returns the cumulative interest paid on a loan between start_period and end_period
  • @CalcMgrExcelCUMPRINC – Returns the cumulative principal paid on a loan between the start period and the end period
  • @CalcMgrExcelDB – Returns the depreciation of an asset for a specified period using the fixed-declining balance method
  • @CalcMgrExcelDDB – Returns the depreciation of an asset for a specified period using the double-declining balance method or some other method you specify
  • @CalcMgrExcelDISC – Returns the discount rate for a security
  • @CalcMgrExcelDOLLARDE – Converts a dollar price expressed as an integer part and a fraction part, such as 1.02, into a dollar price expressed as a decimal number. Fractional dollar numbers are sometimes used for security prices.
  • @CalcMgrExcelDOLLARFR – Converts a dollar price, expressed as a decimal number, into a dollar price, expressed as a fraction
  • @CalcMgrExcelDURATION – Returns the annual duration of a security with periodic interest payments
  • @CalcMgrExcelEFFECT – Returns the effective annual interest rate
  • @CalcMgrExcelFV – Returns the future value of an investment
  • @CalcMgrExcelFVSCHEDULE – Returns the future value of an initial principal after applying a series of compound interest rates
  • @CalcMgrExcelINTRATE – Returns the interest rate for a fully invested security
  • @CalcMgrExcelIPMT – Returns the interest payment for a given period for an investment based on periodic, constant payments and a constant interest rate
  • @CalcMgrExcelIRR – Returns the internal rate of return for a series of cash flows
  • @CalcMgrExcelISPMT – Calculates the interest paid during a specific period of an investment
  • @CalcMgrExcelMDURATION – Returns the Macauley modified duration for a security with an assumed par value of $100
  • @CalcMgrExcelMIRR – Returns the internal rate of return where positive and negative cash flows are financed at different rates
  • @CalcMgrExcelNOMINAL – Returns the annual nominal interest rate
  • @CalcMgrExcelNPER – Returns the number of periods for an investment
  • @CalcMgrExcelNPV – Returns the net present value of an investment based on a series of periodic cash flows and a discount rate
  • @CalcMgrExcelPMT – Returns the periodic payment for an annuity
  • @CalcMgrExcelPPMT – Returns the payment on the principal for a given period for an investment based on periodic, constant payments and a constant interest rate
  • @CalcMgrExcelPRICE – Returns the price per $100 face value of a security that pays periodic interest
  • @CalcMgrExcelPRICEDISC – Returns the price per $100 face value of a discounted security
  • @CalcMgrExcelPRICEMAT – Returns the price per $100 face value of a security that pays interest at maturity
  • @CalcMgrExcelPV – Returns the present value of an investment
  • @CalcMgrExcelRATE – Returns the interest rate per period of an annuity
  • @CalcMgrExcelRECEIVED – Returns the amount received at maturity for a fully invested security
  • @CalcMgrExcelSLN – Returns the straight-line depreciation of an asset for one period
  • @CalcMgrExcelSYD – Returns the sum-of-years’ digits depreciation of an asset for a specified period
  • @CalcMgrExcelTBILLEQ – Returns the bond-equivalent yield for a Treasury bill
  • @CalcMgrExcelTBILLPRICE – Returns the price per $100 face value for a Treasury bill
  • @CalcMgrExcelTBILLYIELD – Returns the yield for a Treasury bill
  • @CalcMgrExcelXIRR – Returns the internal rate of return for a schedule of cash flows that is not necessarily periodic
  • @CalcMgrExcelXNPV – Returns the net present value for a schedule of cash flows that is not necessarily periodic
  • @CalcMgrExcelYIELD – Returns the yield on a security that pays periodic interest
  • @CalcMgrExcelYIELDDISC – Returns the annual yield for a discounted security; for example, a Treasury bill
  • @CalcMgrExcelYIELDMAT – Returns the annual yield of a security that pays interest at maturity
  • @CalcMgrExcelCEILING – Rounds a number up (away from zero) to the nearest integer or to the nearest multiple of significance
  • @CalcMgrExcelCOMBIN – Returns the number of combinations for a given number of objects
  • @CalcMgrExcelEVEN – Rounds a number up to the nearest even integer
  • @CalcMgrExcelFACT – Returns the factorial of a number
  • @CalcMgrExcelFACTDOUBLE – Returns the double factorial of a number
  • @CalcMgrExcelFLOOR – Rounds a number down, toward zero
  • @CalcMgrExcelGCD – Returns the greatest common divisor
  • @CalcMgrExcelLCM – Returns the least common multiple
  • @CalcMgrExcelMROUND – Returns a number rounded to the desired multiple
  • @CalcMgrExcelMULTINOMIAL – Returns the multinomial of a set of numbers
  • @CalcMgrExcelODD – Rounds a number up to the nearest odd integer
  • @CalcMgrExcelPOWER – Returns the result of a number raised to a power
  • @CalcMgrExcelPRODUCT – Multiplies its arguments
  • @CalcMgrExcelROUNDDOWN – Rounds a number down, towards zero
  • @CalcMgrExcelROUNDUP – Rounds a number up, away from zero
  • @CalcMgrExcelSQRT – Returns a positive square root
  • @CalcMgrExcelSQRTPI – Returns the square root of (number * pi)
  • @CalcMgrExcelSUMSQ – Returns the sum of the squares of the arguments
  • @CalcMgrExcelSUMPRODUCT – Returns the sum of the products of corresponding array components
  • @CalcMgrExcelAVEDEV – Returns the average of the absolute deviations of data points from their mean
  • @CalcMgrExcelDEVSQ – Returns the sum of squares of deviations
  • @CalcMgrExcelLARGE – Returns the nth highest number
  • @CalcMgrExcelMEDIAN – Returns the median of the given numbers
  • @CalcMgrExcelSMALL – Returns the nth smallest number
  • @CalcMgrExcelSTDEV – Estimates standard deviation based on a sample
  • @CalcMgrExcelVAR – Estimates variance based on a sample
  • @CalcMgrExcelVARP – Estimates variance based on the entire population
  • @CalcMgrFindFirst – Finds the first substring of this string that matches the given regular expression.
  • @CalcMgrFindLast – Finds the last substring of this string that matches the given regular expression.
  • @CalcMgrMatches – Returns true if the first substring of this string matches the given regular expression. For regular expressions, see "java.util.regex.Pattern" in the Java docs.
  • @CalcMgrMessageFormat – Creates a string with the given pattern and uses it to format the given arguments.
  • @CalcMgrStartsWith – Tests if this string starts with the specified prefix.

Security

  • While the overall access rights granted to a user are controlled by the assigned identity domain role, Service Administrators can use the Access Control feature from the Console to assign additional application-level access by provisioning users and Native Directory groups with application-specific roles. For instance, a Planner in the service can now be assigned the Approvals Administrator role to enable the user to perform approvals-related activities.



Map Smart List to Dimension

As you may have noticed, PBCS does not yet offer the ability to create attribute dimensions.  One solution to get around this is to map a smart list to a dimension in the map reporting module.  Follow the steps below to configure your mapping.

Let’s assume that you already have 2 plan types, one BSO and one ASO. In the example below, my plan type names are Finance for the BSO and rFinance for the ASO. In the project where I implemented this solution, I only needed 2 different attributes (Department and Service type) and I didn’t need any customized dimensions.

  • First you need to create the ASO dimensions representing the desired attributes. In my example I have two different attributes: Department and Service Type. On each of these dimensions, create the members representing the values of your attribute. Please note that the member names must match the smart list values; consequently, they need to follow the smart list naming convention (no spaces or special characters). You will also notice that for each of the dimensions I have created a default member (No_***); I will explain why a bit later in the post.
  • The second step is to create the smart lists. To do that, click on Administration -> Manage -> Smart Lists, then create the new smart lists by clicking Add.

    For now, just create the smart lists; we will add values in later steps.
  • Now, we will create the BSO members linked to the smart lists. Go to your BSO outline and, in the Account dimension, create four members: Department Property, Department Property Input, Service Type Property, and Service Type Property Input. Link these members to the correct smart list.
  • Once this step is finished, refresh the application by clicking on Administration -> Application -> Refresh Database.
  • Now that all members and smart lists are created, let's create the map reporting application. To do that, click on Administration -> Map Reporting Application, then click Add.

    On the first tab, choose your source (Finance) and reporting (rFinance) applications, and don't forget to give the mapping a name.
    On the second tab, for the Department and Service Type rows, choose the Smart List to Dimension mapping type and then choose the appropriate smart list. In the member section, choose the Department Property and Service Type Property members and click Save.
    Now you should see on the screen the application mapping that you have just created. Select it and refresh it by clicking Refresh.
  • Now we will go back to the smart lists that we created in the second step. Since we have linked these smart lists to a dimension, we can now update them automatically. This can save you a lot of time if you have smart lists with 50 members or more.
    To do this, go to the smart list screen (see step 2). Once you are there, select the smart list and click Synchronize.

    Once this is done, the message below should appear.
  • In order to make the smart list to dimension mapping work, we need to assign an attribute value to every intersection for which we have data. For easy maintenance, and to avoid business rules that run for too long, we will do this by attaching a formula to the Department Property and Service Type Property members. Go to your outline and edit these members by making them dynamic and adding the formula that you see in the screen shot below (this formula might need to be updated depending on the name and number of dimensions in your application). You will see in the formula that I default the value of the attribute to No_Department; this is because you cannot leave any intersection #missing, as that will cause the map reporting to error.
  • The final step is to create the form where the users can set up the attribute of their choice for every entity.
    Go to the form panel to create your form. In the POV section, put the member used in the formula above. In the rows, put the level 0 entities, and in the columns, put the Department Property Input and Service Type Property Input members.
    You can now push your data using the map reporting application created earlier.
Pro: This solution is very easy for the admin to maintain, and if needed you can also let a user manage the attributes.
Con: If you have too many attributes, the application can become too big and consequently slower.



Essbase Security: Setting Filters to Groups

For most Essbase applications, user and group security will be a necessity. Here are the steps to set up individual filters and then apply them to a group in Shared Services.

First, create a security filter in Essbase:

Then click on “New” and add read/write access for the filter:

Here is an example of the member specification for filter access (the members shown below are hypothetical):
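
@IDESCENDANTS("East"), "Budget"

A filter row simply pairs a read or write access level with a member specification like the one above; member set functions such as @IDESCENDANTS can be combined with explicitly quoted member names.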

Next, click Verify and then Save at the bottom of the page.

The next step is to login to Shared Services and create a new group:

The group name should match the filter name to reduce opportunities for confusion. While creating the group, add group/user members:

Next, the group will need to be provisioned for access to the desired application:

For Read/Write access only, assign “Filter” to the group:

For access to run calc scripts on the application along with Read/Write access, assign “Calc” to the group:

The next step is the part that has always been the trickiest piece for me. Right click on the application under Application Groups and select Access Control:

Search for the desired group and move it to the selection window on the right:

Select the desired group and then use the filter & calc dropdowns to select the required filters and/or calc scripts to assign to the group:

Click Save after the desired access control for the group has been set. Remember, calcs can only be assigned if the group was given “Calc” provisioning for the application.

Now the security filter has been successfully assigned to a group in Shared Services.




One at a time, please

Introduction

One of the problems with giving users of Hyperion Planning the ability to run calculations is opening up the possibility for all of them to run the same calculation at the same time.  This can cause a range of issues, from slower performance, to calculations never finishing due to locked blocks, to crashing the server.

Prior to Planning, I created VB applications to monitor what was calculated to make sure multiple calculations were not executed at the same time.  Initiating a calculation through a web portal allowed us to notify the user that the calculation request was ignored because a calculation was already running.

Both Essbase and Planning have come a long way since the 90s.  With the introduction of the @RETURN function, developers can interact with users and create a break in a calculation (business rule) so it doesn’t proceed.  The message is still reactive, but with some creativity, there are some really awesome things you can achieve.  Controlling what calculations are executed simultaneously is one of those things.

The Goal

Assume an application has a global consolidation calculation that is required to be executed for reporting requirements.  Since the administrators don’t want to be bothered at all hours of day and night, they want to enable the users to run the calculation and ensure it isn’t run more than one time during the calculation window.

This assumes the 6 required dimensions in Planning, plus a Department dimension.

The Method

 

Make a predefined placeholder where an indicator can be saved – a 1 or a 2.  When the calculation is executed, the value will be set to a 1.  When the calculation is finished, the value will be set to 2.  When the calculation is initiated, it will check that value.  If it is a 2, the calculation will execute.  If it is a 1, it assumes a calculation is already running, so it will abort and notify the user.  This ensures that the calculation will never run twice at the same time.

Note:  I prefer the use of 1 and 2 over 0 and 1.  Many times a process is implemented to eliminate zeros and restructure the application periodically.  Not using a zero can eliminate errors in some situations.

Example

FIX("No Entity","No_Dept","No Account","Budget","FY15","BegBalance")    SET CREATEBLOCKONEQ ON;    "Working"(      /* Check to see if a calculation is running         If the flat is a 1, return a message and stop the calculation         If the flag is a 2, continue */      IF("Working" == 1)        @RETURN ("This calculation is already running.  Please come back at a
                       later time and try again.", ERROR);      ELSE        "Working" = 1;      ENDIF)    SET CREATEBLOCKONEQ OFF;  ENDFIX     /* Aggregate the database */  FIX("Working","Budget","FY15")    AGG("Entity","Department");  ENDFIX     /* Set the flag back to 2 */  FIX("No Entity","No_Dept","No Account","Budget","FY15","BegBalance")    "Working" = 2;   ENDFIX

Summary

This method could be used in a variety of situations, not just a global calculation.  If this inspires you to use @RETURN in other ways, please share them with In2Hyperion and we can make your solution available to everybody.