
Upgrade Or Downgrade To Or From Hybrid In The Cloud

There are benefits to moving to Hybrid, but there are also some challenges.  This post is not about the pros and cons; it is about the fact that you can upgrade your environment to use it.  If you find it isn’t for you, you can “downgrade” back to BSO.  That flexibility gives everybody the ability to try it.



Recreate Introduction

EPMAutomate comes with a function that allows you to restore an environment to a clean slate.  I don’t think this is new to anybody who has used EPMAutomate or EPM Cloud Planning.  What might be a surprise is that it does more than just reset an environment so you can start over.  It can also:

  1. Change the type of Essbase database to Hybrid or a standard BSO.
  2. Temporarily convert a Planning, Enterprise Planning, Tax Reporting, or Financial Consolidation and Close environment to an Account Reconciliation, Oracle Enterprise Data Management Cloud, or Profitability and Cost Management environment.

Using Recreate

The usage of the Recreate command, with all options, is as follows.

 epmautomate recreate [-f] [removeAll=true|false] [EssbaseChange=Upgrade|Downgrade] [TempServiceType=Service_type]
  • -f forces the re-create process to start without user confirmation. If you do not use the -f option, EPM Automate prompts you to confirm your action. Be careful with this option; if you are having a long day and aren’t focused, it can make the day a whole lot worse!
  • removeAll removes all of the existing snapshots, as well as the content of the inbox and outbox.  The default is false, meaning the snapshots and the contents of the inbox and outbox are retained.
  • EssbaseChange upgrades or downgrades the current Essbase version in legacy Oracle Financial Consolidation and Close Cloud, Oracle Enterprise Planning and Budgeting Cloud or Planning and Budgeting Cloud Plus 1 environments.
  • TempServiceType temporarily converts an environment to a different service environment.

Changing Your Essbase Version

To change your environment to BSO from Hybrid

epmautomate recreate EssbaseChange=downgrade

To change your environment to Hybrid from BSO

epmautomate recreate EssbaseChange=upgrade

Trying A Different Service

There are some details that must be understood to use this option, and they depend on the version of the cloud service you have.  For subscriptions other than EPM Standard Cloud Service and EPM Enterprise Cloud Service, meaning PBCS and EPBCS, you can use this option to temporarily convert to:

  • Account Reconciliation
  • Oracle Enterprise Data Management Cloud
  • Profitability and Cost Management environment

To use this option to convert your environment to something it wasn’t originally intended for:

epmautomate recreate -f removeAll=true TempServiceType=ARCS

To change your environment back to its original service:

epmautomate recreate

For EPM Standard Cloud Service and EPM Enterprise Cloud Service subscriptions, you can use this option to convert to any supported EPM Cloud service.  EPM Enterprise Cloud Service subscriptions use a common EPM Cloud platform. Initially, you can deploy any supported EPM Cloud business process. 

To switch from a deployed business process to another, you must re-create the environment to delete the current deployment and to bring it back to the original EPM Cloud platform. You then re-create it again as the new service type.

For example, if you created an Account Reconciliation business process but now want to create an Oracle Enterprise Data Management Cloud environment, you must run the re-create command twice.

First, reset the service.

epmautomate recreate -f removeAll=true

Second, change the service type.

epmautomate recreate -f TempServiceType=EDMCS

The acceptable service types, currently, are:

  • ARCS (Account Reconciliation)
  • EDMCS (Oracle Enterprise Data Management Cloud)
  • EPRCS (Narrative Reporting)
  • PCMCS (Profitability and Cost Management)

That’s A Wrap

It is great that Oracle allows us to do these things. We have a ton of flexibility, not normally afforded to us in the cloud, to test and use different core database types.  It also allows those using the old SKU to try the new services, or business processes, and see if they might be worth purchasing.

If you want to give Hybrid a try, use your test environment and give it a spin.  If you want to get exposed to one of the other business processes, you now have the ability to see it without jumping through hoops.




EPMAutomate And Apple – Overcoming Installation Woes

I love my Mac and I am getting really close to not needing my Windows VM. I don’t think Smartview for Office 365 is going to be a replacement for the Windows version in my lifetime, but that is the only thing really making me keep my VM current.

Installation Woes

First, I am not a UNIX guy.  I love some of the functionality in Terminal, especially for manipulating files.  But I have tried to get EPMAutomate installed and working for a year and have just about given up.  Every few months I try again and fail.  For all you Unix/Linux people, I am going to embarrass myself.  For everybody else having the same challenges, I think this will get you over the hump.

Choose Your Poison

You probably know you can run commands in Terminal, and you probably assume Bash is the default scripting language.  As of Catalina, Apple uses Z shell (zsh) as the default.  You can change this if you want.  I am not an expert, but from everything I have read, people really like Z shell and prefer it over Bash.

But if you want to change it, or just see what your default is, open System Preferences and go to Users & Groups.  Here is a trick: if you hold Control and click on your profile name, an Advanced Options choice appears.  Click that and you will see the login shell is probably set to /bin/zsh, which is Z shell.  You can change this to /bin/bash if you want to use Bash.
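If you would rather stay in Terminal, the following standard macOS commands show your current login shell and change it (chsh will prompt for your password):

echo $SHELL
chsh -s /bin/bash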

It is important to know the above because you will want to set some environment variable defaults, which I will get to shortly, and you have to know the default to update the appropriate file.

Installing EPMAutomate

First, download EPMAutomate the same way you download the other utilities.  For cloud users, click on your user name after you log in and you will see a Downloads option.  Download the Linux/Mac version.  It doesn’t matter where you download it, but if you download it to your home directory (the parent of Downloads, Documents, etc.), it is easier to install in that location.  This is my preferred area because my path to the tool is shorter and it is separated from my other content.  The download is a zip file, so unzip it to your preferred location.  There will be an epmautomate folder, inside that folder will be a bin folder, and the shell command is inside that.

You are finished with the easy part.

Configuring EPMAutomate

This is where I struggled and gave up a number of times.  Thank goodness I finally figured it out, because I was really tired of using my VM when I needed to run quick commands to do basic things.  There are a few things that have to be done to get it to work effectively.  First, for me, my Java home directory was not set, so I edited epmautomate.sh to include it.  Every time I updated the version, I had to go back in and do it again.  The other frustrating thing was that I couldn’t figure out how to include the bin folder noted above in my default profile’s path.  So, I always had to execute epmautomate with a full path or drill into the folder to make it work.  I also had to prefix it with a dot to get it to run.

You may have known enough to know you had to set your environment variables.  You may have known that a file exists, or should exist, named .profile in your home directory.  You may have even known how to edit that file, and that it is hidden (anything with a dot in front is a hidden file).  If you went down this path, tried to update your profile, and it didn’t work, here is why.

If your default scripting language is Z shell, or zsh, it looks for a profile named .zprofile, not .profile or .bash_profile!  There are several editors, but the easiest one for me to use is nano.  Open Terminal and enter the following command to edit (or create) the profile.

nano .zprofile

Update .zprofile

When you open the file, it will likely be blank.  There are two things we need to add.  First create your JAVA_HOME variable.  It will look something like this.

export JAVA_HOME='/Library/Java/JavaVirtualMachines/jdk-13.0.2.jdk/Contents/Home'

If you don’t have a Java JDK installed, download and install the latest JDK.  Once it is installed, type the following into Terminal, which will return the path of the most recent version if multiple versions are installed.

/usr/libexec/java_home

If you want a specific version, add the version to the end of the command, like this.

/usr/libexec/java_home -v 1.8
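If you would rather not hard code the version in your profile, you can wrap that same command so JAVA_HOME always resolves to the most recent JDK (a small convenience; it assumes the JDK is installed in the default location):

export JAVA_HOME=$(/usr/libexec/java_home)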

Next, add the path of where you unzipped EPMAutomate to your path.  This should include the bin directory.  Your path may be different than what I chose.  The following will update the path variable to append your EPMAutomate path.

export PATH=$PATH:~/epmautomate/bin

Your .zprofile will look something like this.
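export JAVA_HOME='/Library/Java/JavaVirtualMachines/jdk-13.0.2.jdk/Contents/Home'
export PATH=$PATH:~/epmautomate/bin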

Hold the Control key down and hit X.  This will exit and ask you to save your changes (hit Y to confirm, then Enter to keep the file name).

Test Your Variables

Close your terminal window and open a new one so your profile loads.  You can check that the variables are set by entering the following.  Each should return the variable: JAVA_HOME should return your Java path, and PATH should include your EPMAutomate directory.

echo $JAVA_HOME
echo $PATH

At this point, if you enter epmautomate.sh, it should return the current version installed.

That’s A Wrap

There you go, use it just like on Windows!  Rather than epmautomate or epmautomate.bat, you enter epmautomate.sh.  Other than that, everything is pretty much the same.  It seems simple, but it took me forever to figure out how to add my variables.  All my script templates are updated and I am loving it!
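As a quick sanity check, a session looks something like this (the user, password, and URL are placeholders for your own environment):

epmautomate.sh login myuser mypassword https://planning-test-mydomain.pbcs.us2.oraclecloud.com
epmautomate.sh listfiles
epmautomate.sh logout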

 




Adventures in Groovy – Part 46: Start Making Rules More Reusable, Part 1

One thing that I have spent a lot of time on is making calculations independent of forms so that they can be used on any form.  For example, rather than hard coding a script to look at a form with one column header and one row header, I am now building things to be dynamic so that the POV, the rows, and columns all are read dynamically and identifying the edited cells is independent of the source it is looking at.  This will be a multi-post article because there is a lot to cover.

Think Different

I have not talked much about some of the classes and methods in the API that might seem less functional.  Most of my examples don’t go into their use.  The reason is solely to break concepts apart and not overload people with everything at once.  What if I told you you could eliminate most of your substitution variables?  What if I said you only need one data map?  What if I told you that you could use rule properties like never before?  Spoiler alert, you can!

Building The Foundation

The first concept I want to share is a simple one, but it is the start of making your scripts dynamic.  When working with grid builders, a reference has to be made to the plan type.  This is a very simple thing but has to be changed for different plan types.  You might first think to put this in a script and embed the script, but there is an easier way that makes it completely independent of the plan type or application.

Everything I have shown you with grid builders starts with this.  This means that every rule has to have a hard-coded plan type in it.

Cube cube = operation.application.getCube("plantypename")

Well, this isn’t the case.  It can be done without hard coding the plan type name.   The cube variable can be set by getting the cube that the rule was created in, so the rule will work on any plan type in any application.

Cube cube = rule.getCube()

I can take that one step further and eliminate the cube variable all together.

DataGridBuilder builderSubmit = rule.getCube().dataGridBuilder("MM/DD/YYYY")
//or
DataGridBuilder builderSubmit = rule.cube.dataGridBuilder("MM/DD/YYYY")

Grid builders aren’t the only class that uses this.  If you are doing anything with metadata, this will also benefit those scripts.

Dimension productDim = operation.application.getDimension("Period", cube)
// can be changed to
Dimension productDim = operation.application.getDimension("Period", rule.cube)

That’s A Wrap

We now have access to all kinds of things through these classes. The application class exposes the currency mode and the default currency.  We also have access to the smart lists.  Could we use those in calculations?  Maybe a smart list acts as a map, where the label and description define an account conversion between plan types?  We can get the dimensions, so a calculation could check whether a dimension exists.  Maybe we could dynamically create fix statements based on the dimensions in the cube (aggregate everything that is sparse).  User variables can be set, so if a calculation runs and the user variables aren’t set, we could ask the user to set them with an RTP and then continue the calculation.

The rule class has methods to get the description, the name, and the rule properties.  I can’t say I have done it, but maybe the description could be used as a variable?  Maybe the name follows a convention where specific characters mean something and are brought into the rule, like the scenario name?

Some of these are just thoughts, and some of them are things that we could implement and use.  My point is that there are all kinds of things we have access to dynamically that we didn’t before.  The apps I am building don’t have variables for the open periods anymore because I can get them dynamically in the calculation based on the scenario being calculated.  No more start and end month.  No more current month.  And, if they are needed for reports or forms, have the calculation set them if they are wrong.

So, what are you thinking?  Do you have something you have done that you couldn’t do before Groovy?  Share it by commenting.




Updating EPM Automate Just Got Easier

Introduction

One of the challenges with EPM Automate was eliminated this month.  Although it was a minor issue, the need to update EPM Automate regularly was something that had to be considered monthly. Administrators of PBCS do not always have access to the on-premise footprint, like a Windows VM, that runs the automation.  Even more frequently, access to the production VM is limited to IT staff, so updating that environment is more controlled and has to be scheduled.  That schedule doesn’t always sync up with the changes in PBCS.

Update Command

As of the 02.18 release, a new command is available.  The upgrade command will automatically download, and silently install, the newest version of the EPM Automate utility.  Once logged in, execute the following command.

epmautomate upgrade

If you are a frequent visitor, you know I am a fan of PowerShell.  All the automation I do with EPM Automate in a Windows environment utilizes this free scripting tool.  This command has been added to all my new projects so there is no manual effort in keeping the utility current.  This also eliminates any issues that pop up due to incompatibility with PBCS.

In my reusable scripts, this new function has been added.

function EPMA_Upgrade{
  $CmdLine = "upgrade"
  $ReturnCode = Start-Process "$EPMAutomate_Path\epmautomate.bat" $CmdLine -Wait -passthru -WindowStyle $ShowDosWindow
  LogResult "EPM Automate has been updated" $ReturnCode.ExitCode
}

During a nightly process, the function is referenced.  If the request fails, the administrators are emailed.

$ReturnResult = EPMA_Upgrade
if( $ReturnResult -eq 1){Send_Email_Error;Exit}

Summary

This is a welcome addition.  Now administrators and application owners don’t have to worry about using new features or keeping EPM Automate in sync with the active version of PBCS.  As the great Forrest Gump would say – “One Less Thing.”




Supercharge PBCS with PowerShell

Last year I presented an in-depth overview of PowerShell and how it can be utilized in a Hyperion environment.  I have been asked many times to share it. It is a technical presentation meant to provide a strong introductory foundation for anybody who wants to start using PowerShell to automate repetitive tasks.  I have built a large library of shared functions that can be used to automate PBCS and ePBCS, and I plan to share pieces of it in future posts.

For now, anybody that is interested in learning PowerShell, or has used it and doesn’t know why some things work and others don’t, this might prove to be a valuable resource.

 




Remove Dimensions From Planning LCM Extracts

Problem

I am currently working with a client that is updating a Planning application, and one of the changes is to remove a dimension.  After the new application was set up and the hierarchies were modified to meet the objectives, migrating artifacts was the next step.  As many of you know, if you try to migrate web forms and composite forms, they will error during the migration due to the additional dimension in the LCM file.  It wouldn’t be a huge deal to edit a few XML files, but when there are hundreds of them, it is extremely time consuming (and boring, which is what drove me to create this solution).

Assumptions

To fully understand this article, a basic understanding of XML is recommended.  The example below assumes an LCM extract was run on a Planning application and it will be used to migrate the forms to the same application without a CustomerSegment dimension.  It is also assumed that the LCM extract has been downloaded and decompressed.

Solution

I have been learning and implementing PowerShell scripts for the last 6 months and am overwhelmed by how easy it is to complete complex tasks.  So, PowerShell was my choice to modify these XML files in bulk.

It would be great to write some long article on how smart this solution is and overwhelm you with my wit, but there is not much to it.  A few lines of PowerShell will loop through all the files and remove the XML tags related to a predefined dimension.  So, let’s get to it.

Step 1 – Understand The XML

There are two folders of files we will look at.  Forms are under the plan type, and composite forms are under the global artifacts.  Both of these are located inside the resource folder.  If there are composite forms that hold the dimension in question as a shared dimension, both will need to be updated.  Scripts are included below to update both of these areas.

Inside each of the web form files will be a tag for each dimension, and it will vary in location based on whether the dimension is in the POV, page, column, or row.  In this particular example, the CustomerSegment dimension is in the POV section.  What we want to accomplish is removing the <dimension/> tag where the name attribute is equal to CustomerSegment.

For the composite forms, the XML tag is slightly different, although the concept is the same.  The tag in composite form XML files is <sharedDimension/> and the attribute is dimension, rather than name.

Step 2 – Breaking Down the PowerShell

The first piece of the script just sets a couple of variables so the script can be changed quickly and used wherever and whenever it is needed.  The first variable is the path of the Data Forms folder to be processed.  The second is the dimension to be removed.

# Identify the source of the Data Forms folder and the dimension to be removed
$lcmSourceDir = "<path to the plan type's Data Forms folder in the LCM extract>"
$dimName = "CustomerSegment"

The next piece of the script is recursing through the folder and storing the files in an array.  There is a where statement to exclude directories so the code only executes on files.

# List all files, recursively, that exist in the path above
$files = Get-ChildItem $lcmSourceDir -Recurse | 
where {$_.Attributes -notmatch 'Directory'} |

Step 3 – Removing The Unwanted Dimension

The last section of the script does most of the work.  This will loop through each file in the $files array and

  1. Opens the file
  2. Loops through all tags and deletes any <dimension/> tag with a name attribute with a value equal to the $dimName variable
  3. Saves the file

# Loop through the files and find an XML tag equal to the dimension to be removed
Foreach-Object {

  [xml]$xml = Get-Content $_.FullName
  $node = $xml.SelectNodes("//dimension") |
  Where-Object {$_.name -eq $dimName} | ForEach-Object {
    # Remove each node from its parent
    [void]$_.ParentNode.RemoveChild($_)
  }
  $xml.save($_.FullName)
  Write-Host "$($_.FullName) updated."
}
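Putting those pieces together, the full web forms script looks something like this (written as a single pipeline; the Data Forms path is a placeholder for the plan type folder in your extract):

$lcmSourceDir = "<path to the plan type's Data Forms folder in the LCM extract>"
$dimName = "CustomerSegment"

# List all files, recursively, excluding directories, then remove any matching <dimension/> tag
Get-ChildItem $lcmSourceDir -Recurse | where {$_.Attributes -notmatch 'Directory'} | Foreach-Object {
  [xml]$xml = Get-Content $_.FullName
  $xml.SelectNodes("//dimension") | Where-Object {$_.name -eq $dimName} | ForEach-Object {
    # Remove each node from its parent
    [void]$_.ParentNode.RemoveChild($_)
  }
  $xml.save($_.FullName)
  Write-Host "$($_.FullName) updated."
}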

Executing The Logic On Composite Forms

The concepts above apply in exactly the same way to the composite forms files in the LCM.  If this is compared to the script applied to the web forms files, there are three differences.

  1. The node, or XML tag, that needs to be removed is called sharedDimension, not dimension.
  2. The attribute is not name in this instance, but is called dimension.
  3. A counter has been added to identify whether the file contains the dimension to be removed, and the file is only saved if it was altered.

The Script

$lcmSourceDir = "Z:\Downloads\KG04\HP-SanPlan\resource\Global Artifacts\Composite Forms"
$dimName = "CustomerSegment"
# List all files
$files = Get-ChildItem $lcmSourceDir -Recurse | where {$_.Attributes -notmatch 'Directory'} |
# Remove CustomerSegment
Foreach-Object {
  # Reset a counter to 0 - used later when the file is saved
  $fileCount = 0

  [xml]$xml = Get-Content $_.FullName
  $node = $xml.SelectNodes("//sharedDimension") | Where-Object {$_.dimension -eq $dimName} | ForEach-Object {
    # Increase the counter for each node that matches the criteria
    $fileCount++
    # Remove each node from its parent
    [void]$_.ParentNode.RemoveChild($_)
  }
  # If the dimension was found in the file, save the updated contents.
  if($fileCount -ge 1) {
    $xml.save($_.FullName)
    Write-Host "$($_.FullName) updated."
  }
}

Summary

The first script may need to be run on multiple plan types, but the result is an identical folder structure with altered files that have the identified dimension removed.  This can be zipped, uploaded to Shared Services, and used to migrate the forms to the application that has the dimension removed.

The scripts above can be copied and pasted into PowerShell, or the code can be downloaded.




Use PowerShell to split large files by month/year for data loads into FDMEE on PBCS

If you are using PBCS, you may run into some challenges with large files being passed through FDMEE.  Whether performance is an issue or you just want to parse a file by month/year, this script might save you some time.

The Challenge

I recently had the need to break apart a file.  The source provided one large text file that included 2 years of data, which was needed to populate the history of an employee metrics application.  The current process loaded files by month, and we wanted to piggyback off the existing scripts to load and process data in FDMEE and the monthly Planning data pushes to the ASO reporting cube. So, the data file needed to be broken into separate files by month and year.  The file was delimited and formatted like the following.

Entity,Year,Scenario,Period,Account,Date,Employee,Pay Code,JobNumber,Data
BU1005,FY15,Actual,Feb,Pay Amount,02/02/2015,V1398950,P105,,108.10
BU1005,FY15,Actual,Feb,Pay Amount,02/03/2015,V1398950,P105,,108.92

The goal was to have a file for every unique month and year combination that included only the lines of the relevant time periods.  The header of the file also had to exist in each of the smaller files.  Since we were working on a Windows machine, we used PowerShell to script the solution.

Powershell Script Directions

The script is pretty simple to use and understand.  Update the script as follows.

  1. Create a new text file with a ps1 extension and paste the following into that file.
  2. Update the srcFile variable to point to the file to be parsed.
  3. Update the startYear to the first year in the file to be extracted.
  4. Update the currentYear variable to the last year in the file to be extracted.
  5. Update the ProcessName to a meaningful word or phrase that will be used to create the file name.
  6. Save the file and execute it like any other PowerShell script.

This will produce 12 files for each year, each containing the header line and the data for the month and year in the file name.

Disclaimer

I welcome feedback on improving performance and will give credit to anybody that can improve on this.  I am NOT an expert in PowerShell and I am sure there are faster ways to accomplish this.  This created 12 files (1 year / 12 months) from a file that includes 7.8 million records and completed in 24 minutes.  So, this is pretty reasonable for one-off requests, but might need attention if it was a repeatable need.

This was developed using PowerShell 5, and some functions do not work in earlier versions of the software.

Powershell Script

#######################################################################
# Set the file to parse
# 
# Set the start year and end year
# 
# Change the counter if you want the files produced to start at
# something other than 1
#######################################################################
# Write a status to the screen to monitor progress
write-host "Processing started at $($(Get-Date).ToShortTimeString())"

# Update to point to the source file
$srcFile = "C:\Oracle\GCA\Data\Files\2015 Time Data\Time_DataPayAmount2015.csv" 

# Set to the first year you want to process
$startYear = 2015 
# Set to the last year you want to process
$currentYear = 2016 

# Used in the naming, is the starting number in name and increments by 1
$counter = 1 

# Get the first line (the header line) of the file 
$Header =  Get-Content $srcFile -First 1 

# Set the process name used in the file name 
$ProcessName = "Test Process" 

# Loop through each year in the range 
ForEach ($Years in $startYear..$currentYear )
 {
   # Loop through each month of the year
   ForEach ($months in 1..12 )
   {
     # Get the 3 month abbreviation of the month being processed
     $ShortMonth = (Get-Culture).DateTimeFormat.GetAbbreviatedMonthName($months)

     # Format year to FYxx (This is typically required on a Planning application)
     $FormattedYear = "FY" + 
     $Years.ToString().substring($Years.ToString().length - 2, 2)

     # Set the file name to a number starting with 1, the Month, and the year
     # Example: 01_ProcessName_Jan_2015.txt
     $FileOut = "{0:00}" -f $counter++ + "_" + $ProcessName + "_" + 
     $ShortMonth + "_" + $Years + '.txt'

     # Write out the header to the newly created file
     $Header | out-file -filepath $FileOut -Encoding utf8  

     # Write out all the lines that match the month and year. The pattern 
     # includes a ".*" which is the equivalent of an AND conjunction, so 
     # the line has to include the processing year AND processing month 
     # for it to be included.
     select-string $srcFile -pattern "${FormattedYear}.*($($ShortMonth))" | 
     foreach {$_.Line} | out-file -filepath $FileOut -Encoding utf8 -Append

     # Write a status to the screen - this is not required but provides a level
     # of the current progress by communicating the Month/Year completed and the
     # time it completed  
     write-host $fileout "Completed at $($(Get-Date).ToShortTimeString())"
   }
 }

Conclusion

Hopefully this will benefit the community.  As I create more scripts like this, I plan to share them.




Automating SQLCMD Statements

Recently, I wanted to automate the process of loading monthly csv data files into a SQL table. I spent some time researching the syntax of SQLCMD as well as reading multiple different posts on the subject. After some trial and error, I was able to get the automation functioning properly. Here’s a quick summary.

SQL Statement

Here’s the syntax for the SQL query I was using to upload the monthly files.

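A query along these lines, for example a BULK INSERT of a comma-delimited file with a header row (an assumption; the original query may have used a different approach), looks something like this:

USE [DATABASE NAME];

BULK INSERT [TABLE]
FROM 'DATA_FILE_LOCATION.csv'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    FIRSTROW = 2
);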

DATABASE NAME, TABLE, & DATA_FILE_LOCATION.csv are query specific. For my instance I used the following:

  • DATABASE NAME: HYP_TEST
  • TABLE: HYP_STG_AP_DATA
  • DATA_FILE_LOCATION.csv: 'G:\Data\AP\AP Aging FY14\AP Aging Jan FY14.csv'

Open Notepad and paste the query into the blank file. Next, save this file as a Microsoft SQL Server Query (.sql) file.

SQLCMD Batch File

To automate this process, a batch file will need to be created that calls the L_AP.sql file. Here’s the syntax to be used in the batch file:

SQLCMD -S server -U login_id -P password -i input_file -o output_file

For more detailed information on the SQLCMD syntax, see https://msdn.microsoft.com/en-us/library/ms162773.aspx

Once you have tested SQLCMD in the command prompt to verify that it runs, paste it into notepad and save the file as a Windows Batch (.bat) file:
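For example, the contents of the batch file might look like this (the server name and login are placeholders; the .sql and output paths follow the locations used above):

SQLCMD -S SQLSERVER01 -U hyp_user -P MyPassword -i "G:\Data\AP\L_AP.sql" -o "G:\Data\AP\Logs\L_AP_Output.txt"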

I also created the ‘Logs’ folder, which is where SQLCMD will write the output file (as specified after -o in the SQLCMD command line). The output file will be created as a .txt file.

At this point, the process should be good to go and you can replicate this setup for other SQL queries using sqlcmd.

As a result of this exercise, a year’s worth of data can be loaded all at once rather than having to upload the monthly data files one at a time. This saved a good amount of time that had previously been spent manually uploading monthly files during the incremental process of data validation.




Essbase (ASO): Clearing Data Using CrossJoin

Having been working on an ASO project for the last couple of months, I have learned a lot about Essbase and its related software. One of the things that gave me trouble at first was the syntax of CrossJoin in ASO’s MDX language. More specifically, I was having trouble trying to nest multiple CrossJoins together when I was trying to clear out a certain portion of data that included more than two dimensions. A CrossJoin is simple; it returns the cross-product of two sets from two different dimensions. But what if I want to return the cross-product of four different dimensions? That one proved to be a little trickier.

Before tackling a CrossJoin of four dimensions, let’s look at the basic syntax of the function. Oracle’s documentation has a simple CrossJoin example that uses members from the Year & Market dimensions in the Sample Basic outline.
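A simple example along those lines (the specific members here are illustrative; [Jan] and [Feb] come from the Year dimension, [East] and [West] from Market):

  • CrossJoin({[Jan],[Feb]},{[East],[West]})

This returns the four tuples ([Jan],[East]), ([Jan],[West]), ([Feb],[East]), and ([Feb],[West]).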

Notice that all 4 possible combinations of the members are returned by the CrossJoin. This is straightforward enough. The part I had the most trouble with occurred when I started trying to include multiple CrossJoins in the statements. All of the statements & brackets seemed to jumble together. I figured it would be most helpful to provide examples of what I ended up using so that you can see the syntax:

Note: Since these CrossJoins were used to delete data, all of the members being selected are Level 0. I’m also using the ASOsamp Sample ASO outline to demonstrate the functionality.

Simple CrossJoin:

  • CrossJoin({[Original Price]},{[Curr Year]})

Double CrossJoin:

  • CrossJoin(CrossJoin({[Original Price]},{[Curr Year]}),{[Jan]})

Triple CrossJoin:

  • CrossJoin(CrossJoin(CrossJoin({[Original Price]},{[Curr Year]}),{[Jan]}),{[Sale]})

The above demonstrates sample syntax that uses CrossJoin to bring together four different dimensions. However, it is only grabbing one member from each dimension. To grab multiple members from a dimension, there are family functions that can be used (.Parent, .Children, Ancestor(), Cousin(), .FirstChild, .LastChild, .FirstSibling, and .LastSibling). For this example, I’m using the .Children function.

Triple CrossJoin Using Children Function:

  • CrossJoin(CrossJoin(CrossJoin({[Measures].Children},{[Curr Year]}),{[Jan]}),{[Sale]})

Notice how [Original Price] was replaced with [Measures].Children. Rather than returning one member, this will return every child of the “Measures” dimension.

Utilize these family functions to increase the CrossJoin function’s returned set.

One more thing to note: substitution variables can be included within a CrossJoin. For example, let’s say I created the subvar &CurrMonth. I can replace Jan in the code, thus making the month a variable.

Triple CrossJoin Using Substitution Variable:

  • CrossJoin(CrossJoin(CrossJoin({[Measures].Children},{[Curr Year]}),{[&CurrMonth]}),{[Sale]})

MaxL Syntax to Clear Data from a Region:

Next, let’s cover the MaxL syntax to clear data in the region specified by the CrossJoin. From the Essbase technical reference 11.1.2.3 pg. 894:

The entire CUBE-AREA portion must be enclosed in single quotes; the CUBE-AREA defines what portion of the database is going to be cleared.
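The general form of the statement, with the CrossJoin supplying the CUBE-AREA, looks like this (the physical keyword is optional):

alter database APP-NAME.DB-NAME clear data in region 'CUBE-AREA' [physical];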

There are some more detailed notes on the syntax in the Essbase technical reference 11.1.2.3, pg. 897.

Based on the information laid out above, the line that I used in my MaxL statement turned out to look like this:

  • alter database ASOsamp.Sample clear data in region 'CrossJoin(CrossJoin(CrossJoin({[Measures].children},{[Curr Year]}),{[&CurrMonth]}),{[Sale]})' physical;

Prior to reloading data, I use this command line to clear the database in this region. This is to make sure that there is no stray data leftover in the cube that might cause discrepancies later on.

Feel free to leave any tips/advice on a more efficient method of utilizing CrossJoin!




Password Encryption – Business Rule Batch Files

I recently learned the importance of encrypted passwords in batch files. Without a password file, the scripts will still run, but the user is prompted to input a password in the command prompt after initialization. Encrypted passwords allow for the automation of these scripts. Shout out to Sumit Deo for his patience in guiding me through the initial process and helping me with my batch scripting skills along the way.

From Oracle’s documentation, a password file in business rule batch files is optional.

However, when executing the batch, the user will be prompted to input the password:

In order to automate this batch file, the password file becomes a necessity.  We could put the password in a .txt file and reference that, but for security purposes it makes more sense to encrypt the password.  To create an encrypted password, use the “PasswordEncryption.cmd” Windows command file, which is located at D:\Oracle\Middleware\user_projects\foundation1\Planning\planning1

In this folder, create a new folder called Password and save a blank notepad file titled password.txt. Next, open up a command prompt and enter the following command (the first half calls the password encryption file and the second half is the path and file name where the encrypted password will be saved):

D:\Oracle\Middleware\user_projects\foundation1\Planning\planning1\PasswordEncryption.cmd D:\Oracle\Middleware\user_projects\foundation1\Planning\planning1\Password\password.txt

Upon hitting enter, the screen will prompt for a password to be encrypted. Type in the password (nothing will appear on the screen) and hit enter again.

The screen will display as follows, noting that the password has been encrypted to the desired location & file:

Check the password file for the encrypted password:

The next step is to include the encrypted password into the business rule batch file, so that the rule will run automatically when called from the script. The syntax for the command is as follows:

CalcMgrCmdLineLauncher.cmd [-f:passwordFile] /A:appname /U:username /D:database [/R:business rule name | /S:business ruleset name] /F:runtime prompts file [/validate]

We will be focusing on the -f:passwordFile portion of the command. To specify where the encrypted password is stored, -f:passwordFile becomes:
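-f:%CALCLAUNCHER%\Password\password.txt

(This assumes the encrypted password file was saved to the Password folder created above.)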

The %CALCLAUNCHER% variable is equal to D:\Oracle\Middleware\user_projects\foundation1\Planning\planning1

Now that the encrypted password has been inserted into the command line of the business rule batch file, the batch will run to completion without stopping to ask the user to input a password. This comes in very handy when attempting to automate multiple tasks in one batch script.
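Putting it all together, a call in the batch file might look something like this (the application, user, database, rule, and runtime prompt file names are placeholders for your own environment):

%CALCLAUNCHER%\CalcMgrCmdLineLauncher.cmd -f:%CALCLAUNCHER%\Password\password.txt /A:PlanApp /U:admin /D:Plan1 /R:AggregateAll /F:runtimeprompts.txt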