
Efficiently Handling Block Creation

You know what they say? Don’t believe everything you read on the internet. Creating blocks with DATACOPY is only the second slowest way you can do it. Do you want to know how to write a calculation that uses a faster technique, one that creates the blocks while it calculates the members in those blocks?



Introduction

I see hundreds of examples of how to deal with block creation. It is almost always a problem when I am asked to get involved in projects that have performance issues. You will never hear me say “best practice.” There are reasons we all do things that go against what Oracle says is a best practice. We use larger block sizes to improve reporting performance even though we know it slows down calcs; if the calc runs at night, nobody cares whether it takes 10 minutes or 60. We go against the grain on dimension order when data is sparse and calculation performance improves by ignoring the “ideal” order.

Block creation can be a bear if you don’t understand what blocks are or how they get created. There are a bunch of ways to do it and many options that can be used. I want to explain what a block is and show how to handle creation the most efficient way, because 9 out of 10 people don’t handle it efficiently. I don’t know why, because it is easy to figure out if you think about it and understand some basic concepts, and even easier to implement.

If you have a better way to do it, share your techniques. Hopefully I don’t say anything incorrect, and I shed some light on block creation for the majority of the people reading this.

What Is Block Creation

Essbase stores data in blocks. A block holds every combination of the dense dimensions, and there is a block for every stored combination of sparse members. If you don’t know how blocks work, I highly recommend you read Sparse, Dense, and Blocks for Dummies. If a block doesn’t exist, it will only get created if you tell Essbase to create it. I will get into how to do this shortly.
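As a quick, made-up illustration: if Entity and Product are your only sparse dimensions, with 100 and 1,000 stored members respectively, there are 100 x 1,000 = 100,000 possible blocks. A block only physically exists for the Entity/Product combinations that actually have data, and each block that does exist holds every dense intersection (every Account and Period, for example) for that combination.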

When blocks don’t exist, a lot of calculations appear to work intermittently. If you have this issue, it is likely because the calc works where blocks already exist and doesn’t where they don’t. You can easily tell by loading a number to one combination of the dense members and running the calc again. Submitting a value will create the block if it doesn’t exist. If the calc works after you submit some data, it is most likely because the block wasn’t there before and now it is.

Block creation is something you will likely encounter on every project. How to deal with it is what I want to tackle.

Using DATACOPY – What Most Teach You

Most people create blocks by using DATACOPY. I just read a post on LinkedIn that sparked my interest in writing this because I find it terribly frustrating when people/companies represent themselves as experts and teach people inefficient ways to do things. I have most certainly done this in my life in an effort to help people, but this topic is so basic and polarizing for me because it is SO EASY to do it right. By right, I mean less code, faster execution, and easier to write in the first place. Here is likely what you have heard.

In this example, marketing expenses need to be calculated as 5% over actuals. The method suggested is to copy data from Actual to Budget, which duplicates every Actual block for Budget. Then you wipe out the data in Budget. Then you set the data with your calculation to make Budget 5% more than Actual. Excluding the FIX statements, it looks something like this.

DATACOPY "Actual" to "Budget";
"Budget" = #Missing;
"Budget" = "Actual" * 1.05;

To reiterate, if you run a calculation on Budget and no blocks exist, it will likely do nothing. If the calculation FIXes on Budget, and no blocks exist for Budget, it does nothing. If some of the needed blocks already exist, the calculation will appear to work on some of the data but not on the rest. To get around this, people copy all the data from Actual to Budget so all the needed blocks exist, then clear the data from Budget, then loop through Budget and run the logic. The result is this.

FIX([your fix statement], "Marketing Expenses")
  DATACOPY "Actual" TO "Budget";
ENDFIX
FIX([your fix statement], "Budget")
  "Marketing Expenses" = #Missing;
ENDFIX
FIX([your fix statement], "Budget")
  "Marketing Expenses" = "Marketing Expenses"->"Actual" * 1.05;
ENDFIX

Does this work? Absolutely. Is it efficient? Not even close. A significant driver of calculation performance is looping through the same block more than once. Sometimes it is necessary, but think of it this way: if you had 50 Excel workbooks and you had to update all 50 of them, would you open each one and edit one cell, then open them all again to update another cell? No. You would open each one, update all of its cells, close it, and never open it again. You want your calculation to perform the same way. You want it to open a block, edit everything in that block, and never open it again. I understand that is not always possible, so I am not suggesting this is a hard and fast rule.

This calculation, as written, runs through the blocks 3 times. Each FIX represents a full pass through the blocks and is very inefficient. It also creates blocks regardless of whether Marketing Expenses has data or not.

Don’t use this method!

Using CREATEBLOCKONEQ – Hopefully Nobody Teaches You

Another method is to use SET CREATEBLOCKONEQ inside your FIX statement. Using this means we don’t need DATACOPY and we don’t have to set the result to null using #Missing. It makes the calculation smaller and would look like this.

FIX([your fix statement], "Budget")
  SET CREATEBLOCKONEQ ON;
  "Marketing Expenses" = "Marketing Expenses"->"Actual" * 1.05;
  SET CREATEBLOCKONEQ OFF;
ENDFIX

This will work for you. It will create the blocks as they are needed. But it will likely take longer than the original example. Why? Without this setting, the calculation only goes through EXISTING blocks; it calculates any block that exists, skips the blocks that don’t, and you get inconsistent results.

When we set CREATEBLOCKONEQ to ON, it checks every possible block and creates the ones that need to be created. The problem is that unless your cube is freakishly dense, it will do far more work than it needs to. Think of putting your car in 1st gear and driving on the interstate for 10 miles. You will go extremely slowly and use an enormous amount of gas compared to driving in 5th gear, where you would get there faster and use very little gas. That is what CREATEBLOCKONEQ does.

Look at your cube statistics and check out the difference between the number of existing blocks and the number of possible blocks. This setting runs on all the possible blocks, which is likely millions, if not billions, of blocks that have no data.

Since the performance of this is really bad, it is rarely used. That said, if you need to run a calculation on a specific combination, like 1 block or 10 blocks, this is an easy way to minimize your effort; the additional time is not material, and the blocks have to be created anyway. This would be like putting the car in 1st gear and driving up your driveway. Not super efficient, but it only takes 5 seconds, so who cares.

An Alternate Approach Nobody Talks About

First, will this work in every situation? No. Will it work in 99.9% of them? Yep! This example assumes Scenario is a sparse dimension.

We have established that if we fix on blocks that don’t exist, nothing will happen. Is it necessary that we create the blocks first, then calculate them? Absolutely not. We can create the block by calculating it.

Blocks will get created IF the left side of the equation is a sparse member. If you must have a cross dim operation, and the left-most member of the cross dim is sparse, it will also create the block. In the original example, we wanted to grow Marketing Expenses 5% over the prior year. If we FIX on Actual, where all the blocks we need to grow the expense exist, and then set Budget to 105% of Actual, the blocks will get created. All of the blocks will get created if they don’t already exist. If there is no block for Actual, then Budget would be 0 anyway, so we don’t have to worry about whether the right blocks exist for Actual.

FIX([your fix statement], "Actual", "Marketing Expenses")
  "Budget" = "Marketing Expenses" * 1.05;
ENDFIX

If you think about this, we are running through all the Actual blocks, and then we set Budget to 5% more than Actual. Rather than FIXing on the destination and calculating the destination, we FIX on the source, where the blocks exist, and set the destination to a value. Since we have a sparse member on the left side of the equation, and we are FIXing on Actual, where all the blocks are, we will NOT be skipping anything and you should get a total Budget of 5% more than Actual.

Will this create too many blocks? Possibly. We may not have Marketing Expenses in all the existing Actual blocks. We can optimize this by adding an IF to check whether Marketing Expenses is either 0 or #Missing. Rather than checking for both, I just add 0 to it and then check for 0. 0 + #Missing is 0, and it is just a little more efficient to process and write.

FIX([your fix statement], "Actual")
  "Marketing Expenses"(
  IF("Marketing Expenses" + 0 <> 0)
    "Budget"->"Marketing Expenses" = "Marketing Expenses" * 1.05;
  ENDIF
  )
ENDFIX

To summarize, we

  • FIX on Actual
  • Only calculate Budget if Marketing Expense is not zero and not #Missing
  • Have a sparse member on the left side of the equation
  • Have a sparse member on the left most side of the cross dimensional operator

This will likely result in fewer blocks being created, as it only creates blocks where Marketing Expenses actually has data (not #Missing or 0), and it creates and calculates each needed block in the same pass. What is the benefit of this? The calculation is easier to write, it takes less time to write, and it is much more efficient to execute.

Results Of A Real-World Dataset

The calculations are plain Jane. I am not using threading or any other settings, to keep it simple and ensure I am comparing apples to apples.

The database I have has more data in the Plan scenario, so I am replicating the same logic but using different scenarios. I need to set the BlockCreationTest scenario to 5% more than the OEP_Plan scenario, rather than the prior example of setting Budget to 5% more than Actual. Different scenarios, but the exact same concept.

DATACOPY Method

The calculation to copy data to create the blocks, set the destination to #Missing, and then calculate the growth while FIXing on the destination looks like this.

FIX("FY18", "OEP_Working", "USD", "c01",
    @Relative("Source", 0),
    @Relative("Total_Department", 0),
    @Relative("ComputerTech", 0))
  DATACOPY "OEP_Plan" TO "BlockCreationTest";
ENDFIX
FIX("FY18", "OEP_Working", "USD", "c01",
    @Relative("Source", 0),
    @Relative("Total_Department", 0),
    @Relative("ComputerTech", 0),
    "BlockCreationTest")
  "Regular_Cases" = #Missing;
ENDFIX
FIX("FY18", "OEP_Working", "USD", "c01",
    @Relative("Source", 0),
    @Relative("Total_Department", 0),
    @Relative("ComputerTech", 0),
    "BlockCreationTest")
  "Regular_Cases" = "Regular_Cases"->"OEP_Plan" * 1.05;
ENDFIX

The data copy took 41 seconds. Updating the destination to #Missing took 5 seconds. The calculation took 42 seconds. The total time was 88 seconds and it created 66,724 blocks. This went through the blocks 3 times.

FIX On Source And Calculate Destination Method

This time, I just FIXed on the source, and used the blocks on the source to create and calculate the destination. The calculation looks like this.

FIX("FY18", "OEP_Working", "USD", "c01",
    @Relative("Source", 0),
    @Relative("Total_Department", 0),
    @Relative("ComputerTech", 0),
    "OEP_Plan")
  "Regular_Cases"(
  IF( "Regular_Cases" + 0 <> 0)
    "BlockCreationTest"->"Regular_Cases" = "Regular_Cases" * 1.05;
  ENDIF
  ) 
ENDFIX

This created slightly fewer blocks, 63,739. The difference means that Regular Cases didn’t have a value in all the existing source blocks. The calculation took 6 seconds. This method created the necessary blocks and calculated the correct values in 6 seconds compared to 88 seconds using the other method, close to 15 times faster. It went through the blocks one time.

Optimized DATACOPY

Just to be fair, I optimized the calculations for the DATACOPY methodology. The example provided in the article that prompted me to write this was inefficient, and I wanted to squash any comments suggesting that DATACOPY is just as fast when it is written efficiently.

FIX("FY18", "OEP_Working", "USD", "c01",
    @Relative("Source", 0),
    @Relative("Total_Department", 0),
    @Relative("ComputerTech", 0),
    "Regular_Cases")
  DATACOPY "OEP_Plan" TO "BlockCreationTest";
ENDFIX
FIX("FY18", "OEP_Working", "USD", "c01",
    @Relative("Source", 0),
    @Relative("Total_Department", 0),
    @Relative("ComputerTech", 0),
    "BlockCreationTest")
  "Regular_Cases" = #Missing;
ENDFIX
FIX("FY18", "OEP_Working", "USD", "c01",
    @Relative("Source", 0),
    @Relative("Total_Department", 0),
    @Relative("ComputerTech", 0),
    "BlockCreationTest")
  "Regular_Cases"(
  IF( "Regular_Cases"->"OEP_Plan" + 0 <> 0)
    "Regular_Cases" = "Regular_Cases"->"OEP_Plan" * 1.05;
  ENDIF
) 
ENDFIX

The results were better. The data copy took 6 seconds. Updating the destination to #Missing stayed the same and completed in 5 seconds. The calculation of BlockCreationTest took 7 seconds.

That is still 18 seconds, three times slower than FIXing on the source and calculating the destination. I don’t know why you would ever need the step that sets everything to #Missing, because it gets overwritten by the third step anyway. Even if that step is removed, FIXing on the source is still more than twice as fast.

CREATEBLOCKONEQ ON Method

Before I get into the results, take a look at these statistics. The FIX statement runs on

  • 13 Sources
  • 23 Departments
  • 2,382 Companies
  • 25,001 Products

13 x 23 x 2,382 x 25,001 is 17,806,162,218, if my math is correct. That is the number of possible blocks that can exist. If you remember, the calculations above created 66,724 and 63,739 blocks, for the version without the IF and with the IF, respectively. If we extrapolate the results of the calculation that took 6 seconds and iterated through 66,724 blocks (we know this because it created a block for every source block that existed) to running through 17 billion blocks, it would take an estimated 500 hours! Remember, turning this setting on goes through every possible block, creates the ones that need to be created, and skips the ones that don’t. The calculation is as follows.

FIX("FY18", "Regular_Cases", "USD", "c01",
    @Relative("Source", 0),
    @Relative("Total_Department", 0),
    @Relative("ComputerTech", 0),
    "BlockCreationTest")
  SET CREATEBLOCKONEQ ON;
  "OEP_Working" = "OEP_Plan" * 1.05;
  SET CREATEBLOCKONEQ OFF;
ENDFIX

The calc had to be adjusted slightly because, even with this setting on, it only creates the block if a sparse member is used on the left side of the equation. I moved OEP_Working from the FIX to the left side of the equation and moved BlockCreationTest to the FIX. I did this in good faith: there is ONLY data in Working, but there is data in multiple Scenarios, so this should run faster than if I had left it the way it was in the other calculations.

I stopped this calculation after 5 hours, and it had only created 223 blocks at that point. If I extrapolate that out, it would take about 1,500 hours to finish. Even though I only need to create 66K blocks, I have to go through 17B of them to figure out which ones need to be created, versus FIXing on the source and only running through 66K blocks.
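For the math minded, here is how those estimates work out. 17,806,162,218 possible blocks divided by the 66,724 blocks the 6 second calculation touched is roughly 267,000, and 267,000 times 6 seconds is about 1.6 million seconds, or roughly 445 hours, which I rounded up to 500. The 1,500 hour estimate is the same idea: 66,724 blocks needed divided by the 223 created in 5 hours is about 300, and 300 times 5 hours is about 1,500 hours.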

To The Doubters

I get feedback all the time that this method isn’t always possible. There are times when it isn’t, but in 30 years of doing this, I can think of maybe 5 times I had to work around it. If you want to allocate a number based on history, you FIX on the history and set the destination equal to the history times a cross dim to your rate. If you need to allocate based on percentages that were entered, you FIX on where the percentages are entered and set the destination to the correct value, since the blocks are created when the rate is entered. This doesn’t just apply to scenario-to-scenario block creation. You may enter a rate at your entity for eliminations, but not at the product level. You are still only going to allocate down to the products that have budget, or history, so you can still FIX on where the products have data and use the rate at no product to calculate your numbers.

If you are allocating data or have a driver that requires blocks to be created, there has to be a driver somewhere that exists at the level you want to allocate FROM, and if you have that, you can use this method.
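To make the percentage example concrete, here is a rough sketch. The member names (Allocation_Pct, No_Product, Total_Product) are made up, so treat this as a pattern rather than something to copy, but the idea is identical to the Budget example above: FIX on the blocks where the driver was entered and write the destination into those same blocks.

FIX([your fix statement], "Budget", @Relative("Total_Product", 0))
  "Marketing Expenses"(
  /* the product level blocks already exist because the percentages were entered there */
  IF("Allocation_Pct" + 0 <> 0)
    "Marketing Expenses" = "Marketing Expenses"->"No_Product" * "Allocation_Pct";
  ENDIF
  )
ENDFIX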

I Accept Your Challenge

If you have a situation where you are having challenges with this logic and think you have to use DATACOPY, challenge me to come up with a way to do it. I don’t want you to EVER have to use DATACOPY!

A Cautionary Tale

With all this said, is there a drawback? Yes and no. This will always create the needed blocks. Keep in mind that when you put a sparse member on the left-hand side and you don’t have your FIX isolated to only what you need to calculate, you can potentially create a lot more blocks than you want. You will NEVER create a block at every possible intersection with this method, though, because when you FIX on something, it ONLY calculates where blocks exist.

If you ran the following calculation on an empty database, nothing would happen because there are no blocks for it to execute the calculation on. To prove this, I cleared all the blocks in the BlockCreationTest scenario and ran the following calculation.

FIX("FY18", "OEP_Working", "USD", "c01",
    @Relative("Source", 0),
    @Relative("Total_Department", 0),
    @Relative("ComputerTech", 0),
    "BlockCreationTest")
  "Regular_Cases" = 1;
ENDFIX

This ran in under a second and created no blocks because BlockCreationTest has no blocks.

What Hasn’t Been Discussed

There are two other ways to create blocks.

CREATEBLOCK

First is @CREATEBLOCK. You can pass a list of members to this to create blocks. I have used this in some situations where I needed to walk balances to the next year and wanted to make sure the next year existed. This is just one example. It did add time to the calculation, but it wasn’t significant for small datasets. A couple of things to keep in mind.

  1. You have to be extremely careful with this because you can blow up your database if used incorrectly.
  2. If you use this within a fix, you only have to pass the member that is different than the block you are on.
  3. In most situations, this isn’t necessary because of the preferred method above.
  4. It will, in most situations, still require an extra pass of the blocks and negatively impact the calculation speed.

Here is an example.

FIX("FY18", "OEP_Working", "USD", "c01",
    @Relative("Source", 0),
    @Relative("Total_Department", 0),
    @Relative("ComputerTech", 0),
    "OEP_Plan")
  "Regular_Cases"(
  IF( "Regular_Cases" + 0 <> 0)
    @CREATEBLOCK("BlockCreationTest");
  ENDIF
  ) 
ENDFIX

Comparing this to the other ways to create blocks, this took 40 seconds and created the correct blocks. If you remove the IF, however, it will create every possible block combination, which is why you have to be extremely careful with this method.

Groovy

The other option is to use Groovy to create the blocks. It would be safer, but probably slower than the @CREATEBLOCK method, and more complicated to write, so I didn’t even bother to test it.
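For the curious, here is a rough, untested sketch of what that might look like in an EPM Cloud Groovy rule using the DataGridBuilder API: submitting a stored value to the target intersection forces the block to exist. The cube name and the POV members are assumptions based on the earlier examples, so treat it as a starting point only.

// Untested sketch - the cube name and member names are assumptions, not a tested solution
Cube cube = operation.application.getCube("Plan1")
DataGridBuilder builder = cube.dataGridBuilder("MM/DD/YYYY")
// one member from every POV dimension of the block to be created
builder.addPov('FY18', 'OEP_Working', 'USD', 'BlockCreationTest', 'c01', 'No_Source', 'No_Department', 'No_Product')
builder.addColumn('Jan')
// submitting a stored value (even 0) creates the block if it does not exist
builder.addRow(['Regular_Cases'], [0])
DataGridBuilder.Status status = new DataGridBuilder.Status()
DataGrid grid = builder.build(status)
cube.saveGrid(grid)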

My Hope

I hope you read this, understand it, and now have a bulletproof way to deal with block creation, one more efficient than you may have ever thought of or been taught. I feel so strongly about this that I am more than happy to have a quick conversation if you are finding it difficult to use.




Upgrade Or Downgrade To Or From Hybrid In The Cloud

There are benefits to moving to Hybrid, but there are also some challenges.  The content of this post is not about the pros and cons, but the fact that you can upgrade your environment to use it.  If you find it isn’t for you, you can “downgrade” back to BSO.  That flexibility gives everybody the ability to try it.



Recreate Introduction

EPMAutomate comes with a function that allows you to restore an environment to a clean slate.  I don’t think this is new to anybody that has used EPMAutomate or EPM Cloud Planning.  What might be a surprise is that it does more than just reset an environment so you can start over.  It can also:

  1. Change the type of Essbase database to Hybrid or a standard BSO.
  2. Temporarily convert a Planning, Enterprise Planning, Tax Reporting, or Financial Consolidation and Close environment to an Account Reconciliation, Oracle Enterprise Data Management Cloud, or Profitability and Cost Management environment.

Using Recreate

The usage of the Recreate command, with all options, is as follows.

 epmautomate recreate [-f] [removeAll=true|false] [EssbaseChange=Upgrade|Downgrade] [TempServiceType=Service_type]
  • -f forces the re-create process to start without user confirmation. If you do not use the -f option, EPM Automate prompts you to confirm your action. Be careful using this option; if you have had a long day and aren’t focused, it can make the day a whole lot worse!
  • removeAll removes all of the existing snapshots, as well as the content of the inbox and outbox.  The default is false, meaning the snapshots and the content of the inbox and outbox are retained.
  • EssbaseChange upgrades or downgrades the current Essbase version in legacy Oracle Financial Consolidation and Close Cloud, Oracle Enterprise Planning and Budgeting Cloud or Planning and Budgeting Cloud Plus 1 environments.
  • TempServiceType temporarily converts an environment to a different service environment.

Changing Your Essbase Version

To change your environment to BSO from Hybrid:

epmautomate recreate EssbaseChange=downgrade

To change your environment to Hybrid from BSO:

epmautomate recreate EssbaseChange=upgrade

Trying A Different Service

There are some details that must be understood to use this option, and they depend on the version of the cloud service you have.  For subscriptions other than EPM Standard Cloud Service and EPM Enterprise Cloud Service, meaning PBCS and EPBCS, you can use this option to convert, temporarily, to

  • Account Reconciliation
  • Oracle Enterprise Data Management Cloud
  • Profitability and Cost Management environment

To use this option to convert your environment to something it wasn’t originally intended for:

epmautomate recreate -f removeAll=true TempServiceType=ARCS

To change your environment back to its original service:

epmautomate recreate

For EPM Standard Cloud Service and EPM Enterprise Cloud Service subscriptions, you can use this option to convert to any supported EPM Cloud service.  EPM Enterprise Cloud Service subscriptions use a common EPM Cloud platform. Initially, you can deploy any supported EPM Cloud business process. 

To switch from a deployed business process to another, you must re-create the environment to delete the current deployment and to bring it back to the original EPM Cloud platform. You then re-create it again as the new service type.

For example, if you created an Account Reconciliation business process but now want to create an Oracle Enterprise Data Management Cloud environment, you must run the re-create command twice.

First, reset the service.

epmautomate recreate -f removeAll=true

Second, change the service type.

epmautomate recreate -f TempServiceType=EDMCS

The acceptable service types, currently, are

  • ARCS (Account Reconciliation)
  • EDMCS (Oracle Enterprise Data Management Cloud)
  • EPRCS (Narrative Reporting)
  • PCMCS (Profitability and Cost Management)

That’s A Wrap

It is great that Oracle allows us to do these things. We have a ton of flexibility, not normally afforded to us in the cloud, to test and use different core database types.  It also allows those using the old SKU to try the new services, or business processes, to see if they might be something worth purchasing.

If you want to give Hybrid a try, use your test environment and give it a spin.  If you want to get exposed to one of the other business processes, you now have the ability to see it without jumping through hoops.




Working With Planning Formula Expressions

Most of us know that there is a button in the calc rule editor that allows us to select a smart list and the smart list entry.  It will add something like [[smartlist name.smartlist entry]].  If this is new to you, what it does is reference the smart list entry and replace it with the numeric value that is stored in Essbase.  The beauty of this is that it is dynamic, so if the smart list is changed in any way, you don’t have to go into your rules and change the index values for the smart list entries to match.  Guess what, there are more!



Well, don’t I feel like the F*@$& idiot, to pull a quote from A Few Good Men.

What Is A Planning Formula Expression

As described above, a formula expression allows you to dynamically get valuable information about artifact properties in a Planning application.  The following formula expressions currently exist.

  • SmartLists
  • Dimensions
  • Planning User Variables
  • Periods
  • Scenarios
  • Cross-References
  • Workforce Cube Year to Date
  • Get ID for String

SmartLists

I already discussed the Smart List, but here is an example if this is new to you.  The calculation manager syntax is [[SLName.entryname]].

FIX (Mar, Actual, Working, FY15, P_000, "111") 
   "Product Channel" = [[Channel.Retail]] ;
ENDFIX

Which would return something like this.

FIX (Mar, Actual, Working, FY15, P_000, "111") 
   "Product Channel" = 2 ;
ENDFIX

Dimensions

The dimension expressions are not all that useful unless you are building calculations that might go across applications that have different names for the 6 required dimensions, plus currency.  Using the following dimension tags, the customized name will be returned when they are added to the dimension expression. The syntax used for this function is [[Dimension("DIM_NAME_ENTITY")]].

  • DIM_NAME_PERIOD
  • DIM_NAME_YEAR
  • DIM_NAME_ACCOUNT
  • DIM_NAME_ENTITY
  • DIM_NAME_SCENARIO
  • DIM_NAME_VERSION
  • DIM_NAME_CURRENCY

An example would look like this. This runs a CALC DIM on whatever your entity dimension is named.

CALC DIM([[Dimension("DIM_NAME_ENTITY")]]);

In this application, Entity is named Entity, so the above script returns:

CALC DIM ("Entity");

If the entity dimension was named Cost Center, it would return:

CALC DIM ("Cost Center");

Planning User Variables

Planning user variables return the user variable’s member.  This can be pretty useful if you have variables that are used to do things like FIX on a user’s area of the hierarchy.  These can be gathered through run time prompts if they exist in the POV, but that isn’t always the case.  You might use one to show the products under a user variable in the rows, for example, in which case it can’t be passed as an RTP without Groovy. The calculation manager syntax is [[PlanningFunctions.getUserVarValue("xyz")]].

An example where the user variable is used to run a calculation might look like this.

FIX (Feb, Actual, Working, E_000, @RELATIVE([[PlanningFunctions.getUserVarValue("Product View")]],0) )
   Revenue = Units * Cost;
ENDFIX

Period Functions

Period(periodName) returns the specified period. The calculation manager syntax is [[Period("FIRST_QTR_PERIOD")]], and the options for this function are:

  • FIRST_QTR_PERIOD
  • SECOND_QTR_PERIOD
  • THIRD_QTR_PERIOD
  • FOURTH_QTR_PERIOD
  • FIRST_PERIOD
  • LAST_PERIOD

This example:

FIX ( Mar, Actual, Working, P_000, "6100", FY15 )
   "120" =[[Period("FIRST_QTR_PERIOD")]];  
ENDFIX

would return a script like this

FIX (Mar, Actual, Working, P_000, "6100", FY15) 
   "120" = "Mar";
ENDFIX

NumberOfPeriodsInYear returns the number of periods in the year, and NumberOfYears returns the number of years in the application. The calculation manager syntax for these is

  • [[NumberOfPeriodsInYear]]
  • [[NumberOfYears]]

The following example

FIX (Mar, Actual, Working, P_000, "6100", FY15)
   "120"=[[NumberOfPeriodsInYear]];
   "120"=[[NumberOfYears]]; 
ENDFIX

would produce this.

FIX (Mar, Actual, Working, P_000, "6100", FY15)
   "120"=12;
   "120"=9;
ENDFIX

Scenarios

This one is my favorite.  I have been using Groovy to get these, not knowing they existed.  These allow the reduction of IF statements and improve performance.  We can get the open periods of a scenario.  If the range goes across several years, then my Groovy solution probably comes back into play, but for ranges within one year, or even two (which can be handled, as shown below), this offers great functionality.  The options available are

  • Start Year
  • End Year
  • Start Month
  • End Month

The calculation manager syntax is as follows.

  • [[getStartYear("ScenarioName")]]
  • [[getEndYear("ScenarioName")]]
  • [[getStartMonth("ScenarioName")]]
  • [[getEndMonth("ScenarioName")]]

A use case would look something like this. Assume {rtpScenario} is a run-time prompt variable of type member with a default value of "Actual":

FIX({rtpScenario}, [[getStartYear({rtpScenario})]]:[[getEndYear({rtpScenario})]],
   [[getStartMonth({rtpScenario})]]:[[getEndMonth({rtpScenario})]])
      FIX ( Working, P_000, "111")
        "5800" = 5500;
      ENDFIX
ENDFIX  

This would build out the following calculation

FIX ("Actual", "FY10" : "FY18", "Jan" : "Dec") 
    FIX (Working, P_000, "111") 
      "5800" = 5500;
    ENDFIX 
ENDFIX

If your open range consisted of two years, you could do something like this

FIX({rtpScenario}, [[getStartYear({rtpScenario})]], [[getStartMonth({rtpScenario})]]:"Dec")
      FIX ( Working, P_000, "111")
         "5800" = 5500;
      ENDFIX
ENDFIX
FIX({rtpScenario}, [[getEndYear({rtpScenario})]], "Jan":[[getEndMonth({rtpScenario})]])
      FIX ( Working, P_000, "111")
        "5800" = 5500;
      ENDFIX
ENDFIX

Cross-References

This function comes in a few flavors but does something pretty awesome. How it works might change your naming convention a little, or at least make it more consistent. What it does is generate a cross dimensional reference to your default members, like No Product. The syntax is CrossRef(accountName, prefix, true), but the last two parameters are optional. If you use CrossRef("Revenue"), it would produce the following, assuming your 6 required dimensions and a product dimension.

"BegBalance"->"No Scenario"->"No Version"->"No Entity"->"No Product"->"Revneue";

I can change the prefix by adding the second parameter. I don’t like having spaces in my member names, so I would do the above with CrossRef("Revenue","No_"), which would produce

"BegBalance"->"No_Scenario"->"No_Version"->"No_Entity"->"No_Product"->"Revneue";

If I change my syntax to CrossRef("Revenue","No_",true), I get a cross dim operator for all dimensions except Period (it uses BegBalance) and Currency, but it does include Year.

"BegBalance"->"No_Year"->"No_Scenario"->"No_Version"->"No_Entity"->"No_Product"->"Revneue"; 

Used in a FIX statement, the following example

FIX (Aug, Actual, Working, FY15, P_000, "112")
   "111" = [[CrossRef("5800", "No_", true)]];
ENDFIX

would produce the following script.

FIX (Aug, Actual, Working, FY15, P_000, "112") 
   "111" = "BegBalance"->"No_Year"->"No_Scenario"->"No_Version"->"No_Entity"->"No_Product"->"5800";
ENDFIX

Workforce Cube Year to Date

If you use Workforce, you have probably noticed the members it generates to get the month index for both the calendar and fiscal periods. These can be used with this function to build a calendar-to-date value. The syntax is [[CYTD(memberName)]]. If you have renamed "Cal TP-Index" and "Fiscal TPIndex", then you have to specifically name the members in two additional parameters, and the syntax is [[CYTD(memberName, calTpIndexName, fiscalTPIndexName)]]. This method is really easy to use and looks like this.

Fix (NOV, Actual, Working, FY15, P_000, "112")
   "5800" = [[CYTD("6100")]];
ENDFIX

If the default names are changed, it would look a little different.

Fix (Dec, Actual, Working, FY15, P_000, "112")
   "5800" = [[CYTD("6100", "Cal TP-Index", "Fiscal TPIndex")]];
ENDFIX

Get ID for String

This doesn’t solve all the problems around Smart Lists and text accounts, but it is a step in the right direction. If you don’t know, both of these are held in the Planning repository, and Essbase ONLY stores numbers. Look at the number as the index to the value you see in Planning; the repository has the map from index to value. In an Essbase calculation, you can’t set a text account to a text value. Well, actually, you can. The syntax for this function, which assigns a text value, is [[PlanningFunctions.getIdForString("text")]]. This allows you to set the value of a text account to a string.

Say you have an account named “acct1 text” in Planning that is of type text. If you want to copy your values from FY16 Dec to FY17 Mar and change the text account to “Not Budgeted,” it would look like this.

FIX (Actual, Working, P_000, "210")
   DATACOPY FY16->Dec TO FY17->Mar;
   Mar(
      "acct1 text"->FY17 = [[PlanningFunctions.getIdForString("Not Budgeted")]];
   )
ENDFIX

That’s A Wrap

One last thing. If you use any of these in a member formula, for some reason you have to remove a bracket on each side, so instead of two you just need one. I will say I have not tested all of these, but the ones I have tested/used do follow this pattern. Hopefully Oracle keeps expanding these. Although they aren’t as helpful as they were prior to Groovy, they are simpler to use than implementing a Groovy solution for some of these needs. For you lifers, it is things like this that a newb tells you. Don’t ever think you can’t learn from somebody that “knows nothing.”




Getting The Most From Your Oracle Support

Nobody will argue that Oracle Support is a world-class experience.  Like it or not, the service is there to support thousands of customers, and individuals can get lost in the process.  If you navigate the process correctly, you can get results.

What To Do First

Your first step should be to troubleshoot the issue.  Do everything you can before opening the ticket.  The more you provide, the fewer “did you plug in the computer” type questions you will get that delay you getting help.  Be clear and concise.  The first few levels of support are likely less experienced than you and will only be able to provide novice-level help most of the time.

Next, make sure the content of your support ticket is complete.  Don’t leave any open questions.  What I have found to be extremely helpful is to record a video of the issue.  Make sure you show the problem and walk through the calculation or script or whatever the source of the issue is.  This doesn’t have to be a professional video.  There are free screen recording applications, and there are also some pretty inexpensive ones.  If you use a Mac, Screenflick is 35 bucks and well worth it.  It gives you the basics.  If you use SnagIt, it also works just fine.

If you are a Windows user, there is a nice free option, but you probably haven’t noticed it, and for good reason.  If you look through the Start menu, you won’t find any reference to a screen recorder. You’re not missing it; it simply isn’t there. That’s because rather than making the screen recorder a standalone app, Microsoft included it as a feature of the Xbox Game Bar in Windows 10.

Your Second Step

After you open a ticket and provide your video of the issue, proving there is an issue without a doubt, follow these steps

  • Make sure you get a duty manager assigned and push for escalation
  • Call 1-800-223-1711
  • Press 1 and enter your SR number
  • Press 2 to speak with a manager
  • Do not press 1 to speak with Engineer, this will likely result in going straight to Voicemail
  • Tell the Support Hub the critical impact you are facing
  • Request a Duty Manager callback ASAP
  • Repeat the request at level 2, then level 3, up to level n, until you get what you want
  • Call hourly until you get what you want – the squeaky wheel gets the grease

During this process, here are some things to do to make sure you get the most out of the experience.

  • Speak with the support hub manager
  • Describe the impact, include key project milestones at risk and communicate what you need for progress
  • If your issue is CRITICAL you should ALWAYS make a request for a duty manager call back.
  • If you ask for a duty manager call back, wait 30-60 minutes until you receive the call and reiterate the problem
  • Be strategic with your request
  • Don’t just say you need to escalate – describe the impact and communicate what you need for progress
  • Keep in mind, your SR will NOT appear in an “escalated” state unless you negotiate this with a duty manager

That’s A Wrap

Should you have to do all this?  No.  The experience should be better, but yelling about it and banging your fists on the table won’t change anything.  You have to play the game and go through the process.  If you use these tactics, I guarantee you will improve the response.

 

 




Convert a Planning Load File to an Essbase Load File

There are a ton of reasons to convert a Planning load file to an Essbase load file.  Maybe you are migrating a file from one environment to another, or simply want to load the file faster; either way, there are reasons to use the Essbase format.

Oracle is working on an enhancement, which should be released in the next month or two, that will use a load rule to load data to the app using the Essbase load format.  That means the logging will be much improved: it won’t stop at the first failed line, and it will log all the errors, just like the Planning load format.  That is great news for those of us that use the Planning format purely for the logging.

Performance

Before I get into the script, I want to touch on the speed of this method.  The file I used, based on a real situation, was over 89 million lines (yes, that is correct, million) and took over 5 hours to load as a Planning file.  It had to be split into three files to be under the 2GB limit, but it loaded successfully.  The file was received late in the morning and had to be loaded before the start of the day, so a 5 to 6-hour processing time was unacceptable.  By the way, yes, the file was sorted appropriately based on the sparse and dense settings.
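If you ever need to do that split yourself, a minimal sketch using the standard split utility is below.  It assumes the Planning format needs the header line repeated on each piece and that roughly 30 million lines keeps each file under 2GB for this data; adjust the line count for yours.

head -1 Data.csv > header.csv
tail -n +2 Data.csv | split -l 30000000 - part_
for f in part_*; do cat header.csv "$f" > "load_$f.csv"; done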

I was able to build a Unix/Linux script using awk to convert this file to an Essbase load format, and the conversion only took about 9 minutes.  The improved load time was pretty drastic: the load finished in under 15 minutes.

For testing, it was great, and it was perfect for improving the processing time until the source system could rebuild the export in the Essbase format.  Just to reiterate, I added less than 10 minutes to convert the file and reduced the load time by 4.5 hours, so it was worth the effort.

The Catch

Before I continue, if you are unfamiliar with why there are two load formats, here is the difference.  The Essbase format loads the data directly to Essbase.  The Planning format bounces the file off the Planning repository to convert any smart list string to the appropriate number, which is what is stored in Essbase.  This process creates a new file on the server, in an Essbase load format, with the numeric representation of each smart list account.  If you have no smart list conversions, this entire process is done for no reason, which was the case in this situation.  So, this isn’t the answer in every situation.

The Script

Before I get into the script, if you know me you know I love my Mac.  One of the reasons is that I have the performance of a Mac, I can obviously run Windows whenever I want, and I have the ability to run Bash scripts through the terminal.  I am not a Bash scripting expert, but it is extremely powerful for things like this, and I am learning more as I need to build out functionality.

If you are a Windows user, you can install and use Linux Bash scripting in Windows 10.  You can read about it here.

There are several languages that can be used, but I chose AWK, which is a domain-specific language designed for text processing and typically used as a data extraction and reporting tool. It is a standard feature of most Unix-like operating systems.

First, the script.  Here it is.  I put the awk on multiple lines so it is a little more readable, but this is one command.

SOURCEFILE="Data.csv";
LOADFILE="DataLoad.csv";
HEADERMBR=$(head -1 $FILE | cut -d ',' -f2)

awk -v var="$HEADERMBR" 
     'BEGIN {FS=","; OFS="\t"} 
     NR>1 
     {gsub(/"/, ""); 
          print "\""$1"\"", "\""$3"\"", "\""$4"\"", 
                "\""$5"\"", "\""$6"\"", "\""$7"\"", 
                "\""$8"\"", "\""var"\"", $2}' 
     $SOURCEFILE > $LOADFILE;

There are a few things you will need to change to get this to work.  Update the source file and the load file variables to reflect the file to be converted and the file name of the converted file, respectively.  Inside the awk script, I have 8 fields, 1 through 8.  This represents the 8 columns in my Planning file, or the dimensions and the data.  Your file might have a different count of dimensions.  If your file has more or fewer delimited columns (ignore the quotes around the POV field and assume that each delimited field inside it is an additional field), update the script as needed.

In this example of a Planning file, each arrow represents a field.  The print section of the awk command changes the column order to fit what the Essbase load format requires.

Breaking down AWK

This won’t teach you everything there is to know about AWK, as I am still learning it, but it will explain the pieces used in this command so you can get started.

This piece is simply creating two variables, the source file and the converted file name, so there aren’t multiple places to make these changes when the script needs updated.

SOURCEFILE="Data.csv";
LOADFILE="DataLoad.csv";

The head command in Linux grabs specific lines, and -1 grabs the first line of the file.  I pipe that to the cut command to grab the second field of the header line, which is the dimension member I need to add to every row.  That gets stored in the HEADERMBR variable for later use.

HEADERMBR=$(head -1 $SOURCEFILE | cut -d ',' -f2)

The example file above is repeated here.  You can see that the second field of the header is the member, so HEADERMBR is set to source_SAP.
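Since the screenshot doesn’t translate to text, here is a made-up header line to illustrate the point; everything except source_SAP is invented, and the only thing that matters is that the member sits in the second comma-delimited field.

Account,source_SAP,"Point-of-View","Data Load Cube Name"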

Now the awk command.  Before I jump into it, the general form of awk looks like this.

awk 'script' filenames

And in the script above, the awk script has the following form.

/pattern/ { actions }

The pattern can also be one of the special patterns BEGIN and END.  Therefore, we can write an awk command using the following template.

awk '
     BEGIN { actions }
     /pattern/ { actions }
     /pattern/ { actions }
     ……….
     END { actions }
' filenames

There are also a number of parameters that can be set.

This script starts with a variable.  The -v option allows me to create a variable.  The first part of this command creates a variable named var and sets it equal to the HEADERMBR value.  I have to do this to use the variable in the script section.

-v var="$HEADERMBR"

The BEGIN block identifies the delimiter as a comma and sets the output delimiter to a tab.  FS and OFS are short for Field Separator and Output Field Separator.

'BEGIN {FS=","; OFS="\t"}

Since the file has a header line, and I don’t want that in my Essbase load file, I only include the lines greater than 1, or in other words skip the first line.  NR>1 accomplishes that.

NR>1

gsub allows me to make substitutions.  The source file has quotes around the POV field.  awk ignores the quotes, so it interprets the field with the start quote and the field with the end quote as fields with a quote in them.  These need to be removed, so the gsub replaces every quote with nothing.  The first parameter is a regular expression, so the literal quote is wrapped in forward slashes.

gsub(/"/, "");

The next piece is rearranging the columns.  I want to have the second column, the column with the data, at the end.  I have 8 columns, so I put them in the order of 1, skip 2, 3 through 8, then the variable that was created from the header line, then 2 (the data field).  It looks a little clumsy because I append a quote before and after each field, which is required for the Essbase load format.  But this is just printing out the fields surrounded by quotes (except for field 2, the data field) and separated by the output field separator, a tab.

print "\""$1"\"", "\""$3"\"", "\""$4"\"", "\""$5"\"", "\""$6"\"", "\""$7"\"", "\""$8"\"", "\""var"\"", $2

The last piece is identifying the file I want to do all this work to.

$SOURCEFILE

I want to send the results to a file, not the screen, and the > tells the command to send the results to a new file.

> $LOADFILE

The Result

The outcome is a file that is slightly larger due to the additional quotes and the member from the header being replicated on every row, normalizing the file.  It is converted to a tab delimited file rather than a comma delimited file.  The header is removed.  The app name is removed.  And the columns are slightly different, as the data column was moved to the end.
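To make that concrete, here is a made-up example; the member names are not from the real file, but the transformation is exactly what the script does.  A source line like this (data in the second field, POV in quotes):

Acct_100,1234.56,"Jan,FY18,OEP_Working,USD,c01,Prod_1"

becomes this tab delimited line, with the header member appended and the data moved to the end:

"Acct_100"	"Jan"	"FY18"	"OEP_Working"	"USD"	"c01"	"Prod_1"	"source_SAP"	1234.56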

That’s A Wrap

I am not ashamed to say this simple, basically one-line script took me forever to build and get to work.  By forever, I don’t mean days, but definitely hours.  That is part of the learning process though, right?  It was still quicker than waiting 6 hours for the file to load!  So now you have a basically one-line awk command that converts a Planning load file (or an export from Planning) to an Essbase load file, and you can get home to have dinner.




EPMAutomate And Apple – Overcoming Installation Woes

I love my Mac and I am getting really close to not needing my Windows VM. I don’t think Smartview for Office 365 is going to be a replacement for the Windows version in my lifetime, but that is the only thing really making me keep my VM current.

Installation Woes

First, I am not a UNIX guy.  I love some of the functionality in Terminal, especially manipulating files.  But I have tried to get EPMAutomate installed and working for a year and had just about given up.  Every few months I try again and fail.  For all you Unix/Linux people, I am going to embarrass myself.  For everybody else having the same challenges, I think this will get you over the hump.

Choose Your Poison

You probably know you can run commands in Terminal, and you probably assume Bash is the default scripting language.  As of Catalina, Apple uses Z shell as the default.  You can change this if you want.  I am not an expert, but from everything I read, people really like Z shell and prefer it over Bash.

If you want to change it, or just see what your default is, open System Preferences and go to Users & Groups.  Here is a trick: if you hold Control and click on your profile name, an Advanced Options choice appears.  Click that and you will see the login shell is probably set to /bin/zsh, which is Z shell.  You can change this to /bin/bash if you want to use Bash.
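If you prefer the terminal, the same check and change can be done with two commands (chsh will prompt for your password):

echo $SHELL          # shows the current default shell
chsh -s /bin/bash    # switch the default to Bash (use /bin/zsh to go back)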

It is important to know the above because you will want to set some environment variable defaults, which I will get to shortly, and you have to know the default to update the appropriate file.

Installing EPMAutomate

First, download EPMAutomate the normal way you download the utilities.  For cloud users, click on your user name after you log in and you will see a download option.  Download the Linux/Mac version.  It doesn’t matter where you download it, but if you download it to your home directory (the parent of Downloads, Documents, etc.), it is easier to install in that location.  This is my preferred area because the path to the tool is shorter and it is separated from my other content.  The download is a zip file, so unzip it to your preferred location.  There will be an epmautomate folder, inside that folder will be a bin folder, and the shell command is inside that.

You are finished with the easy part.

Configuring EPMAutomate

This is where I struggled and gave up a number of times.  Thank goodness I finally figured it out, because I was really tired of using my VM when I needed to run quick commands to do basic things.  There are a few things that have to be done to get it to work effectively.  First, for me, my Java home directory was not set, and I edited epmautomate.sh to include it.  Every time I updated the version, I had to go back in and do it again.  The other frustrating thing was that I couldn’t figure out how to make my default profile include the bin folder noted above in my path.  So, I always had to execute epmautomate with a full path or drill into the folder to make it work.  I also had to prefix it with a dot to get it to run.

You may have known enough to know you had to set your environment variables.  You may have known that a file existed, or should exist, named .profile in your home directory.  You may have even known how to edit that file, and that it is hidden (anything with a dot in front is a hidden file).  If you went down this path, tried to update your profile, and it didn’t work, here is why.

If your default scripting language is Z shell, or zsh, it looks to a profile named .zprofile, not .profile or .bash_profile!  There are several editors, but the easiest one for me to use is nano.  Open Terminal and enter the following command to edit/create the profile.

nano .zprofile

Update .zprofile

When you open the file, it will likely be blank.  There are two things we need to add.  First create your JAVA_HOME variable.  It will look something like this.

export JAVA_HOME='/Library/Java/JavaVirtualMachines/jdk-13.0.2.jdk/Contents/Home'

If you don’t have a Java JDK installed, download and install the latest JDK.  Once it is installed, type the following into Terminal, which will provide the path of the most recent version if multiple versions are installed.

/usr/libexec/java_home

If you want a specific version, add the version to the end of the command, like this.

/usr/libexec/java_home -v 1.8

Next, add the path of where you unzipped EPMAutomate to your path.  This should include the bin directory.  Your path may be different than what I chose.  The following will update the path variable to append your EPMAutomate path.

export PATH=$PATH:~/epmautomate/bin

Your .zprofile will look something like this.
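With the two exports above, and assuming the JDK 13 path from earlier, the whole file is just these two lines.

export JAVA_HOME='/Library/Java/JavaVirtualMachines/jdk-13.0.2.jdk/Contents/Home'
export PATH=$PATH:~/epmautomate/bin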

Hold the Control key down and hit X.  This will exit and ask you to save your changes.

Test Your Variables

Close your terminal window and open a new one; your profile should load now.  You can check that the variables are loaded by entering the following.  Each should return the variable: JAVA_HOME should return your Java path, and PATH should include your EPMAutomate directory.

echo $JAVA_HOME
echo $PATH

At this point, if you enter epmautomate.sh, it should return the current version installed.

That’s A Wrap

There you go, use it just like on Windows!  Rather than epmautomate or epmautomate.bat, you enter epmautomate.sh.  Other than that, everything is pretty much the same.  It seems simple, but it took me forever to figure out how to add my variables.  All my script templates are updated and I am loving it!

 




Change The Spread Method In An Existing PBCS Application

Changing application settings was always a little bit of a pain with an on-premise Planning application.  It was a time-intensive task of recreating the application, a few artifact groups at a time.  If you were a bit of a risk taker, you might have figured out that there were fields in the relational repository that could be updated.  Is there an easy way to do this with PBCS?  Since the repository is not accessible in the cloud, legacy methods are not available.  That said, I think it is easier, and seemingly less risky, with a PBCS application.

How To Perform Configuration Changes

The basic steps are very simple.  If migrations are new to you, take caution and make sure the application backup is readily available.  Always try this in the test environment first.

  1. Run a migration backup by going to the navigation menu and selecting Migration under the Tools header and clicking the Backup button.
  2. Download the backup by changing the view to Snapshots, clicking the ellipsis to the right of the created migration, and selecting Download.
  3. Unzip the downloaded file to a new folder.
  4. Edit the appropriate file / change the settings (see below).
  5. Zip the files previously unzipped to a new zip file.  Make sure the parent folder is not included.  The folder and files unzipped should be the root of the new zip file.
  6. Upload the zip file created above by going back to Migration under the Tools header in PBCS.  Change the view to Snapshots and click the upload button.  If the backup is too large, you may need to use EPM Automate.
  7. Delete the existing application by moving to the Overview area which can be found in the navigation menu under the Application header, select the Action button and click Remove Application.
  8. After the application is deleted, log back in to PBCS and choose the Migration option in the navigation menu.  Create a new application with the updated zip file by clicking the ellipsis and choosing Import.

Updating The Configuration

Before proceeding, note that I tried this by only migrating the configuration (not recreating the application) and it didn’t work.  The application had to be recreated, so the entire backup was required.  So, although only one file is updated, it is still important to take a full backup.

The migration files hold everything you need to update pieces and parts of an application.  In this situation, the focus will be on the configuration options, specifically the spread method.  Since this has to be set when the application is created, the spread method is often not fully decided on and may need to be changed later.  Changing it after the fact is pretty easy.  Navigate to and open the Application Definition.xpad file.  This is inside the HP-xxx folder, where xxx represents the name of the application.  This is a text file, so it can be opened in any text editor.  If Notepad is used, the line feeds won’t be visible and all the lines will be smashed together; Notepad++ is a recommended alternative.  See below for the full path of the xpad file.

The Application Definition.xpad file, when opened, should look similar to the following.  This is not a full representation of the file so expect it to be larger when opened.

Scroll down a short way and find the WeeksDistribution property.  It will likely have a 445 pattern or will read Even.  Change this option to the preferred method and save the file.  The options are 445, 454, 544, and Even.

There are, as you can see, other options that can be changed.  Although I have only changed the spread method, I am moderately confident that the others, if changed here, would be reflected when the zip file is imported.

 <Calendar>
  <BaseTimePeriod>12 Months</BaseTimePeriod>
  <WeeksDistribution>Even</WeeksDistribution>
  <AppStartYear>2015</AppStartYear>
  <FiscalYearStartDate>SameCalendarYear</FiscalYearStartDate>
  <StartMonth>Jan</StartMonth>
  <NumberOfYear>15</NumberOfYear>
  <AllYearsParent>Y</AllYearsParent>
</Calendar>

Note that some of these are not options when the application is created, like whether the application has a parent for years.  I have not tinkered with those for that reason, but they are there if you want to try.

Importing A File Too Large For The UI

As stated above, large files cannot be loaded through the PBCS UI.  I have run into this before and believe the maximum size of the file that can be uploaded is less than 2GB.  EPM Automate has an upload file command that overcomes this.

epmautomate login username userpassword https://planning-domain.pbcs.datacenter.oraclecloud.com domain
epmautomate uploadfile "[path]\[filename].zip"

example:  
epmautomate login kgoodfriend GoJackets https://planning-A123345.pbcs.us6.oraclecloud.com A12345
epmautomate uploadfile "c:\backups\PBCS_In2Hyperion.zip"

Finishing Up

There are a lot of useful things you can do with the migration files.  Making changes is sometimes easier in bulk than one artifact at a time.  Many free tools are available to find and replace text in multiple files, and even use patterns (a quick command line sketch follows the list below).  Some changes require a bit of hunt and peck if the strings are common and occur in more places than you want to change.  If you have an application name that is similar to a database name, it gets a little more tedious; replacing a database named Plan would likely be more work because "plan" exists in many places, not just as a database name.  Here are some thoughts and uses.

  • Changing the application name
  • Changing database names
  • Updating member names in all forms and rules
  • Finding forms and rules that use members to be removed
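As referenced above, a short Groovy script can handle bulk find and replace across the extracted migration folder.  This is only a sketch under stated assumptions: the root path, the file extensions, and the search and replace values are all placeholders, and it should be run against a copy of the backup first.

// Minimal sketch: find and replace text across artifacts in an extracted migration folder
// Root path, extensions, and search/replace values are assumptions; work on a copy of the backup
def root = new File('C:/backups/extracted')
def search = /\bOldDbName\b/   // whole-word regex so partial matches are not touched
def replace = 'NewDbName'
def extensions = ['.xml', '.csv', '.xpad']

root.eachFileRecurse { f ->
    if (f.isFile() && extensions.any { f.name.toLowerCase().endsWith(it) }) {
        def text = f.getText('UTF-8')
        def updated = text.replaceAll(search, replace)
        if (updated != text) {
            f.setText(updated, 'UTF-8')
            println "Updated ${f.path}"
        }
    }
}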



Adventures in Groovy – Part 38: Challenge Accepted (Exporting Data With The System Date In The Filename)

I had a great question today so challenge accepted!  A reader asked if it was possible to run a data export and have the system date in the file name.  The answer is very simply, yes.  I don’t have any content around the question, so I will answer it in two ways.

Option One – Groovy

If you have read any of the pieces of this series, you know how excited I am about Groovy.  Yeah, I know, Groovy can do everything.  You are tired of hearing me say that.  I know it can’t do everything, but it comes close!  In Exporting Data in PBCS With Business Rules, I talk about exporting data with a business rule.  In Part 1 of my Groovy Series, I lay out how to execute a business rule with Groovy and how you can alter the Essbase calculation at run time.  If we put these two concepts together, we can set the name of the export to anything we want, even a system date.

The Date() method will return a date in a default format, and inside a Groovy calculation it uses GMT time.  Since I am in EST, it is always 5 hours ahead.  So, if the date is printed at 5PM EST, the result is Wed Jan 16 22:00:00 GMT 2019.  This can be altered by formatting it to just about anything your heart desires.  I am sure you see where I am going.  If you google “format date Groovy” it will list nearly 50 options for date and time: AM/PM or military time, the month as a number, a 3 digit abbreviation, or the whole month, as examples.  For this example, the file name will be just the system date.

def currentDate =  new Date().format("MM_dd_yyyy")
// this returns a two digit month, two digit day, and four digit year, separated by underscores

If you don’t want GMT, there is a trick.  The Date constructor also accepts a date.  It can be a fully qualified date like “1/15/19 22:00:00”, or the number of milliseconds since January 1, 1970, 00:00:00 GMT.  This might sound crazy, but in a Groovy calculation you can use currentTimeMillis() to get exactly that.  Again, since this is GMT and EST is what we want, add (or subtract) 3,600,000 milliseconds for each hour of offset from GMT.  If these two methods are combined, the appropriate time can be obtained.

def adjEST = currentTimeMillis() + (3600000 * -5)
def currentDate =  new Date(adjEST).format("MM_dd_yyyy")
// This would give me the date 5 hours behind GMT
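Another option, if the calculation environment allows it, is to let the formatter handle the time zone instead of adjusting milliseconds by hand, which also accounts for daylight saving time.  This is a sketch, and the time zone ID is an assumption based on my EST example.

// Alternative sketch: pass a TimeZone to format instead of shifting milliseconds
// "America/New_York" is an assumption; substitute the desired time zone ID
def currentDate = new Date().format("MM_dd_yyyy", TimeZone.getTimeZone("America/New_York"))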

Now that we have the system date (and this is obviously my interpretation because I want it represented in my home time zone), we can combine this with the examples referenced above to export the data to a file with a name that includes the “system date.”

If you don’t know, the last string in a Groovy calculation is what actually gets submitted to the calculation engine, so a StringBuilder, as used in the referenced examples, is not required.  The drawback is that we don’t have the option to log the actual calculation script, which is helpful when debugging but isn’t always critical to have documented in a production application via a log.  So, here goes!  The only change from the example is that we define a date variable formatted as MM_dd_yyyy and pass it to the calculation.

// Get the date in EST and format as needed
def adjEST = currentTimeMillis() + (3600000 * -5)
def currentDate = new Date(adjEST).format("MM_dd_yyyy")

// Pass the calculation with the currentDate variable as the file name
"""
 SET DATAEXPORTOPTIONS
 {
 DataExportLevel LEVEL0;
 DataExportDynamicCalc OFF;
 DataExportNonExistingBlocks OFF;
 DataExportDecimal 4;
 DataExportPrecision 16;
 DataExportColFormat ON;
 DataExportColHeader Period;
 DataExportDimHeader ON;
 DataExportRelationalFile ON;
 DataExportOverwriteFile ON;
 DataExportDryRun OFF;
 }

 FIX(@Relative("Account", 0),
     @Relative("Years", 0),
     @Relative("Scenario", 0),
     @Relative("Version", 0),
     @Relative("Entity", 0),
     @Relative("Period", 0),
     @Relative("<em>custom_dim_name_1</em>", 0),
     @Relative("<em>custom_dim_name_1</em>", 0),
     @Relative("<em>custom_dim_name_1</em>", 0))
  DATAEXPORT "File" "," "/u03/lcm/${currentDate}.txt" "";
 ENDFIX
"""

This would produce a file in the inbox named 01_16_2019.txt.  By adding a variable and formatting it as the date we want in the file name, we have our solution.  Two additional lines and adding the variable to the Essbase calculation are all that is required.

Option Two – Without Groovy

This is a little more difficult without Groovy.  I have not found a way to use something like the following as the file name.  If a reader sees this and knows of a way to do it, please share it and post a comment.

@CONCATENATE (@CONCATENATE ("/u03/lcm/", @FORMATDATE(@TODAY(),"mm_dd_yyyy")),".txt")

I assume the original question doesn’t really require the file to sit in the inbox with a specific name.  By using EPM Automate to execute the calculation with a static export name, the file can be downloaded and renamed.  A simple PowerShell script could be used to rename the file.  If there really is a need to have it in the inbox, it can be uploaded back.  I don’t know of any EPM Automate method that renames an existing file in the inbox.

$localFile = "c:\PBCS_Downloads\"
$exportFileName = "DataExport.txt"
Start-Process "epmautomate downloadfile ""$exportFileName"""
# Assume the script is executed from c:\PBCS_Downloads\ or the file is moved to this folder
$fileObj = get-item "c:\PBCS_Downloads\dataexport.txt"
rename-item "$($fileObj.FullName)" "$(Get-Date -Format "MM_dd_yyyy")$($fileObj.Extension)"
Start-Process "epmautomate uploadfile ""$($fileObj.DirectoryName)\$(Get-Date -Format ""MM_dd_yyyy"")$($fileObj.Extension)"""

Challenge Completed

Another challenge accepted has been easily completed.  Keep them coming!




Planning Data Loads: com.hyperion.planning.InvalidMemberException vs. java.lang.RuntimeException

I had a very interesting thing happen today that tripped me up.  When loading data to a PBCS database through Planning (not as an Essbase file), I had two types of errors.  I have never seen this before and this could cause some serious heartburn for administrators and possibly waste a ton of time trying to resolve.  So, I am sharing for the sake of saving somebody some effort.

One Error, Two Messages

The first error is the typical one caused by a member not being in the application: com.hyperion.planning.InvalidMemberException: The member CTR_123 does not exist for the specified cube or you do not have access to it.  We have all seen this.  CTR_123 was not in the hierarchy.  Once it was added and the data was reloaded, the issue was resolved.

The second issue was another error I have seen before, but not in PBCS: java.lang.RuntimeException: Not all dimensions were specified.  Normally, this is related to a correctly formatted file that has a member from one dimension in the wrong column, or a column that is null.

As I often do, I created a Smart View retrieve and added the members in the load file one at a time.  When I found the member that caused a retrieve error, I went into the dimension editor to search for it.  To my surprise, it was there!  What?  But when I looked at the properties, it was not valid for the cube I was trying to load the data to.  This might be overlooked by a mere mortal (wink), but once the member was enabled for the cube in question, the load error was resolved.

Conclusion

So, why do we get two error types for the same underlying problem (the member doesn’t exist in the database)?  I can only assume that since I loaded this through Planning, it tripped up on the fact that the member was in Planning but not in the specific database I was trying to load.  When I loaded the same file as an Essbase file, I got the same error for both lines, as expected: member not found.

Hopefully this saves you some time.  If you have ever come across something similar, please share it with the community.  These are normally the things you find after a 12-hour day, and then you spend another 4 hours trying to figure them out.

Lastly, please enjoy a safe holiday and remind yourself how lucky you are and try to be thankful for the things you have and not be frustrated about the things you don’t.  Gobble Gobble!




Use Valid Intersections To Bypass Smart Push Security Errors

If you have used Smart Pushes, you have undoubtedly run into security issues.  The PBCS development team is working on a way to bypass this, but there is no release date.  If you haven’t noticed the problem, you have probably still hit it without knowing.  Here is the issue.  Dimensional security is typically set up so that users only have write access to specific accounts.  For example, they have access to update units and cost per unit, but not revenue, because revenue is calculated by multiplying units and cost per unit.  The problem is that a Smart Push won’t push revenue because the user doesn’t have access to write to that account.  Said another way, when users update the components of revenue, the revenue itself doesn’t get pushed to the reporting cube because the user doesn’t have write access.  Herein lies the problem: revenue won’t get updated in the reporting cube until an administrator runs a Data Map.

I understand why Oracle did this, but with no ability to override it, it creates issues with Smart Pushes and real time reporting.

Before I continue, I want to thank Pete over at Essbase Down Under for this brilliant idea.  Enter stage left, valid intersections!

What Are Valid Intersections?

Normally, valid intersections are used to link drop down menus.  For example, once Honda is selected in the first menu, the second menu only shows the models related to Honda, like Accord, Civic, and Prelude.  If you know what a Prelude is, you have been doing this as long as I have!  Valid intersections can also limit what users have access to write to, like the rows a user can edit in a grid.  If valid intersections are set up on accounts and accounts are in the rows of a Data Form, the grid will show all the accounts, but the invalid accounts are greyed out and not editable.  Since this isn’t truly security, when the Smart Push is executed, the user still has the access to push calculated accounts.

Why Not Cherry Pick Rows As Read Only?

Sure, rows can be set to read only.  If you are thinking about going down this path, beware!  There are some huge issues with this approach.

  1. As hierarchies change, forms will have to be manually changed.  By singling out specific rows, it means administrators lose the benefit of functions, like level 0 descendants.  So, rows will have to be manually maintained every time metadata changes.
  2. Since this is a form setting, users who run ad-hoc retrieves can still write data to these accounts.

Valid Intersections In Action

You might be thinking, yeah, but valid intersections also use level 0 descendants?  They do, but just like in forms, exclusions can be applied.

In the example below, the rows/accounts with the check marks have been set up to be valid with the working scenarios.

In this example, the dimensional security is set to write for all accounts on this form.  As visible below, the write access granted by dimensional security is not what drives the form.  Users have access to the accounts they need, but can’t edit the ones that aren’t set up as valid intersections (the ones selected in the valid intersections setup above).

When the form is saved, and the Smart Push executes, the Smart Push uses dimensional security.  All the data pushes since the users have write access to all accounts!
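For reference, this is roughly how a Smart Push attached to the form can also be fired from a Groovy rule.  This is a minimal sketch: the Smart Push name is hypothetical, and the key point is that it runs with the saving user’s dimensional security, which is why the valid intersections approach matters.

// Minimal sketch: run a Smart Push that is attached to the form from a Groovy rule
// "Push_To_Reporting" is a hypothetical Smart Push name defined on the form
if (operation.grid.hasSmartPush("Push_To_Reporting")) {
    // Executes under the saving user's dimensional security
    operation.grid.getSmartPush("Push_To_Reporting").execute()
}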

What About Ad-hoc?

Don’t worry, users can’t change data on the invalid intersections.  Well, sort of.  The rows for the invalid accounts are still greyed out in ad-hoc.  Here is where it gets blurry.  Users can still type into those grey cells.  Although this might be confusing to a user, the change is ignored on submission.  Below shows the accounts from the form above, but in an ad-hoc grid.  Notice what happens when the first two rows are changed.  The first row is saved, but the second is not; it returns to its original value.

Finishing Up

I wish there were a parameter on Smart Pushes (Groovy or not) that allowed the form developer to run the push with admin privileges.  Until Oracle provides that, this is a great way to fix the problem, and everybody that uses Smart Push will run into it.  Set up dimensional security to write to all accounts.  Set up valid intersections for the accounts that users need to be able to edit.  And voilà!