
Getting The Most From Your Oracle Support

Nobody will argue that Oracle Support is a world-class experience.  Like it or not, the service is built to support thousands of customers, and individuals can get lost in the process.  But if you navigate the process correctly, you can get results.

What To Do First

Your first step should be to troubleshoot the issue yourself.  Give support everything you can.  The more you provide, the fewer “did you plug in the computer” type questions you will get that delay you getting help.  Be clear and concise.  The first few levels of support are likely less experienced than you and will only be able to provide novice-level help most of the time.

Next, make sure the content of your support ticket is complete.  Don’t leave any open questions.  What I have found to be extremely helpful is to record a video of the issue.  Make sure you show the problem and walk through the calculation or script or whatever is the source of the issue.  This doesn’t have to be a professional video.  There are free screen recording applications, and some pretty inexpensive ones.  If you use a Mac, ScreenFlick is 35 bucks and well worth it.  It gives you the basics.  If you use SnagIt, that also works just fine.

If you are a Windows user, there is a nice free option, but you probably haven’t noticed it, and for good reason.  If you look through the Start menu, you won’t find any reference to a screen recorder.  You’re not missing it… it simply isn’t there.  That’s because rather than making the screen recorder a standalone app, Microsoft included it as a feature of the Xbox Game Bar in Windows 10.
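If it helps, the shortcuts (as of recent Windows 10 builds) are Windows+G to open the Game Bar, or Windows+Alt+R to start and stop a recording directly.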

Your Second Step

After you open a ticket and provide your video of the issue, proving there is an issue beyond a doubt, follow these steps:

  • Make sure you get a duty manager assigned and push for escalation
  • Call 1-800-223-1711
  • Press 1 and enter your SR number
  • Press 2 to speak with a manager
  • Do not press 1 to speak with the engineer; this will likely result in going straight to voicemail
  • Tell the Support Hub the critical impact you are facing
  • Request a Duty Manager callback ASAP
  • Repeat the request at level 2, then level 3, up to level n, until you get what you want
  • Call hourly until you get what you want – the squeaky wheel gets the grease

During this process, here are some things to do to make sure you get the most out of the experience.

  • Speak with the support hub manager
  • Describe the impact, include key project milestones at risk, and communicate what you need to make progress
  • If your issue is CRITICAL, you should ALWAYS request a duty manager callback
  • If you ask for a duty manager callback, expect to wait 30-60 minutes for the call, then reiterate the problem
  • Be strategic with your request
  • Don’t just say you need to escalate – describe the impact and communicate what you need for progress
  • Keep in mind, your SR will NOT appear in an “escalated” state unless you negotiate this with a duty manager

That’s A Wrap

Should you have to do all this?  No.  The experience should be better, but yelling about it and banging your fists on the table won’t change anything.  You have to play the game and go through the process.  If you use these tactics, I guarantee you will improve the response.

Convert a Planning Load File to an Essbase Load File

There are a ton of reasons to convert a Planning load file to an Essbase load file.  Maybe you are migrating a file from one environment to another, or simply want to load the file faster; either way, there are times when the Essbase format is the better option.

Oracle is working on an enhancement, which should be released in the next month or two, that will use a load rule to load data to the app in the Essbase load format.  That means the logging will be much improved: it won’t stop at the first failed line, and it will log all the errors, just like the Planning load format does.  That is great news for those of us who use the Planning format purely for the logging.

Performance

Before I get into the script, I want to touch on the speed of this method.  The file I used, based on a real situation, was over 89 million lines (yes, that is correct, million) and took over 5 hours to load as a Planning file.  It had to be split into three files to stay under the 2GB limit, but it loaded successfully.  The file was received late in the morning and had to be loaded before the start of the day, so a 5 to 6-hour processing time was unacceptable.  And yes, the file was sorted appropriately based on the sparse and dense settings.

I was able to build a Unix/Linux script using awk to convert this file to an Essbase load format, and the conversion only took about 9 minutes.  The improved load time was pretty drastic: the load finished in under 15 minutes.

For testing, it was great, and it was the perfect stopgap until the source system could rebuild the export in the Essbase format.  Just to reiterate: I added less than 10 minutes to convert the file and reduced the load time by 4.5 hours, so it was worth the effort.

The Catch

Before I continue, if you are unfamiliar with why there are two load formats, here is the difference.  The Essbase format loads the data directly to Essbase.  The Planning format bounces the file off the Planning repository to convert any Smart List string value to the appropriate number, which is what is stored in Essbase.  This process creates a new file on the server, in an Essbase load format, with the numeric representation of each Smart List value.  If you have no Smart List conversions, this entire process is done for no reason, which was the case in this situation.  So, this isn’t the answer in every situation.
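As a hypothetical illustration (the account and values are made up), a Planning file might hold the string Approved in the data column for an Approval_Status account, while the file produced for Essbase holds the numeric ID stored behind that Smart List entry, say 2.  If none of your accounts are Smart Lists, that translation step buys you nothing.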

The Script

Before I get into the script, if you know me, you know I love my Mac.  One of the reasons is that I have the performance of a Mac, I can obviously run Windows whenever I want, and I have the ability to run Bash scripts through the terminal.  I am not a Bash scripting expert, but it is extremely powerful for things like this, and I am learning more as I need to build out functionality.

If you are a Windows user, you can install and use Linux Bash scripting in Windows 10 through the Windows Subsystem for Linux (WSL).

There are several languages that can be used, but I chose AWK, which is a domain-specific language designed for text processing and typically used as a data extraction and reporting tool. It is a standard feature of most Unix-like operating systems.

First, the script.  Here it is.  I put the awk on multiple lines so it is a little more readable, but this is one command.

SOURCEFILE="Data.csv";
LOADFILE="DataLoad.csv";
HEADERMBR=$(head -1 $SOURCEFILE | cut -d ',' -f2)

awk -v var="$HEADERMBR" \
     'BEGIN {FS=","; OFS="\t"}
      NR>1 {gsub(/"/, "");
           print "\""$1"\"", "\""$3"\"", "\""$4"\"",
                 "\""$5"\"", "\""$6"\"", "\""$7"\"",
                 "\""$8"\"", "\""var"\"", $2}' \
     $SOURCEFILE > $LOADFILE;

There are a few things you will need to change to get this to work.  Update the source file and the load file variables to reflect the file to be converted and the name of the converted file, respectively.  Inside the awk script, I have 8 fields, 1 through 8.  This represents the 8 columns in my Planning file: the dimensions and the data.  Your file might have a different count of dimensions.  If your file has more or fewer delimited columns (ignore the quotes around the POV field and treat each delimited field inside it as an additional column), update the script as needed.

The example below shows a Planning file, where each delimited column is a field.  The print section of the awk command changes the column order to fit what the Essbase load format requires.
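Here is a hypothetical example (the member names, header, and cube name are made up, but the layout matches the file described above).  The source Planning file:

Account,source_SAP,Point-of-View,Data Load Cube Name
Sales,1000,"Jan,FY20,Actual,Final,E100,P100",Plan1
COGS,600,"Jan,FY20,Actual,Final,E100,P100",Plan1

And the converted, tab-delimited Essbase load file:

"Sales"	"Jan"	"FY20"	"Actual"	"Final"	"E100"	"P100"	"source_SAP"	1000
"COGS"	"Jan"	"FY20"	"Actual"	"Final"	"E100"	"P100"	"source_SAP"	600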

Breaking down AWK

This won’t teach you everything there is to know about AWK, as I am still learning it, but it will explain the pieces used in this command so you can get started.

This piece simply creates two variables, the source file and the converted file name, so there aren’t multiple places to make these changes when the script needs to be updated.

SOURCEFILE="Data.csv";
LOADFILE="DataLoad.csv";

The head command in Linux grabs lines from the top of a file, and -1 grabs just the first line.  I pipe that to the cut command to grab the second field of the header line, which is the dimension member I need to add to every row.  That gets stored in the HEADERMBR variable for later use.

HEADERMBR=$(head -1 $SOURCEFILE | cut -d ',' -f2)

In the example file above, the second field of the header line is the member, so HEADERMBR is set to source_SAP.

Now the awk command.  Before I jump into it, a generic awk call looks like this.

awk 'script' filenames

And in the script above, the awk script has the following form.

/pattern/ { actions }

The pattern can also be one of the special patterns BEGIN and END.  Therefore, we can write an awk command like the following template.

awk '
     BEGIN { actions }
     /pattern/ { actions }
     /pattern/ { actions }
     ……….
     END { actions }
' filenames

There are also a number of parameters that can be set, such as -F to set the field separator, -v to define a variable, and -f to read the awk program from a file.

This script starts with a variable.  The -v option allows me to create a variable.  The first part of this command creates a variable named var and sets it equal to the HEADERMBR value.  I have to do this to use the shell variable inside the script section.

-v var="$HEADERMBR"

The BEGIN block identifies the delimiter as a comma and sets the output delimiter to a tab.  FS and OFS are short for Field Separator and Output Field Separator.

'BEGIN {FS=","; OFS="\t"}

Since the file has a header line, and I don’t want that in my Essbase load file, I only process the lines greater than 1, skipping the first line.  NR>1 accomplishes that; it is the pattern in front of the action block, so the actions only run for line numbers greater than 1.

NR>1

gsub gives me the ability to make substitutions.  The source file has quotes around the POV field.  awk does not treat the quotes as text qualifiers, so it interprets the field with the start quote and the field with the end quote as fields with a quote in them.  These need to be removed, so the gsub replaces every quote with nothing.  The first parameter is a regular expression, so the quote is enclosed in slashes.

gsub(/"/, "");
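Using the hypothetical row from earlier, before the gsub runs, $3 is "Jan (with a leading quote) and $8 is P100" (with a trailing quote); after the gsub, they are simply Jan and P100.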

The next piece rearranges the columns.  I want to have the second column, the column with the data, at the end.  I have 8 columns, so I print them in the order of 1, skip 2, 3 through 8, then the variable that was created from the dimension member in the header line, then 2 (the data field).  It looks a little clumsy because I append a quote before and after each field, which is required for the Essbase load format.  But this is just printing out the fields surrounded by quotes (except for field 2, the data field) and separated by tabs.

print "\""$1"\"", "\""$3"\"", "\""$4"\"", "\""$5"\"", "\""$6"\"", "\""$7"\"", "\""$8"\"", "\""var"\"", $2

The last piece is identifying the file I want to do all this work to.

$SOURCEFILE

I want to send the results to a file, not the screen, and the > tells the command to send the results to a new file.

> $LOADFILE

The Result

The outcome is a file that is slightly larger due to the added quotes and the member from the header being replicated on every row, normalizing the file.  It is converted to a tab-delimited file rather than a comma-delimited file.  The header is removed.  The app name is removed.  And the columns are in a slightly different order, as the data column was moved to the end.

That’s A Wrap

I am not ashamed to say this simple, basically one-line script took me forever to build and get working.  By forever, I don’t mean days, but definitely hours.  That is part of the learning process though, right?  It was still quicker than waiting 6 hours for the file to load!  So now you have basically a one-line awk command that converts a Planning load file (or an export from Planning) to an Essbase load file, and you can get home in time for dinner.




EPMAutomate And Apple – Overcoming Installation Woes

I love my Mac and I am getting really close to not needing my Windows VM.  I don’t think Smart View for Office 365 is going to be a replacement for the Windows version in my lifetime, but that is the only thing really making me keep my VM current.

Installation Woes

First, I am not a UNIX guy.  I love some of the functionality in the terminal, especially for manipulating files.  But I have tried to get EPMAutomate installed and working for a year and have just about given up.  Every few months I try again and fail.  For all you Unix/Linux people, I am going to embarrass myself.  For everybody else having the same challenges, I think this will get you over the hump.

Choose Your Poison

You probably know you can run commands in the terminal, and you probably assume Bash is the default scripting language.  As of Catalina, Apple uses Z shell as the default.  You can change this if you want.  I am not an expert, but from everything I read, people really like Z shell and prefer it over Bash.

If you want to change it, or just see what your default is, open up System Preferences and go to Users & Groups.  Here is a trick: if you hold Control and click on your profile name, an option for Advanced Options appears.  Click that and you will see the login shell is probably set to /bin/zsh, which is Z shell.  You can change this to /bin/bash if you want to use Bash.
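If you prefer to stay in the terminal (a quick sketch; both commands are standard on macOS), you can check and change the default shell there instead:

echo $SHELL
chsh -s /bin/bash

The first prints your current default shell; the second switches it to Bash (it will prompt for your password).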

It is important to know the above because you will want to set some environment variable defaults, which I will get to shortly, and you have to know your default shell to update the appropriate file.

Installing EPMAutomate

First, download EPMAutomate the normal way you download the utilities.  For cloud users, click on your user name when you log in and you will see a download option.  Download the Linux/Mac version.  It doesn’t matter where you download it, but if you extract it to your home directory (the parent of Downloads, Documents, etc.), it is easier to reference.  This is my preferred area because the path to the tool is shorter and it is separated from my other content.  The download is a zip file, so unzip it to your preferred location.  There will be an epmautomate folder.  Inside that folder is a bin folder, and the shell command is inside that.

You are finished with the easy part.

Configuring EPMAutomate

This is where I struggled and gave up a number of times.  Thank goodness I finally figured it out, because I was really tired of using my VM when I needed to run quick commands to do basic things.  There are a few things that have to be done to get it to work effectively.  First, for me, my Java home directory was not set, and I edited epmautomate.sh to include it.  Every time I updated the version, I had to go back in and do it again.  The other frustrating thing was that I couldn’t figure out how to make my default profile include the bin folder noted above in my path.  So, I always had to execute epmautomate.sh with a full path or drill into the folder to make it work.  I also had to prefix it with ./ to get it to run.

You may have known enough to know you had to set your environment variables.  You may have known that a file existed, or should exist, named .profile in your home directory.  You may have even known how to edit that file, and that it is hidden (anything with a dot in front is a hidden file).  If you went down this path, tried to update your profile, and it didn’t work, here is why.

If your default scripting language is Z shell, or zsh, it looks for a profile named .zprofile, not .profile or .bash_profile!  There are several editors, but the easiest one for me to use is nano.  Open the terminal and enter the following command to edit/create the profile:

nano .zprofile

Update .zprofile

When you open the file, it will likely be blank.  There are two things we need to add.  First create your JAVA_HOME variable.  It will look something like this.

export JAVA_HOME='/Library/Java/JavaVirtualMachines/jdk-13.0.2.jdk/Contents/Home'

If you don’t have a Java JDK installed, download and install the latest JDK.  Once installed, type the following into the terminal, which will print the path of the most recent version if multiple versions are installed.

/usr/libexec/java_home

If you want a specific version, add the version to the end of the command, like this.

/usr/libexec/java_home -v 1.8

Next, add the path of where you unzipped EPMAutomate to your path.  This should include the bin directory.  Your path may be different than what I chose.  The following will update the path variable to append your EPMAutomate path.

export PATH=$PATH:~/epmautomate/bin

Your .zprofile will look something like this.
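export JAVA_HOME='/Library/Java/JavaVirtualMachines/jdk-13.0.2.jdk/Contents/Home'
export PATH=$PATH:~/epmautomate/bin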

Hold the Control key down and hit X.  This will exit and ask you to save your changes; hit Y, then Enter, to confirm.

Test Your Variables

Close your terminal window and open a new one so your profile loads.  You can check that the variables are set by entering the following.  echo $JAVA_HOME should return your Java path, and echo $PATH should include your EPMAutomate directory.

echo $JAVA_HOME
echo $PATH

At this point, if you enter epmautomate.sh, it should return the current version installed.
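For example, a quick session looks the same as it would on Windows, just with the .sh extension (the user, password, and URL below are made up):

epmautomate.sh login jdoe SecretPassword https://example-planning.pbcs.oraclecloud.com
epmautomate.sh listfiles
epmautomate.sh logout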

That’s A Wrap

There you go, use it just like on Windows!  Rather than epmautomate or epmautomate.bat, you enter epmautomate.sh.  Other than that, everything is pretty much the same.  It seems simple, but it took me forever to figure out how to add my variables.  All my script templates are updated and I am loving it!





Hybrid Planning / Essbase Gotchas

Having the best of both worlds, ASO and BSO, doesn’t come without some gotchas.  Before you jump in with both feet, beware that some things are not supported in hybrid.  As of Friday, May 22, 2020, @ISMBR in Planning does NOT work.  I don’t know if this is a bug, but it is not documented as an unsupported function.  What is documented is the following list.  There isn’t a ton in this post, but I thought it would be beneficial to share this as a warning, as well as an easy way to find the list.  If you find more things that don’t work, please share them with the community.

  • @ACCUM
  • @ALLOCATE
  • @ANCEST
  • @ANCESTVAL
  • @AVGRANGE
  • @COMPOUND
  • @COMPOUNDGROWTH
  • @CORRELATION
  • @CREATEBLOCK
  • @CURRMBR
  • @CURRMBRRANGE
  • @DECLINE
  • @DISCOUNT
  • @GROWTH
  • @INTEREST
  • @IRR
  • @IRREX
  • @MDALLOCATE
  • @MDANCESTVAL
  • @MDPARENTVAL
  • @MDSHIFT
  • @MEMBER
  • @MOVAVG
  • @MOVMAX
  • @MOVMED
  • @MOVMIN
  • @MOVSUM
  • @MOVSUMX
  • @NPV
  • @PARENT
  • @PARENTVAL
  • @PTD
  • @SANCESTVAL
  • @SHIFT
  • @SLN
  • @SPLINE
  • @STDEV
  • @STDEVP
  • @STDEVRANGE
  • @SYD
  • @TREND
  • @XRANGE
  • @XREF
  • @XWRITE