Tuesday, September 1, 2015

Some Essbase and Planning stuff

This is a living post that will be updated with questions as and when they come to mind.

Q: Can I use Dynamic Calc members in a calc script?
- In a calculation script, you cannot calculate a dynamically calculated member or make a dynamically calculated member the target of a formula calculation.

- Dynamic Calc members CAN be used in scripts as a source. However, using dynamically calculated members in an outline formula or a calculation script can significantly affect calculation performance, because Essbase interrupts the regular calculation to perform the dynamic calculation.
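
A minimal sketch, assuming a Sample.Basic-style outline where Variance is a Dynamic Calc member (formula @VAR(Actual, Budget)): it may appear on the right-hand side of a formula, but never on the left.

/* Valid: the Dynamic Calc member is used only as a source */
FIX ("New York")
    "Budget" = "Actual" + "Variance";
ENDFIX

/* Invalid: a Dynamic Calc member cannot be a target */
/* "Variance" = "Actual" - "Budget";   <-- Essbase rejects this */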

Q: In which places can CREATEBLOCKONEQ be set?
Create Blocks on Equations can be enabled in (see the examples after this list):
EAS (database properties)
MaxL
ESSCMD
A calculation script
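
Two hedged examples (the database name is an assumption): in MaxL the setting corresponds to the create_blocks option, while in a calculation script it is a SET command.

/* MaxL: enable Create Blocks on Equations for a database */
alter database Sample.Basic enable create_blocks;

/* Calculation script: enable it for the duration of the script */
SET CREATEBLOCKONEQ ON;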

Q: What is the idea behind FIXing on a dense member and having a sparse-member equation for block creation?
First point to note: at restructure time, Essbase numbers all index entries, whether or not a block exists for them. That is also one way to arrive at the potential number of blocks.
Second point: a FIX is essentially an OR of sets, e.g. FIX (A, B, C) == FIX(A OR B OR C).
A FIX statement on ONLY sparse members searches the index and brings into memory all combinations for which blocks exist, that is, combinations with page-file entries. Note that you haven't asked for any dense member explicitly.
When a sparse equation is written in the calculation, only those targets for which blocks exist will be calculated. If any source block is absent, the results will be incorrect.

When a dense member is fixed along with sparse members, Essbase has to bring in all the sparse combinations in the FIX along with the dense member. Since a dense member cannot be fetched individually, the whole block is brought into memory for calculation.
A sparse equation forces the calculation to take place on all possible combinations in the FIX, whether or not a block exists. SET CREATEBLOCKONEQ forces Essbase to create a block whenever it encounters a calculation for which it has no placeholder; the calculation is then performed and the block is created.
Why is a non-constant value required on the right-hand side? Because CREATEBLOCKONEQ is designed to create blocks only when assigning non-constant values.
You can limit block creation by writing conditional logic, e.g.:
SET CREATEBLOCKONEQ ON;
FIX(<sparse members>, <a dense member>)
"Budget"
(
    IF ("Actual"->"Sales" <> #MISSING) /* Budget block created only if Actual data exists */
        "Budget" = "Actual";
    ENDIF
)
ENDFIX
This technique is slower than DATACOPY and should be used judiciously.

Q: What is a calculation pass?
A pass is when blocks are brought into memory for calculation. Essbase sometimes needs to calculate the value of a member more than once, which means bringing the same blocks into memory again. Though it tries its best to do all calculations in a single pass, certain conditions require two, e.g. percentage, rate, or per-unit-price calculations at parent members.
When the dimensions tagged as Accounts and Time are both dense, Essbase calculates everything in one pass (provided no member is tagged Two-Pass Calc).
When either of them is sparse, Essbase calculates in two passes.
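
The classic illustration of the ratio case (a hedged sketch; Profit % is the stock Sample.Basic example): the member carries a formula plus a Two-Pass tag, so Essbase recomputes it in a second pass after consolidation instead of letting parents sum their children's percentages.

Profit % (Two-Pass Calc, Dynamic Calc)
    Formula: "Profit" % "Sales";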

Q: What is the calculation mode?
Calculation mode can be either top-down or bottom-up. The default is bottom-up: Essbase calculates members in the order in which they are arranged in the outline, touching only existing blocks. However, when Essbase evaluates a calc script and finds complex formulas, e.g. business functions or cross-dimensional operators referencing parent members or other member combinations (such as YearTotal->AllStores), it switches to top-down mode for those formulas automatically. The mode can be set explicitly with the CALCMODE options in calc scripts and member formulas, or in the essbase.cfg file.
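
Two hedged knobs for controlling this (both are documented calc commands; the formula itself is a made-up example):

SET FRMLBOTTOMUP ON;    /* calc script: force bottom-up evaluation of complex formulas */

"Budget"
(
    @CALCMODE(BOTTOMUP);    /* pin the calculation mode for this one formula */
    "Budget" = "Actual" * 1.1;
)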


Q: What are the two calculation modes in terms of threads?
Regardless of how a calculation is triggered, Essbase can execute the calculation in one of two modes:
- Serial calculation is the default. With serial calculation, each calculation pass is scheduled to run on a single processor. If invoked from a calculation script, the calculations are executed sequentially in the order in which they appear in the script.
- Parallel calculation breaks each calculation pass into sub-tasks. The sub-tasks that can run independently of one another are scheduled to run simultaneously, on up to 64 threads (block storage databases on 32-bit platforms) or up to 128 threads (block storage databases on 64-bit platforms, and aggregate storage databases on either platform). Each thread may be on a different processor.

Essbase reads the script to evaluate whether parallel calculation is possible; if not, it executes serially even if CALCPARALLEL is set.

To change from the default serial calculation to parallel calculation, change at most two configuration settings and restart the server, or add an instruction to the calculation script (see "Enabling Parallel Calculation" in the DBAG).
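
A hedged sketch of both routes (the thread counts are arbitrary examples). In essbase.cfg, server- or application-wide:

CALCPARALLEL 4
CALCTASKDIMS 2

Or per calculation script:

SET CALCPARALLEL 4;
SET CALCTASKDIMS 2;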

- When you calculate a database, Essbase automatically displays the calculation order of the dimensions for each pass through the database and tells you how many times it has cycled through the database during the calculation. Essbase displays this information in the ESSCMD window and in the application log.
For each data block, Essbase decides whether to do a dense or a sparse calculation. The type of calculation it chooses depends on the type of values within the data block. When you run a default calculation (CALC ALL) on a database, each block is processed in order, according to its block number.

Q: If member A is
                A             (formula: A = P+Q*R)
                                P +
                                Q +
                                R +
and we apply CALC DIM on A's dimension, then what is the result?
The formula is applied: during CALC ALL / CALC DIM, a member formula overrides the member's unary consolidation, so A = P+Q*R rather than P+Q+R.

Q: What is the difference between Dynamic Calc and Dynamic Calc and Store? When is Dynamic Calc and Store needed?
Dynamic Calc and Store values are stored once they have been calculated dynamically, so subsequent retrievals do not require recalculation.
Essbase marks the stored blocks for recalculation when:
1. A calc script is run
2. The database is restructured
3. The CLEARBLOCK DYNAMIC command is executed
Upon the next retrieval, the Dynamic Calc and Store block is recalculated and stored again.
Essbase DOES NOT mark blocks for recalculation when data is updated, say when a data load or lock-and-send changes dependent cells. Retrievals in this case will still show the old value until the block is cleared or recalculated and then retrieved again.
Hence it is recommended to run CLEARBLOCK DYNAMIC after a data load.
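
A minimal sketch (the FIX members are assumptions), clearing only the dynamically calculated-and-stored blocks in the region just loaded, so that the next retrieval recalculates them:

FIX ("Actual", "FY15")
    CLEARBLOCK DYNAMIC;   /* clears only Dynamic Calc and Store blocks in this region */
ENDFIX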
Q: How does this behave with implicit retrievals?
Ex.
Market (Dynamic)
                East +(Dynamic Calc and Store)
                                X
                                Y
                                Z...
                               
                West +(Dynamic Calc and Store)
                                P
                                Q
                                R...
                               
When the 'Market' value is retrieved, the East and West values are CALCULATED but not STORED, because they were retrieved only implicitly as part of Market. East and West values are calculated and stored only when they are explicitly retrieved (e.g. after a CLEARBLOCK DYNAMIC).
As a recommendation, do not tag dense members as Dynamic Calc and Store.
Neither type of member can be used as a target (LHS) in a calc script.

During a CALC ALL or CALC DIM, calculation of Dynamic Calc and Dynamic Calc and Store members is bypassed.

If you add a Dynamic Calc member to the outline, a restructure is not required and the addition has no effect on stored blocks.

You can tag any member as Dynamic Calc or Dynamic Calc and Store, except the following members:
Level 0 members that do not have a formula
Label-only members
Shared members

Using Dynamic Calc and Store may slow initial retrieval; however, subsequent retrievals are faster than for Dynamic Calc members. Use Dynamic Calc and Store instead of Dynamic Calc for the following members:
An upper-level sparse dimension member with children on a remote database.
Essbase must retrieve the value from the remote database, which increases retrieval time. See Dynamically Calculating Data in Partitions.
A sparse dimension member with a complex formula.
A complex formula requires Essbase to perform an expensive calculation. Any formula that contains a financial function or a cross-dimensional member is a complex formula.
If users frequently retrieve an upper-level member of a sparse dimension, speedy retrieval is important.


Q: Design recommendations for the outline
Hourglass model
Use Dynamic Calc for dense members, at parent level or at level 0 with simple formulas
Use Dynamic Calc and Store for sparse parent-level members and members with complex formulas
Use Label Only wherever possible


Q: What are the different types of variables used in Essbase and Planning?

Substitution variables
Plain text with no association to a dimension; the variable is replaced by the value (text) it holds at run time. These variables are also visible in Essbase EAS.

User variables
User-specific variables must be associated with a dimension, hence they hold only dimension member names as values; they cannot hold arbitrary text. They can be used in web forms for user-specific dimension member selection.

Variables as run-time prompts
Enclosed in square brackets. There are two types: Replacement (RTP) and Execution. RTPs are also associated with dimensions.
Variables can exist at the global, application, or database level.

Q: What is the precedence of RTPs if two variables with the same name exist at two levels?

Q: What is the difference between ASO and BSO, elaborate on it.


Q: What is the security precedence in Planning?
Precedence issues can occur when:
- a user is part of two groups having different access to the same object, or
- a user is given direct access to an object while also inheriting access through group membership.
In such cases the rule of thumb is: None overrides Read and Write, and Write overrides Read. Individual access overrides inherited access.


Q: How is an ODI variable declared and used as a run-time prompt?
Create a variable and drag it into a package, then select the 'Declare Variable' option on the variable step and generate a scenario. Generation will prompt for the variables to be used; select the option. When the scenario is executed, it will prompt for a value. This can also be done from the command prompt.
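
For the command-line route, a hedged sketch (the project code, scenario name, version, and variable name are all assumptions): ODI's startscen command accepts declared variables as -PROJECT_CODE.VARIABLE_NAME=value.

startscen.bat LOAD_PLANNING 001 GLOBAL "-MYPROJ.V_MONTH=Jan"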

Q: UDA and attribute: what is the difference?

A UDA is a text attribute used to identify a peculiarity associated with a member. It is limited to text format, cannot be rolled up or used for cross-dimensional reporting, and cannot be arranged in a hierarchy.
E.g.: a text UDA of Currency can be attached to an entity; retrievals can then be filtered using the UDA.
Attribute: same idea, identifying a distinct characteristic of a member, but with additional functionality: attributes can be used for cross-dimensional reporting, arranged in a hierarchy, and can be of text, numeric, date, or Boolean type.
E.g.: an Entity dimension can be associated with an attribute 'Tier': Tier 1, Tier 2, Tier 3.
Data can then be rolled up according to tiers.
Can the same functionality be achieved using alternate rollups?
Yes, but maintaining an attribute hierarchy and cross-dimensional reporting cannot be achieved that way.
Technical note: attributes can be associated only with sparse base dimensions, and the attribute calculation members (Sum, Avg, etc.) are dynamically calculated.

Q: Give a functional example of UDA and attribute.
A UDA, as the name suggests, is a text attribute used to classify a member based on a peculiarity. Attributes do the same but with additional functionality: they can be Boolean, date, text, or numeric.
Use a UDA when we don't want to roll up data based on the classification, e.g. a currency assigned to a country dimension; we will not roll up data by currency.
The same country dimension can have an attribute based on population, or a product dimension can carry elaborate classifications which roll up to a parent.
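
A hedged calc script sketch of how each is referenced (the dimension, member, UDA, and attribute names are assumptions):

/* FIX on all entities carrying the "USD" UDA */
FIX (@UDA(Entity, "USD"))
    "Budget" = "Actual";
ENDFIX

/* FIX on all entities associated with the "Tier 1" attribute member */
FIX (@ATTRIBUTE("Tier 1"))
    "Budget" = "Actual";
ENDFIX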

Q: How does tagging the Accounts and Time dimensions help?
Once a dimension is tagged as Accounts or Time, those dimensions are calculated first, irrespective of their position in the hourglass model and their sparsity or density.
In addition, Accounts and Time members get more properties that serve additional functional purposes:
First: time balance properties: TB First, TB Last, Flow, Average, Fill, etc., plus the optional Skip setting when a TB property is selected.
Second: variance reporting (the Expense tag).

Time dimension: Dynamic Time Series for period-to-date values such as Y-T-D and H-T-D.

Q: What happens when AGGMISSG (aggregate missing values) is set to ON?
Say a hierarchy looks like this:
Parent
     A
     B
     C
If A, B, and C all hold #MISSING:
With SET AGGMISSG OFF, the parent is overwritten only when at least one child has a non-missing value, so data loaded at the parent survives. With ON, missing values are consolidated and the parent is always recalculated, even to #MISSING. Aggregating missing values does not help much in dense aggregations but is a huge improvement when the aggregation works on sparse dimensions.
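
A minimal calc script sketch (the dimension name is an assumption):

SET AGGMISSG ON;    /* consolidate #MISSING values: faster sparse aggregation,
                       but overwrites any data loaded at parent levels */
CALC DIM ("Market");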


Q: Are there any restrictions on using specific member names?
Yes. Keywords can accidentally be used as member names, which can cause problems during application refresh or LCM export/import.
As an example, using 'Count' or 'Date' as member names can create problems if we later decide to use attribute dimensions of the same name.

Q: Can we use Dynamic Time Series on custom period dimensions?
Yes; make sure the year dimension is named 'Years' for it to work properly.




Wednesday, July 22, 2015

Essbase Hourglass Model, Why ?

Ever wondered why people in the Essbase kingdom emphasize that the outline be arranged in an 'hourglass' manner?
There is no exact answer; most explanations emerge from debates and discussions with experts, and some others from interviews. There is no document which pinpoints the correct answer.
But if the block calculation order in the Essbase DBAG is anything to go by, I think we might just be close to the right answer, albeit only half of it.

So what does the DBAG say?
It says that Essbase numbers blocks sequentially: 0, 1, 2, 3...
The numbering starts as soon as Essbase encounters the first sparse dimension, going from the top of the outline downwards, and it is done irrespective of whether a block has actually been created. That is how we get the total number of potential blocks in any cube.
Take an example of the standard Sample.Basic application. The hierarchy is arranged as

Account (Dense)
Period (Dense)
Product (Sparse)
Market (Sparse)

For the sake of convenience, let's say Product has three members and Market has 10 members:

Product
P1   P2   P3
Market
M1   M2  M3  M4 .................    M10

So the block numbering starts like:
0 : P1 -> M1
1 : P2 -> M1
2 : P3 -> M1
3 : P1 -> M2
4 : P2 -> M2
5 : P3 -> M2
6 : P1 -> M3
7 : P2 -> M3
8 : P3 -> M3
......

And so on.

Now, whenever a calculation takes place, blocks are brought into memory starting the count from zero. Be it CALC ALL, CALC DIM, or a FIX, Essbase always brings blocks in the sequence decided by the index. (How parallel calculation behaves here is something we need to research further.) A sparse restructure will renumber the indexes if need be.

Now consider a case where data exists for M1 and M3, but not for M2, and work out the index numbering for the hourglass ordering (Product before Market) versus its opposite (Market before Product). In the hourglass ordering, the M1 blocks are numbered 0-2 and the M3 blocks 6-8; in the opposite ordering, the same data is scattered across the numbering at 0, 2, 10, 12, 20, and 22. So the existing blocks are brought into memory much sooner, and more contiguously, in the hourglass model than in the other one.

Consider a scenario where we put a FIX statement on Product and Market, say P3 and M3: the block is found earlier in the hourglass ordering than in its opposite.
All in all, we can find some answers as to why it is beneficial to arrange sparse dimensions in the hourglass model.
How does the same help for dense dimensions?
To be continued...

Wednesday, June 10, 2015

Essbase Performance Improvement: Basic tips

Recently we installed EPM 11.1.2.1 on a spanking new server running Windows 2008 R2. The first question would be: why 11.1.2.1, and why 2008 R2, in the age of 11.1.2.4? Well, money makes the world go round, and there are clients who don't feel the need to upgrade unless it is absolutely necessary.

Things looked good initially, but soon we found some serious performance issues with the new Essbase server. Following are the areas we attacked to improve its performance.

Server power mode should be set to High Performance:

A text data export was taking around 30 minutes on the new server (on the old one, the same file would export in 4 minutes). Changing the power plan from Balanced to High Performance reduced it to 22 minutes. Still pretty slow.

Priority of the ESSSVR process set to 'Below Normal'

Why ESSSVR would spawn at 'Below Normal' priority was a mystery at first, and an SR with Oracle didn't yield the expected results.
We then checked the automated task which starts Essbase on server startup. By default, tasks scheduled in Windows 2008 start at 'Below Normal' priority. We had to export the task's XML definition, increase the priority, and set it to 'Normal'.

Drive speed

The write speed of the disk hosting the app folder was abysmal. We changed the application folder location in Essbase to another volume, a SAN drive, which is much faster than the drive attached to the server.

RAM increased

From 32 GB, the RAM was increased to 128 GB. There is now a lot of free memory available.


MEMSCALINGFACTOR

This is a secret weapon most of us ignore. This essbase.cfg parameter scales the RAM allocated to the data cache, index cache, and data file cache beyond the 4 GB maximum that can be set through the EAS console.

These changes have brought the new server on par with the old one; in fact, in certain cases it outshines the older one by a good margin.

Cheers !
NJ

Thursday, May 28, 2015

Member selection issues with ASO cube and EPM 11.1.2.1 Patch 106

Since I have nothing better to do, I have been assigned the task of installing 11.1.2.1 on a new set of servers. It is a typical lift-and-shift installation and configuration. As easy as it might seem, things do get nasty, especially when you haven't configured 11.1.2.1 before and aren't aware of the issues associated with the release.

So, coming to one of the most basic issues when installing the vanilla 11.1.2.1:
When using Smart View ad-hoc analysis, the POV selection box is either visible as a separate box or hidden. You cannot place the POV members in the first row, as in the Excel add-in.

After a brief search on the internet, I found this issue is resolved in patch 102 of 11.1.2.1.
It has to be applied on the Essbase server as well as on Provider Services.

Everything is working like a charm now !
Then, suddenly, a mail comes: member selection for ASO cubes is not working!
Clicking anywhere in the member selection box gives an 'Invalid argument' message.

This time, an internet search is not helpful; this error has not been documented anywhere.
Arrrgh! There is another patch, 106, that I will be applying soon on top of the existing 102. I am hoping it will solve the problem.

Update: the issue was resolved after applying the patch!

Fingers crossed !


Monday, May 25, 2015

Using DNS CNAMEs for EPM? Here's what you need to do.

DNS CNAMEs are canonical names given to servers so that they are easily identifiable. E.g. it's easier to remember Essbase.production.thexyzcompany.com than us-esb01v-trg.thexyzcompany.com.

The best thing about CNAMEs is that whenever a migration or server change happens, users don't have to worry about changing the connection details in their MaxL scripts or VBA macros.
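
For illustration, the underlying DNS zone entry for the example above would look something like:

essbase.production.thexyzcompany.com.  IN  CNAME  us-esb01v-trg.thexyzcompany.com.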

Using CNAMEs during installation: at the time of installation, using CNAMEs is best avoided (unless you have a replicated instance). If a CNAME mapping is later changed, you must make sure that everything still works.

Using CNAMEs by planners:
Although users can change the server names to CNAMEs in their URLs and scripts, you will have to modify the 'datasources.xml' file from Shared Services and rename the connections, replacing the server names with the DNS CNAMEs. Not that it causes any problem, but in the 'Essbase server' or 'Planning' node of Smart View shared connections you will otherwise see the server name instead of the CNAME, something that might confuse users.
 
Check out this blog to see how datasources.xml is modified; stalwarts have been doing this for ages:
http://www.orahyplabs.com/2011/03/how-to-get-smartview-1112-shared.html

Do note that the same file-based setting is not available in EPM 11.1.2.4, since the information is stored in the Shared Services Registry database.



Cheers !



Thursday, April 2, 2015

FDMEE Basics : Getting Started with this tool



Recently, I’ve been too busy with mundane tasks: meetings (with old and powerful men) which are usually futile, health assessments of projects which are already in good health, giving installation advisories to people who won’t bother to read the installation documentation even once, application consulting that is available all over the net, etc.

Bored, I thought of learning something on my own, just as I did with ODI.
Now, I had done some FDMEE review in a project earlier (basically, asked an expert to show me how it works) but had never quite worked with it myself. I readied an 11.1.2.4 instance and got started. I will be posting my experience here, hoping it helps beginners like me.

Note: This is a very basic starter kit for disgruntled employees like me who are not given any training and have to learn things on their own (just kidding). We are not going to ‘deep dive’ into FDMEE, rather just do a small ‘snorkelling’ session which will clarify the understanding of this tool and the logical flow of how things are done. I am going through the admin guide while I post this, so I am not sure if all the steps below are absolutely correct.

So, experts can give me their valuable inputs!

Our source is a Flat file (test crap) and target is a classic planning application (some PoC application created ages ago)




FDMEE Self Training : Level: Basic
To start with FDMEE, we first need to understand the flow of things. As in ODI, there is a set path of doing things in a logical sequence; geeks call it an ‘algorithm’. The first step is ‘Setup’.
In this session, I am going to take the source as a flat file. Now observe where we are:
- Data Management -> Setup -> System Settings -> Profile Type: File
- The application root folder path to be specified should be on the server where FDMEE is running
- Create Location Folder is set to True
- Do not forget to click ‘Save’ every time



The next step is to create a file source system in FDMEE.
I’ve kept the ODI context as GLOBAL (in caps) as that is the default one. You need some ODI understanding for this, but those who don’t know ODI can safely keep it as GLOBAL. Just remember that ODI keeps all codes in capital letters.


Select the source system type as File. Observe the available options for Source: you will not see ‘EPM applications’ among them. Why? Read on to find out.
This step can be compared to setting up the physical schema in ODI’s Topology manager. The only difference is that here we are doing it only for source systems. Does that mean EPM applications cannot be a source in FDMEE? Definitely not; in the next few steps, you will get the answer.



After setting up the Source System, we move on to Target Application. Observe the subtle difference: the source is a SYSTEM while the target is an APPLICATION. I click the “Add” button here and select Planning. (Note the target applications: all are EPM.)



I then select the desired Planning application.
Note: This page takes some time to load; if it does not load, you may want to clear your browser history, cookies, and offline data.


Once the application is selected, the details are automatically populated. For beginners, we will only focus on some parts of this page. Observe the column headers under ‘Dimension Details’:
Dimension Name: the dimensions present in your Planning application appear here.
Target Dimension Class: FDMEE has its own way of segregating the target application’s dimensions into classes such as Account, Generic, Entity, etc. It automatically maps these classes to the target dimensions. We will leave it as it is.
Data Table Column Name: don’t think about it much right now; leave it as is.
Note: you can also add more rows if you want to do lookups or filtering. More on that in later sessions.



Then comes the most important page: Import Format.
In ODI terms, this is where you build the interface, i.e. map source columns to target columns. Click the Add button to start.
Before that, note that I have kept a test file in the folder which was specified in the first step under System Settings.
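
For illustration, a hypothetical comma-delimited test file (the column names and values are my own assumptions, not the actual file):

Account,Entity,Product,Data
Sales,NewYork,P100,1000
Sales,Boston,P200,2500
COGS,NewYork,P100,400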




Click the Add button (I have scrolled down a little); you will see that the ‘Details’ area needs to be populated.
Here is the answer to the question I asked earlier: ‘Why can’t I see EPM applications in the Source System selection? Can’t I use an EPM source?’
Well, you can. See the Source Type option. It can be either:
ERP (file, relational table, etc.), or
EPM (Planning, Essbase, etc.)
If you select ERP, whatever systems you added in the Source System step will be shown.
If you select EPM, whatever applications you selected in Target Applications will be shown.
You need to click Save before you proceed any further. If everything is fine, the Mappings area will be populated.



Now, everything will be blank here. Click the Build Format button. A file browser will open, and you will see the file I had placed on the server. You can also upload a file onto the server from your desktop. Select the desired file and click OK.


Let the Mapping Games begin !!

Select (as in, highlight) each header one by one. Once a header is highlighted, click the Assign Dimension button, enter the source dimension name (anything logical), and select the target dimension it will be mapped to.
Also make a note of the field number (column number) to check that you are not going wrong anywhere.


Do the same for the rest. Ignore columns which do not need to be mapped.

The Trap: I want to map the period dimension in the source to the Period dimension in the target. But wait, I can’t see it in the list... hold on...
Oopsie daisy! FDMEE does not allow me to map the Period, Scenario, and Years dimensions at this point (the Import Format window).
So how do I map them? We will see later.
Right now, we will just map all the other columns. And yes, map the Data column to Amount (I added the Data column to the file later on).




Once the mapping is done, click the Save button.
We will keep the ‘Expression’ tab for a later day. Right now we will only focus on understanding the flow.


Location tab: This is where you set the source-destination combo. So, whenever you select a ‘Location’ going forward, it means you are selecting a predefined combination of source and destination. In my case it is a flat file and a Planning application called ‘AIPOC’ (don’t bother about the name).

In this window, we select the ‘Import Format’ that we created earlier. The target and source are populated automatically upon selection. Leave the other sections as they are.



The new ‘Location’ can be seen created. Don’t forget to save.

Period mapping: Remember I mentioned that FDMEE does not allow you to map the Scenario, Year, and Period dimensions in the ‘Import Format’ section? Well, it has a dedicated window for mapping periods and years.
For beginners, let’s focus only on basic period mapping. In the Global Mapping tab, you need to specify (an illustration follows this list):
- Period Key: the last day of the month to be mapped
- Prior Period Key: the last day of the month before the month to be mapped
- Period Name: hard-code it as desired
- Target Period Month: hard-coded; should match the destination (Planning app) dimension member
- Year Target: hard-coded; should match the destination (Planning app) dimension member
Ignore the fields which are blank; the ones mentioned above are mandatory.
This exercise is to be done for all the periods in your calendar. As an example, I have entered only a few.
As of now, I am not sure if there is any other way of doing this, but manually entering every combination is a time-consuming activity.
Save each entry, or you may lose all the data even if just one entry is wrong.
You will have to repeat the same in the Application Mapping tab as well.
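
For illustration, two hypothetical Global Mapping rows (the dates and member names are assumptions for a Jan/FY15-style Planning calendar):

Period Key    Prior Period Key    Period Name    Target Period Month    Year Target
1/31/2015     12/31/2014          Jan-15         Jan                    FY15
2/28/2015     1/31/2015           Feb-15         Feb                    FY15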




Next, we come to Category Mapping, where we map the Scenario dimension members (remember we didn’t map Scenario in ‘Import Format’). Pretty simple: just keep adding rows, specifying the Category and the Target Category. The Target Category should match the exact text of the Scenario dimension member.
Do the same in the Application Mapping tab as well.

 

This marks the end of setting up the interfaces and the basic activities for running a load job.
To summarize, these are the steps we followed:
1. System Settings: specify the file path and folder location
2. Source System: specify the source (a file in our case)
3. Target Application: specify the target application (Planning in our case)
4. Import Format: map source columns to target dimensions (except Year, Period, and Scenario)
5. Location: create a predefined source-target combination based on the Import Format; this will be used later in the workflow
6. Period Mapping: specify the Year and Period names for the target
7. Category Mapping: specify the Scenarios for mapping in the target