Numeric vs String Attributes – which is more efficient?

Introduction

An interesting question was posted on the TM1Forum about the most efficient way to craft a rule. This raised the subsequent question of how TM1 attributes are stored behind the scenes, and the impact of that on performance.

The documentation is vague on this point, and although the arguments presented in the TM1Forum post made sense, there wasn't a definitive answer.

I thought I’d put it to the test.

The Method

I set up a cube with a large item dimension (4,000 elements) and a measures dimension containing an element for each calculation technique I wanted to test. I then wrote a rule for each measure, using one of the methods suggested in the TM1Forum post, as follows:

  • Measure 1: Used an IF statement to test if the category name ends with a ‘1’ or a ‘2’, and returned 1 if it does.
  • Measure 2: Added a numeric attribute to the category dimension which would contain 1 if the rule should return 1.
  • Measure 3: Added a string attribute to the category dimension which would contain ‘1’ if the rule should return 1.
  • Measure 4: Used the string attribute, but converted it to a numeric value before checking if it was 1 or 0.
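
For reference, here is a rough sketch of the four rules. This is illustrative only — the dimension name 'Category', the attribute names 'Flag N' and 'Flag S', and the exact end-of-name test are assumptions based on the descriptions above, not the actual test code.

# Measure 1: hard-coded IF with two SUBST calls on the category name
['Measure 1'] = N: IF(SUBST(!Category, LONG(!Category), 1) @= '1'
                    % SUBST(!Category, LONG(!Category), 1) @= '2', 1, 0);

# Measure 2: numeric attribute used as a flag
['Measure 2'] = N: IF(ATTRN('Category', !Category, 'Flag N') = 1, 1, 0);

# Measure 3: string attribute compared as a string
['Measure 3'] = N: IF(ATTRS('Category', !Category, 'Flag S') @= '1', 1, 0);

# Measure 4: string attribute converted with NUMBR before the comparison
['Measure 4'] = N: IF(NUMBR(ATTRS('Category', !Category, 'Flag S')) = 1, 1, 0);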

I crafted a cube view that contained roll-ups of all the dimensions except the measures. I then calculated the cube view for each measure and timed how long it took to get a result.

I repeated the test three times for each measure, arbitrarily altering the rule and re-saving to ensure TM1 wouldn’t use cached values.

The Results

Interestingly, the test seems to confirm the assertion in the TM1Forum post that numeric attribute values are actually stored as strings and converted. Here are the numbers, in seconds and as a percentage of the total, for the detail-oriented:

                 Measure 1   Measure 2   Measure 3   Measure 4   Total
Test 1                  39          91          88         108     326
Test 2                  43          99          97         125     364
Test 3                  43         100          97         125     365
Average                 42          97          94         119     352
% of total             12%         27%         27%         34%    100%

Note:  the discrepancy between the first test and the subsequent ones can be explained by the fact that my laptop was running on battery for the latter ones.

It is clear that using a simple IF statement, even one containing two SUBST calls in its conditional, is more than twice as efficient as using an attribute to determine behaviour. Of course, the attribute approach brings many benefits, so in many cases it is still worth the overhead.

Converting the attribute to its numeric equivalent using NUMBR within your rule is markedly slower than comparing the string without conversion, but there is very little difference between using a string attribute and a numeric one. One is roughly as efficient as the other when reading from the attribute cube.

This means that numeric attributes are indeed most likely stored as strings, but the overhead of the conversion is negligible, and does not operate in the same way as the NUMBR function.

I would guess that internally TM1 skips validation of the string during the conversion, as the UI makes it impossible to enter a non-numeric value into a numeric attribute. The NUMBR function can’t do this, as it can accept any string as input.

I found these results very interesting, and they give us a framework to determine which method is best practice.

Conclusion

So what can we take away from this small test? Here are a few tips:

  1. Using hard-coded logic is likely to be more efficient. However, hard-coding works against maintainability, so only do this if you're sure that neither the logic nor the elements involved will ever change. That is a big assumption, so be very careful with it; even when you can be sure, choose this method only if speed is the number one priority in the model.
  2. When flexibility is needed, use a numeric attribute as a flag to determine rule behaviour. Since there is no significant overhead to using a numeric attribute over a string, the numeric is the better choice. TM1 could one day change the way it stores numeric attributes, which could make this approach more efficient than storing the flag as a string; it is unlikely, however, that any change in TM1 would make a string attribute more efficient to read than a numeric one.
  3. Never convert your attributes manually using the NUMBR function. There is no benefit to this method, and it is significantly slower, so steer well clear of it.
I hope this article helps you make the right call when writing dynamic rules in the future! Happy modelling!
I’ve attached the model I used for testing, and the spreadsheet containing the results I obtained.

TST Model Data.zip (69.00 kb)

TM1 Attribute Tests.xlsx (10.09 kb)

From Agile to Anarchy (and back again)

Introduction

TM1 development has undergone a subtle evolution over the years.

As a relatively early adopter of the technology (I still refer to TM1 as an Applix product occasionally, and I remember the original version of TM1Web), I've watched this evolution with interest, and I feel that what I have been doing with Flow is an attempt to bring that evolution full circle and get back to those early glory days.

In this article, I reminisce, wax lyrical, and take a look at the present and future state of selling and implementing TM1.

Warning: this post may contain heavy doses of nostalgia!

Agile Roots

Anyone else remember a time when the typical TM1 implementation went as follows:

  • Get the client’s general ledger accounts and data
  • Write a few TIs to create the account and cost center structures, and suck in the data
  • Build some Excel reports to show the new data off
  • Hook up a few manual input templates for the tricky stuff
  • Send the users to a TM1 training course
Yes, these were simpler times, times when “Excel on steroids” was the sales pitch and demonstrating the “what-if” feature would get genuine wows from the crowd.
 
We used to sell in a couple of meetings, build POCs in hours, often while the potential client watched, and put together an entire system in weeks rather than months.
 
Perhaps we can remember them fondly, even wistfully. But, sadly, it has been a long time since I was involved in a project that simple, and I believe those kinds of projects are, for the most part, behind us.
 
Now businesses expect a full budgeting and planning application, capable of multiple scenarios and rolling forecasts, and able to collect and collate data from many disparate sources around the globe.
 
TM1 has evolved somewhat to try to meet these needs, but have we evolved as consultants?

Agile Decline

As the Agile methodology became popular in IT and software development projects, those of us in the TM1 sphere were starting to realize we were becoming less and less agile.

I recall speaking on the phone with the owner of a TM1 distributor, discussing the possibility of working with them. This must have been two or three years ago now. To my surprise, he started talking about sitting with the customer on a time and materials basis, and building the model with them as they watched and participated.

Of course, I said, “you can’t work like that! We’ve got to lock down the requirements in a formal document, perform a technical design to those specifications, and restrict any modification with a formal change request process!”

It was at that point, in the back of my mind, that it hit me how much TM1 development had changed. The willingness to sit down with a customer, discuss their needs, and build a solution had been replaced with a fearful and almost paranoid IT mentality.

I realized that TM1 modelling and development had become as complex as software development projects, and had evolved to encompass the same rigid processes. The very thing that had originally attracted me to TM1 development — the freedom to build a solution without inflexible requirements — was now gone.

The Problem

So how did we lose the Agile edge that used to define and differentiate TM1 implementations? How did we go from a strong customer focus to formalization, obfuscation and general ass-covering?

The answer is simple. TM1 got bigger — and I’m talking bigger in every way.

Firstly, it was acquired by Cognos, then IBM, and was suddenly thrust into the light of big business. No longer was TM1 the surprising underdog. Now it was expected to go head to head with its enterprise-ready robust big brothers and hold its own.

Concordantly, TM1 started getting larger, more complex implementations. Pre-sales guys who had once gotten away with using the word “scalable” to mean you could add lots of RAM were now being asked if TM1 could be used in a server farm, collecting data from thousands of disparate data sources across global WANs, to calculate an entire organization’s planning, forecasting and consolidation in near real time.

And as a result of all this, we as TM1 implementors got scared. And those of us with an IT background knew exactly what to do: add more layers of process.

However, TM1 did not have the tools to support the Agile processes we were used to following. Deployment was done by manually copying files. Testing was done by manually clicking a mouse. And demonstrations to the customer were performed sparingly, as they took much time to set up and present.

Worst of all, providing any kind of workflow for the customer was embarrassingly lacking. Sure we could fire off external scripts to send out email notifications or SMSes, but the solutions were hardly robust or maintainable.

So we fell back on design and documentation as the crutch to get us through. Write reams of documentation, force the customer to sign off, then quote and build based on what was "agreed".

The fact that describing a financial model in a generic way was often more difficult than building it was neither here nor there.

Reclaiming Agile

Many old-school TM1 implementors have noticed this change, at least on an instinctive level, and tried to develop processes and methods to get back to the old ways. However, most of these were just band-aid solutions, and fell short of tools found in other areas of software development.

Watching this with frustration over the past few years led me to take a step back and look at the bigger picture and think through the problem without letting myself be clouded by prior assumptions.

Flow OLAP is the result of those musings, and we hope that our partners are finding value in the tools and applications we've released so far.

However, this is just the tip of the iceberg. Keep giving us your support, and we promise to keep innovating until TM1 development has come full circle, and we can reclaim our Agile glory days!

Hey, I warned you about the nostalgia, didn’t I?

Techniques for Building Generic Models

What is a Generic Model?

Generic modelling has the goal of satisfying customer requirements in a way that takes into account future needs and enhancements.

Often while gathering requirements, a BA will focus on the customer’s short term needs without considering wider-reaching goals, enhancements, and ongoing maintenance.

In order to use generic modelling techniques, we need to include as much information as possible in our BRDs and use it to design a system that will be more robust, adaptable, and easier to work with and use in the future.

But wait, I hear you say, customers are all different! In fact, each one demands a completely new implementation from scratch! Well, sure, customers will always tell you that their needs are unique and they do things in unconventional ways, and most of them DO have their quirks.

But at the end of the day financial applications end up as one or more lines in GL, no matter how complex the underlying calculations and logic. Isn’t a version a version? Isn’t a currency a currency? Isn’t a year a year, and a product a product?

Legislation and international business standards drive the types and techniques of financial reporting that businesses must produce, so there is bound to be commonality from one customer to the next, even if they are unaware of it.

The idea is to take this commonality and abstract it using generic modelling concepts, in order to reduce the painful redundancy of starting each development cycle from scratch.

This article focuses on the basic development techniques and disciplines you can use in TM1 design and development that will help you achieve a more generic result.

Benefits of Generic Models

Generic models are characterized by the following beneficial features.

Reusability

Ever feel like you’re writing the same calculation logic over and over again? Well, chances are, you are doing exactly that.

Let’s face it, currency conversion rules, for instance, are always eerily similar each time you do them. Likewise, rules to roll forward opening balances.

And what about cube structures? Have you ever noticed that a Loans module is similar to, say, a Project Capex module? Not exactly the same, but each is a list of repeated payments over time, with various other values calculated from that core structure.

If you build with generic principles, you can easily take your Capex module, copy it, add some measures and adapt the rules to satisfy the requirements of a Loan module. This saves hours of design and development time, and keeps the model consistent and familiar to users.

Adaptability

The more generic a model is, the easier it is to make changes.

For example, if you write generic currency rules for each cube, and even include currency conversion in cubes where the customer has said it is not necessary, it is much easier to support additional currencies if the customer requires them later.

And if you are attempting to take an Agile approach to an implementation, being able to change or update a design quickly, and have development respond, is one of the most valuable abilities you can have.

Maintainability

If a model is designed using generic techniques, it is built from the ground up with maintainability in mind. Your goal here is to allow an administrative user to adapt model behaviour without requiring advanced development skills.

Any rules or processes should remain unchanged, with only the inputs to their calculation logic — be those lookup cubes, attributes, or process parameters — requiring update. This helps the customer maintain their own application, without feeling overly reliant on the original consultants who implemented it.

Deployability

Often you’ll want to change the behaviour of a model from development to staging to production. Generic models only require a data change to update their behaviour, so are much easier to maintain in a multi-tiered deployment scenario.

Reduced Risk

When you reuse the same structures and techniques instead of making code changes to adapt a model’s behaviour, you’re inherently reducing the risk of introducing new bugs into the system. To some extent this also alleviates testing requirements, as you only need to test the reusable portion of the module once.

Downside of Generic Models

Speed/Efficiency

Writing rules or processes in a generic way will inevitably add some degree of overhead to your calculations. This is unavoidable, but can be mitigated with various strategies and intelligent design.

Readability & Handover

Often generic rules and TI processes involve more complex code, with frequent lookups and if statements that can confuse developers who might be new to the project.

Likewise, if a customer wants to hire an in-house developer to maintain the model you've built for them, they may have some difficulty understanding what was done and why in the original design.

This is why it is important that the code is formatted well and commented with meaningful explanations.

How to Build Generic Models

When all is said and done, building models in a generic way is really just a set of disciplines. You need to discard some of your old habits and begin to think generically with every rule and TI you write!

Use lookup cubes and attributes

Hard-coding is the enemy of generic models. If you have a literal string in your rules or TI, you better be sure the thing that string represents is never going to change!

For instance, imagine you are writing a rule that copies the closing balance of the previous year to the current year. Your first thought might be to write a rule like this:

['Opening Bal', '2014'] = N: DB('This Cube', ...,
    'YTD Dec',
    '2013',
    !This Cube Measure
);

Of course this would work, but why limit it to one year? Use an attribute so you don’t have to write the rule again for next year, like so:

['Opening Bal'] = N: DB('This Cube', ...,
    'YTD Dec',
    ATTRS('Year Dim', !Year Dim, 'Previous Year'),
    !This Cube Measure
);

Looks pretty generic, now. But what if you wanted to take this model and deploy it in Australia? The fiscal year starts in July in Australia, so using ‘YTD Dec’ isn’t going to cut it. Assuming you’re using consolidations to tally up your YTD figures, you can restructure the dimension easily enough, but you need to draw your opening balance from YTD Jun.

Again, we can use a lookup to help:

['Opening Bal'] = N: DB('This Cube', ...,
    DB('Lookup Cube', !Year Dim, 'Fiscal Year End'),
    ATTRS('Year Dim', !Year Dim, 'Previous Year'),
    !This Cube Measure
);

You could take these ideas even further. If you want to explore more specific techniques for supporting flexible time dimensions, check out our previous article “Flow’s Date Lookup cube – Back to the Future!” and its sequel, “Back to the Future, Part II“.

Never depend on a particular structure in code

The problem above arises because we have assumed that our YTD values will be structured in a certain way. You can't entirely avoid making assumptions about a few fundamentals of your design as you code rules and TI, but you can certainly minimize them.

For instance, what if, instead of changing the structure when we want to set a different end of fiscal year month, we decided to name our month elements generically? If we simply called our months, ‘M00’, ‘M01’, ‘M02’, etc, up to ‘M12’, would we have a more generic result?

Well, yes, we could directly reference ‘M12’ in our opening balance rule, which would be attached to ‘M00’. This would mean we wouldn’t need to adjust our YTD consolidations to accommodate a different fiscal year start month.
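
As a hedged sketch (keeping the elided DB arguments from the earlier examples), the opening balance rule would then sit on the generic 'M00' element and always read 'M12' of the previous year, with no region-specific lookup required:

['Opening Bal', 'M00'] = N: DB('This Cube', ...,
    'M12',
    ATTRS('Year Dim', !Year Dim, 'Previous Year'),
    !This Cube Measure
);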

Use aliases as display labels

However, the above solution is a little aesthetically ugly and not particularly intuitive!

To solve this, we could add aliases to each of these elements to make the display names fit in with our fiscal year period. When deploying in Australia, we could simply alias our 'M01' element as 'Jul' to update the behaviour, and remove the need for the costly lookups. In this way, our consolidations could remain static and would not need to be changed to deploy to a different region.

You can apply this technique to many other dimensions, where you substitute an arbitrary code for the primary element name and use aliases to determine the display value. This can make models that were appropriate for one client more transferable to another, and makes your development more agile should a customer change their naming preferences.
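
A minimal TurboIntegrator sketch of the idea, assuming a 'Month' dimension with generic element names and an alias attribute called 'Display' (both names are placeholders), might look like this:

# Re-label the generic months for an Australian (July-start) fiscal year.
# Only the aliases change; element names and consolidations stay the same.
AttrPutS('Jul', 'Month', 'M01', 'Display');
AttrPutS('Aug', 'Month', 'M02', 'Display');
AttrPutS('Sep', 'Month', 'M03', 'Display');
# ...and so on, through to:
AttrPutS('Jun', 'Month', 'M12', 'Display');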

Just make sure you save all your subsets with the display alias turned on, and use those subsets in all your important views!

Use standard time dimensions

There has been much discussion about standard time dimensions in the TM1 community, and it seems not everyone agrees. Much of the disagreement is a matter of emphasis: some place more weight on the browsability of cubes and views, while others seek a more flexible and maintainable approach.

Whichever time dimension structure you choose, it is of great benefit to standardize these across your entire module. Even if your users would like to vary the form in which they input data, you can standardize reporting cubes to ensure users get a consistent view.

Check out our previous article “An Item-based Approach to TM1 Model Design” for more information on standardizing reporting cubes across the entire model.

Use a common dimension order

Likewise, it is preferable to keep the order of dimensions as standard as possible. Some developers like to optimize each cube individually by tweaking the dimension order, but in the age of large amounts of available RAM, I believe it is much more preferable to have a predictable dimension order.

This cuts down development time and removes the need for developers to constantly open up cubes or use the DB formula generator to check the dimension order. It also makes rules much more readable and assists handover to new developers or on-site admins.

Use parameters & global variables to define the behaviour of TI processes

TI processes are generally purpose-built, especially those generated with the wizard. But if you put in a bit of extra time to write these processes generically, you can often use them on future jobs and will save time overall.
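
As a small illustration (the parameter and process names are assumptions, not part of any standard library), even something as simple as a clear-down process can be driven entirely by parameters, so it can be pointed at any cube and view rather than being hard-coded to one:

# Prolog of a generic zero-out process.
# Parameters (assumed): pCube, pView
IF(ViewExists(pCube, pView) = 1);
    ViewZeroOut(pCube, pView);
ENDIF;

The same pattern extends naturally to source cubes, filter elements and target versions passed in as parameters or global variables.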

Have a look at our articles "Array Types in Turbo Integrator" and "Returning values from Turbo Integrator processes" for techniques and tricks to make processes work in more generic ways.

Use dynamic (MDX) subsets and saved subsets effectively

Well-designed dynamic subsets are a great way to ensure changes in your element names or dimension structures will have a minimal impact on saved views and the reports that are based upon them.
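
For example, a dynamic subset defined with an MDX expression along these lines (the [Month] dimension name is just an assumption) always returns the current leaf-level elements, so adding or renaming consolidations never leaves the subset stale:

{ TM1FILTERBYLEVEL( {TM1SUBSETALL( [Month] )}, 0 ) }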

Likewise, saved subsets can be utilized to set values such as current month and year. This way, all you have to do is update the named subset, and all the reports and views based on it will reflect the change.

Try to avoid unnamed subsets as much as possible, as these will just lead to confusion and errors down the road.

Create dynamic reports

This is probably a whole article in itself, but when you are writing Excel-based reports, it’s very easy to create them in such a way that they will break if underlying structures or names change.

Create named lists with SUBNM formulae to ensure rows and columns do not get out of sync. Use dynamic paging techniques to display small subsets of data that the user can scroll through. And finally, judicious use of active forms (yes judicious, as Active Forms can become very cumbersome very quickly if you overuse them!) can keep your reports and templates dynamic and responsive.
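
For instance, a report header cell might pick the reporting month from a saved subset with a formula along these lines (the server, dimension, subset and alias names are all placeholders):

=SUBNM("tm1srv:Month", "Reporting Months", 1, "Display")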

There are many other tricks you can use, but the main point is, think through the impact of changes to the underlying model and try to develop strategies to help your reports survive the turmoil!

Conclusion

This article has only scratched the surface of the various ways you can create more generic models, just by adopting a few habits and disciplines in your modelling work.

Sure, there is a bit of extra work up front, but the pay-off is huge and compounds over time. With each implementation you'll add to your bag of tricks, leaving you with more time to attend to the finer details and give your customer a higher degree of satisfaction.

Flow Genesis Terms II – defining Diagnostic Reports, Templates and Deployment Packages

Introduction

Ah, the much-anticipated sequel! Will it exceed the beloved original, like “The Godfather Part II”, or will it be a massive disappointment like “Hannibal”?!

This is a follow-up article to “Flow Genesis Terms I – defining Models and Snapshots“, so please read that article first to get the full perspective.

What is a Diagnostic Report?

The Diagnose tab in Flow Genesis allows you to automatically generate a list of common issues associated with any snapshot you have uploaded. This is a useful check to perform before deployment or UAT, as it can often catch errors that are difficult to detect with the human eye.

You can then save the list into a Diagnostic Report to keep a permanent record of issues found. The file created is referred to as a Diagnostic Report, and contains everything you can see on the screen when you diagnose a snapshot.

Once saved, you can also add Sticky Notes to explain or discuss errors with members of the team. This provides a useful method of collaboration to make sure things are not missed during any particular deployment.

It also provides a way for non-technical team leaders or project managers to verify that no issues exist that have not been sufficiently explained by the developer team.

Note that in the future we will be adding a feature to export a Diagnostic Report into a more easily readable issue-list document, so it's worth saving reports now to take advantage of that additional feature when it becomes available.

What is a Template?

If you’ve ever worked on a multi-country or multi-department roll-out, chances are you’ve already thought about templating without realizing it.

Often when similar models are to be deployed in multiple places, the approach taken is to create one generic template model, then use that as a basis for deployment to the other areas. The generic model can be tested and verified by the customer first, before any area-specific customizations are carried out.

In Genesis, a Template refers to an abstracted Snapshot file which allows users to provide specific customizations for designated objects before deployment.

This is useful in POC development or when starting a project, as it gives you a pre-tested and risk-free starting point for any custom development without locking you into a particular naming standard.

The template file contains all the same data included in the Snapshot, along with instructions on how the template will behave when it is applied.

The output of applying a Template is a new Snapshot containing the customizations specified by the user. This Snapshot can be used to deploy a new server, or to upgrade an existing one with the new names.

We have big plans for the Templating feature of Genesis, including a wizard-based interface for many common customizations beyond naming, so it’s well worth investigating how it can help your business processes now.

What is a Deployment Package?

So you have your model completed, your Templates applied and your Snapshot files created… what next?

That’s where Deployment Packages come in. A Genesis Deployment Package is basically a list of instructions to carry out on a model to convert it from one state of development to another.

More simply put, it stores the differences between two Snapshots so that one can be automatically changed into the other.

The first step toward creating a Deployment Package is to perform a “diff”, or comparison between two Snapshots. Diff is a computing term for an operation that identifies differences between two files, in our case the two Snapshot files.

It is important to note that in almost all cases, the first Snapshot used as the basis for the Deployment Package should be a Snapshot of the server you are deploying to. This ensures all the generated instructions are valid for the target server. It is sometimes valid to violate this rule, but make sure you understand exactly what you are doing.

The second Snapshot should be the one that represents the new state of the target server.

So, for instance, if you have completed UAT on a staging server and would like to deploy changes to production, the first Snapshot should be an up-to-date version of the production server, and the second Snapshot should be an up-to-date version of the staging server. This would result in a series of instructions that would implement all the recent development on the staging server into the production server.

When exported, Deployment Packages have the file extension .xdpd, which stands for “XML Deployment Package Document”. To perform the deployment, you can load the generated file into the Flow Model Packager.

Conclusion

I know sequels are often more convoluted and confusing than the original, so I guess there is a little "sequelitis" happening here!

Hopefully this gives you a better understanding of the terms and features used in Genesis and how you might apply them to your business. Trust me, once you’ve mastered Diagnosis, Templating and Deployment the Genesis way, you’ll wonder how you ever did without it!

Any questions, feel free to use the comments in this article.

Flow Genesis Terms I – defining Models and Snapshots.

What is a Snapshot?

A snapshot is an XML file that contains information about the structure (metadata) of an OLAP model.

Although it sounds like a simple idea, snapshots can have some very powerful applications. They allow the Flow applications to read and display information about a model without being connected to the TM1 Server via the TM1 API. This means tasks can be carried out “offline”, without the risk of disrupting day-to-day TM1 Server functions.

A snapshot file is also a version. That is to say, it is the model at a specific point in time. This makes tasks such as backup, restore, archiving and version tracking possible, and much easier than they would be with a typical TM1 implementation.

Snapshots have been designed to be OLAP database agnostic, meaning they could potentially also describe an OLAP structure in a product other than TM1. However, Flow’s focus is exclusively on TM1 at present.

When exported to a file, Snapshots have the file extension “.xssd”, which stands for “XML Snapshot Document”.

What is a Model?

It’s tempting to assume the terms Model and Snapshot are interchangeable, but they actually have very specific meanings in Genesis.

A Model is simply a container for one or more Snapshots. In its most common usage, you can think of a Model like a project, with the snapshots it contains being the many versions that the project goes through during the development and production life cycle.

Sometimes you might utilize the Model concept in your own way, and that’s ok. Genesis was designed as a tool that lets you work the way you want without prescribing a rigid methodology.

However, generally, we would recommend using the Category field to collect Models into wider groups, such as customer, department or purpose.

What’s inside a Snapshot?

A snapshot contains XML markup to describe the structure of an OLAP Model. From this information, it is possible to perform various analyses of the server structures, automatically recreate the server from scratch, or update an older server with newer structures and code.

To understand snapshots better, you can open one in a standard text editor and review the markup.

You can see standard XML structures within the file, with each “node” having a simple begin and end tag, with optional attributes. Each node can contain other nodes inside it, as in the following example:

<Node att1="val1" att2="val2">
  <Node att1="val1" att2="val2"></Node>
</Node>

Once you understand the basic XML structure, it becomes possible to decipher the contents of the snapshot file. It can be a little difficult at first, because the markup is designed to be as compact as possible, but let’s review the core elements.

The root node is always Database. Hence, the simplest possible Snapshot would be the following:

<Database ID="1" NM="TM1_Server_Name"></Database>

This simply declares the database with an ID and a Name (NM).

Within a Database, a Snapshot can also include Stats, which are pieces of information about the database, and then any Cubes, Dimensions and ETL Processes it contains.

So equally, the following is also a valid representation of an empty server:

<Database ID="1" NM="TM1_Server_Name">
  <Cubes></Cubes>
  <Dims></Dims>
  <Procs></Procs>
</Database>

Each of the “Cubes”, “Dims” and “Procs” nodes can contain multiple markup descriptions of Cubes, Dimensions and ETL Processes respectively. From this simple structure, we can build up a more complete description of a TM1 server, like so:

<Database ID="1" NM="TM1_Server_Name">
  <Cubes>
    <Cube ID="1" NM="Cube1" DimIDs="1 2 3">...</Cube>
    <Cube ID="2" NM="Cube2" DimIDs="1 2 3">...</Cube>
    <Cube ID="3" NM="Cube3" DimIDs="1 2 3">...</Cube>
    ...
  </Cubes>
  <Dims>
    <Dim ID="1" NM="Dim1">...</Dim>
    <Dim ID="2" NM="Dim2">...</Dim>
    <Dim ID="3" NM="Dim3">...</Dim>
    ...
  </Dims>
  <Procs>
    <Proc ID="1" NM="Proc1">...</Proc>
    <Proc ID="2" NM="Proc2">...</Proc>
    <Proc ID="3" NM="Proc3">...</Proc>
    ...
  </Procs>
</Database>

Note that each object has an ID assigned. This makes the markup more efficient: when one object needs to refer to another, it can use its ID instead of the full name of the object.

A good example of this is where a Cube defines the Dimensions it contains. This is denoted by a list of IDs of the Dimensions, as opposed to a list of names, which could become quite verbose.

It is beyond the scope of this article to discuss the detailed structures of each object in a Snapshot file, but hopefully this gives you enough of an overview to get a basic understanding of how a Snapshot file works, and what the various terms associated with them mean.

We are currently working on a whitepaper detailing the complete Snapshot format, in the hope it may form the basis of an open standard within the OLAP modelling community.

What does a Snapshot NOT contain?

Snapshot files are data-agnostic, so do not contain any data. The reason for this is both to ensure data security and separation of purpose — it makes sense that the design of a cube should be separate from the data contained within it.

There are already plenty of other formats that can handle data well, such as CSV or Tab Delimited text files. We recommend using those standards to import and export data to and from your TM1 servers.

Hidden control objects (such as TM1 "}ElementAttributes" cubes and dimensions) are also excluded, but are represented by more generic XML structures so that the information is preserved in a database-agnostic way.

How can I use a Snapshot?

Once you have developed a TM1 model, you can create a snapshot file using the Flow Model Packager.

Snapshot files can then be uploaded to Flow Genesis to be used as the basis for Exploring, Diagnosing, Templating and Deploying TM1 Models.

You can also save the snapshot files for later use, or archive them. If you require version control, Snapshot files are ideal for adding into a Version Control repository, such as SVN, CVS or Microsoft Visual SourceSafe.

Because Snapshot files are implemented with an open standard, you can also build software to read, write and manipulate Snapshot files.

This opens up a whole wealth of application possibilities, which we hope will lead to some innovative software from the TM1 and wider OLAP developer community.

Conclusion

Hopefully this article has given you an insight into Models and Snapshots, how they differ, and how they work within the context of Flow Genesis.

Part II of this article will cover the other file types and standards in Flow Genesis.

Introducing the Flow TM1 Rule Editor by Ricky Marwan.

RuleEditorIcon2We’re proud to announce the Flow TM1 Rule Editor, written and maintained by Ricky Marwan.

The TM1 Rule Editor allows you to edit and save your TM1 rules in an innovative and convenient IDE.

  • Formatting and coloured syntax highlighting make reading and organizing your rules easy.
  • Drag and drop cubes to create DB formulae, or just drag single dimensions to get the bang syntax string.
  • The tool also includes a complete palette of valid TM1 rule functions. Simply drag the function to the rule editor and you’ll get the function string along with parameter hints.

Go here for the download and try it out!

InfoCube Spark – making TM1 server monitoring classy!

Introduction

Today, I spent my time configuring and evaluating Spark Monitor, a new product released by Ben Hill from InfoCube.

I was pleasantly surprised by the simple yet attractive design of the site. Good visual design is something that is often overlooked in this industry, so it's nice to see a site that's clean and functional, without looking plain or ugly.

I checked out the online demo (username: tester, password: abc123) and found there was substance behind the eye candy, so decided to sign up, add my server and see how it all worked.

I thought it well worth covering here, so what follows is a brief description of the service, my experience in setting it up, and a list of gotchas I came across.

What is Spark Monitor?

So what does Spark Monitor actually do for you and your clients?

First up, it gives you a way to constantly monitor your server resources, including hard drive space, memory and CPU usage. Often these benchmarks are the first indicator that something may have gone wrong with a TM1 server, so they can be very useful metrics.

In addition, Spark combines the information from TM1Top into a neat and easy-to-read chart, with a table showing the current threads and users.

There is also a very useful table that shows the most recent TM1 log entries, so you can monitor any Turbo Integrator processes and their return status. You can even perform advanced searches of the log files like you can in TM1 Perspectives.

Spark provides you with all these features in a hosted online environment, so all you need to get started is a web browser. You can even make the stats available to read-only users in your client’s organization.

InfoCube Spark is a hosted-only service at the moment, which keeps things simple, but at the same time might be a problem for highly-secured TM1 servers that can’t get internet access.

How does it work?

Spark Monitor uses a very simple and easy technique to ensure the monitor is constantly displaying up-to-date information.

A Monitoring program is installed on the TM1 server and scheduled to run periodically. This program uses the TM1Top and Windows APIs to gather data about the TM1 server and send it to the Spark web application.

The clever part is that this program pushes all the required server information up to the Spark web application (I assume via a web service), rather than having the web application try to pull data down. This means all you need is a regular out-bound internet connection on your TM1 server, while keeping your TM1 model data and structure secure with your regular firewalls and other protections.

Since Windows Task Scheduler is used to trigger the program to run, you can pretty much schedule it any way you want, to make sure your server is monitored at the frequency you desire.

Setting it up

The instructions for setting up the Spark Monitor are available on the website, but I'll detail a few gotchas I found here so you don't fall into the same traps.

Firstly, you need to make sure you register your account with the Spark website and then add a new server. The names you give here are for your own identification purposes only, and do not necessarily need to match the names on the TM1 server (although it would make sense to name them this way).

This will get you an automatically generated “server key” which is the identifier that links the server entry on the Spark web application with the Monitor program installed on the TM1 server.

The Monitor program does not come with an installer, so I simply copied it into the “Program Files (x86)” folder manually. You can pick any location you like, as it will eventually be run automatically by Task Scheduler and you won’t have to worry about it.

If you run the Monitor program once, it will create a config.cfg for you with the basic settings you need. I found it left out one setting, tm1s_bin, so I had to add that to the file manually. The instructions do a pretty good job of helping you get that set up correctly.

If you create your server instances with the Flow Server Manager, like I do, you’ll need to locate the tm1s.cfg file for use with Spark. To do so, just go to “C:\Users\(your username)\AppData\Local\Flow OLAP Solutions\Flow Server Manager\ServerManagerStore” and search for the correct tm1s.cfg in the sub-folders there.

That reminds me to add an "Open tm1s folder" option in the Server Manager to make this sort of thing easier!

Once you’ve got the config.cfg file set up correctly, you can run the program by double clicking and test the results. If you log in to Spark and see your data, you’ve been successful.

If not, the best way to troubleshoot the problem is to look at the Monitor.log file that gets created in the program folder. That will usually tell you what’s going wrong and give you an idea what is configured incorrectly.

Once the program is running correctly and sending information to the Spark web application, all that’s left to do is schedule the program to run periodically. This is very easy if you follow the screenshots on the help page, but I just need to mention one gotcha which cost me an hour of frustration!

Due to this Microsoft bug (or is it a feature) in Task Scheduler, you need to make sure you don’t include quotes around the “Start In” folder when adding it to Task Scheduler. If you do, you’ll get a very ungraceful failure, with an error code and a help link that goes to a missing Microsoft web page!

My Impressions

This is a very useful tool for TM1 administrators and IT departments, and one which will present well in a sales presentation, especially with a technical or IT audience (assuming you can get internet access during the demo).

The functionality is great, and expanding very quickly, as Ben Hill and his team are working on it actively at the moment.

I got in contact with Ben to discuss the product and give him some feedback. He was very responsive and enthusiastic about the product, and when I pointed out a minor security flaw I found in the system, he had it fixed within minutes.

Next on his to-do list is the ability to create “consultant” accounts, which would allow TM1 partners to create and manage server groupings for multiple customers. This would be a great addition, as the majority of Spark users will probably be TM1 partners or consultants with multiple clients.

At Flow, we applaud Ben Hill and InfoCube for this initiative. It’s great to see other companies giving back to the TM1 community for the greater good of the industry, and will ensure they get our full support.

Feature Requests

To improve the user experience, I had a few items on my feature wish list.

First up, the Monitor program could be improved considerably with the addition of an installer and a configuration UI. This would avoid the need for manually copying files, editing configuration details, and messing around with Task Scheduler. Those gotchas I listed above could all have been avoided with an intuitive setup and configuration application.

It appears to have an “automatic update” application included with it, but I did not test that, as I could not see any instructions for it on the Spark website. It would certainly be nice to have the program automatically update itself if need be.

On the web application side, a few other features would be highly desirable.

A notification system that would email you when certain triggers are met, such as the server disappearing, or memory creeping over a certain % would give the program added depth. If users could subscribe to notifications and even create their own trigger thresholds, all the better.

I would also like the ability to edit server details once they have been added. Right now, if you want to change something, you have to delete the server, then add it again, which means you get a new server key and have to dig into your Monitor config files again.

And last but not least, I’d like to have the ability to make the dashboards refresh on a specified schedule, without having to repeatedly click the browser’s refresh button. Even better, the web application could support dynamic (“ajaxified”) screen refresh, so the charts and other dashboard elements could update without refreshing the entire screen.

Given that the Spark web application already predicts how often your Monitor program is configured to update, I would suspect that this functionality is already in the works.

Conclusion

Minor quibbles aside, the InfoCube Spark Monitor is well worth adding to your TM1 bag of tricks.

It’s a completely free service, so why not take advantage of the value it adds for you and your clients?

I’ll leave you with a few screenshots of the application.

And, as always, happy modelling — or in this case, happy monitoring!

 

Array Types in Turbo Integrator

Introduction

I noticed a 4-year old post over on the TM1 Forum site about simulating array variables in Turbo Integrator.

It has been a persistent topic, so I went back through the thread and was surprised to see that there has been no mention of using character-delimited strings as array variables. I do this often, and it is one of the foundational concepts in libraries like Bedrock, so I thought it was worth covering in some detail.

In this article, I’ll also be using the concepts covered in my Returning Values from Turbo Integrator article, so be sure to read that too if you haven’t already.

Array Approaches

Most scripting and programming languages have support for arrays. Unfortunately, TI is not one of them, and there are many times when such a data type would be advantageous.

The thread on TM1 Forums covers a few approaches, but here are the main ones:

  1. Multiple variables
    This approach is as simple as it gets. Simply define variables with sequential names, such as V1, V2, V3, V4, etc, up to the maximum size you expect your array to be. You can iterate through these variables in a WHILE loop by incrementing an index variable, then having an IF statement for each array variable you have declared. You can immediately see how this can become highly cumbersome and unmaintainable for arrays of more than a few elements, as you need an IF statement for each one!
  2. Using a temporary dimension
    This approach is a bit smarter, as it avoids the multiple IF statements. You simply have your TI process create a temporary dimension and use that to store your array values. The element names can be strings, but if you need to store numbers, you can use elements named Item1, Item2, Item3, etc, and store the value in a numeric attribute. This can be a great approach, but it can be a bit messy and has concurrency issues. You need to create a temp dimension name that is unique each time the TI process is run, and if you're running many TIs frequently, finding a unique name can be difficult. You also need to handle deleting the dimension once you're done.

    It is also possible to use a permanent dimension for this purpose and just create and destroy attributes as needed. Again, this has concurrency issues, as you need unique attribute names, and can be confusing to the user.

    In both cases, it seems a very heavyweight solution for something as simple as arrays, and might not perform as well as a solution that avoids TM1 objects.

    You also need to worry about the maximum number of elements in the item dimension, as that forms the upper limit of the array.

  3. Using a temporary cube
    This approach is similar to the above, but the values are stored in a cube instead of element names or attributes. This has all the same inherent caveats and benefits as the dimension approach, but is slightly more visible and accessible to a user, which can be an advantage in some cases.
None of these solutions are perfect, but they will serve their purpose. There is a fourth option, however, which is the focus of this article.

Using Character-Delimited Strings

The concept behind this is very simple. Instead of using TM1 objects to store arrays of values, we use a string separated by a particular character. A simple example would be “V1,V2,V3”.

In Turbo Integrator, it’s possible to split this string using SCAN to find the locations of the commas and SUBST to split the list into its component values.

This does not use any TM1 objects, and is appealing in that it’s a native TI solution. It should also be relatively fast, as SCAN and SUBST are very simple functions and well-optimized for performance. It is also very easy to pass values between processes using the technique detailed in my previous article Returning Values from Turbo Integrator.
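
As a minimal sketch of the underlying technique (the variable names are mine, not from any library), a comma-delimited list can be walked like this in any tab of a TI process:

# Peel one value off the front of the list on each pass of the loop.
vList = 'V1,V2,V3';
vDelim = ',';
WHILE(LONG(vList) > 0);
    vPos = SCAN(vDelim, vList);
    IF(vPos = 0);
        # No delimiter left, so this is the last (or only) item
        vItem = vList;
        vList = '';
    ELSE;
        vItem = SUBST(vList, 1, vPos - 1);
        vList = SUBST(vList, vPos + 1, LONG(vList) - vPos);
    ENDIF;
    # Do something with vItem, e.g. write it out for inspection
    ASCIIOUTPUT('C:\Temp\Items.txt', vItem);
END;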

There are a few drawbacks:

  • There is an upper limit to string length in TI, which you can hit with large lists. For TM1 10, the limit is 65k characters.
  • Storing numeric values involves a conversion from string to number, which can affect performance
  • Coding such an approach is cumbersome, and you often find yourself writing the same or similar WHILE loops, which clutter up your TI processes
For small lists, the first two points are not a major issue. However, the third point never goes away, and writing the same code over and over makes your processes hard to maintain and error-prone. Since debugging is very difficult in TI, you don’t want to write more code than is absolutely necessary or you can quickly find yourself chasing your tail!
To mitigate this, we need to use a best practice approach to re-usable code. This can be done in TI by creating a series of well-defined library processes. Combining this with the concept of returning values, we can encapsulate all of our array processing tasks in our library processes, avoiding the need to repeat the code patterns and making your code maintainable once more.

An Array Processing Library

Rather than leave the task of writing library functions as an exercise for the reader, I decided to take a shot at writing it myself.
The Flow array processing library contains a number of useful functions to assist in using delimited lists as arrays:
  • ItemRead: reads an item from the array at a specified index
  • ItemCount: returns the number of items in the list
  • ItemAdd: adds an item to the list at the specified index
  • ItemRemove: deletes an item from the list at the specified index
  • ItemReplace: updates the value at a specified index with a new value
  • ItemFind: locates a value in the list and returns its index, if found
Using these basic functions, one can perform most array functions without worrying about the implementation details. I make no warranties about the quality of the library, but it is, at least, a great starting point for a more robust implementation.
It has the following features:
  • Value quoting: What if a value in the list needs to include the character that is being used as a delimiter? The library supports this by allowing values to be quoted. An example would be: “1,2,|3,4|,5”. If you specify the bar character (|) as your quote character, the library will retrieve the third value in the list as “|3,4|”.
  • Opening and closing quote characters: The library supports specifying an opening and closing quote character. This allows you to specify the above example as “1,2,[3,4],5”, which is much more readable. If the closing quote character is not specified, it is assumed to be the same as the opening quote character.
Desired features:
  • Escaping quote characters: Currently the library does not support escaping the quote characters within a value. This means you cannot use either of your quote characters in a value, or you will get unpredictable results. Ideally, the library would detect quote characters within the list values and escape them automatically, and un-escape them when reading them back out.
  • Error Handling: At present the error detection and handling in the library is rudimentary. If a list is badly formed, it would be difficult to detect and resolve in code.
  • Performance: The current implementation reads the entire list multiple times, which can become markedly slower in certain implementation patterns. Ideally, the library would support ways to optimize algorithms and perhaps a caching option. However, as this library is designed for small lists, the law of diminishing returns may apply to such features.

Using the library

A sample process is included to demonstrate the basic functions of the library.

The following code:

pItemList = 'A,B,C,D,E,F';
pOutputFolder = 'C:\Temp';

NumericGlobalVariable('OutputItemCount');
StringGlobalVariable('OutputItem');
StringGlobalVariable('OutputItemList');
NumericGlobalVariable('OutputItemIndex');
NumericGlobalVariable('OutputItemLocation');
NumericGlobalVariable('OutputSearchItemIndex');

vProcessName = 'Flow.String.List.Sample1';
vOutputFile = pOutputFolder | '\' | vProcessName | '.Output.txt';

ASCIIOUTPUT(vOutputFile, 'Working on list [' | pItemList | ']');

ExecuteProcess('Flow.String.List.ItemCount', 'pItemList', pItemList);
vItemCount = OutputItemCount;
ASCIIOUTPUT(vOutputFile, 'Item count: ' | TRIM(STR(vItemCount, 4, 0)));

ExecuteProcess('Flow.String.List.ItemAdd', 'pItemList', pItemList, 'pItemIndex', 4, 'pNewItem', 'Added');
ASCIIOUTPUT(vOutputFile, 'Item added at index 4: [' | OutputItemList | ']');

ExecuteProcess('Flow.String.List.ItemRemove', 'pItemList', OutputItemList, 'pItemIndex', 2);
ASCIIOUTPUT(vOutputFile, 'Item removed at index 2: [' | OutputItemList | ']');

ExecuteProcess('Flow.String.List.ItemReplace', 'pItemList', OutputItemList, 'pItemIndex', 5, 'pNewItem', 'Replaced');
ASCIIOUTPUT(vOutputFile, 'Item replaced at index 5: [' | OutputItemList | ']');

ASCIIOUTPUT(vOutputFile, 'Finding index of item "F"...');
ExecuteProcess('Flow.String.List.ItemFind', 'pItemList', OutputItemList, 'pSearchItem', 'F');
ASCIIOUTPUT(vOutputFile, 'Index of search item: ' | TRIM(STR(OutputSearchItemIndex, 15, 0)) );

ASCIIOUTPUT(vOutputFile, 'Finding index of item "G"...');
ExecuteProcess('Flow.String.List.ItemFind', 'pItemList', OutputItemList, 'pSearchItem', 'G');
ASCIIOUTPUT(vOutputFile, 'Index of search item: ' | TRIM(STR(OutputSearchItemIndex, 15, 0)) );

ASCIIOUTPUT(vOutputFile, 'Listing all current items...');
vCurrentItemIndex = 1;
vCurrentItem = '';
WHILE(vCurrentItemIndex <= vItemCount);
	ExecuteProcess('Flow.String.List.ItemRead', 'pItemList', OutputItemList, 'pItemIndex', vCurrentItemIndex);
	vCurrentItem = OutputItem;
	ASCIIOUTPUT(vOutputFile, '[' | TRIM(STR(vCurrentItemIndex, 4, 0)) | ']' | ' = ' | vCurrentItem);
	vCurrentItemIndex = vCurrentItemIndex + 1;
END;

ASCIIOUTPUT(vOutputFile, 'Listing all original items...');
vCurrentItemIndex = 1;
vCurrentItem = '';
WHILE(vCurrentItemIndex <= vItemCount);
	ExecuteProcess('Flow.String.List.ItemRead', 'pItemList', pItemList, 'pItemIndex', vCurrentItemIndex);
	vCurrentItem = OutputItem;
	ASCIIOUTPUT(vOutputFile, '[' | TRIM(STR(vCurrentItemIndex, 4, 0)) | ']' | ' = ' | vCurrentItem);
	vCurrentItemIndex = vCurrentItemIndex + 1;
END;

Yields the following output file:

Working on list [A,B,C,D,E,F]
Item count: 6
Item added at index 4: [A,B,C,Added,D,E,F]
Item removed at index 2: [A,C,Added,D,E,F]
Item replaced at index 5: [A,C,Added,D,Replaced,F]
Finding index of item "F"...
Index of search item: 6
Finding index of item "G"...
Index of search item: 0
Listing all current items...
[1] = A
[2] = C
[3] = Added
[4] = D
[5] = Replaced
[6] = F
Listing all original items...
[1] = A
[2] = B
[3] = C
[4] = D
[5] = E
[6] = F

As you can see, the sample code is much cleaner and more maintainable than more ad-hoc implementations of the same technique.

Conclusion

There is no perfect solution to simulating array variables in Turbo Integrator, but there are some work-arounds that can work in various scenarios.

Hopefully the library I have provided will help other developers get started using the character-delimited string technique, and will at least serve as an example of reusability and maintainability in Turbo Integrator processes.

Flow.String.List.zip (8.03 kb)

.NET C# TM1API starter kit.

Introduction

One of the most powerful (and possibly overlooked) features of TM1 is that it supports a fully-featured programming API (Application Programming Interface).

This means that virtually any function supported by the server is exposed to external programming environments. Or, put more simply: you can write your own TM1 application, and your imagination is the limit!

Working with the TM1 API in Visual Basic is pretty easy, as the functions are available anytime the TM1 add-in is loaded. This is why a lot of developers gravitate toward working this way, as it is familiar and quick to get started.

Unfortunately, it also comes with the limitation of being hosted in the Excel environment, and cannot produce a stand-alone Windows or Web application.

Recent versions of TM1 include a .NET API that you can use in your applications, but this API does not support all the functions of the native COM API. It was initially developed to support the requirements of TM1Web and supports mainly read-only operations.

Microsoft’s .NET Framework software development platform has grown over the past decade to become one of the most comprehensive framework libraries out there. It’s free to use, supported by Microsoft, and is a great choice for quickly creating native Windows applications.

But how can we get the best of both worlds?

Note: this article assumes a basic level of C# programming capability and access to Microsoft Visual Studio 2010.

The TM1 COM API in .NET

It’s well-known that you can call COM dynamic link libraries from .NET. However, the protocol for doing so can be quite difficult to implement, especially when the DLL in question is not documented in an easy-to-translate way.

In .NET you need to know the underlying names and entry points of every single function call, and then map the COM data types to equivalent .NET data types.

You can analyze the tm1api.dll file using a tool like Dependency Walker, which gives you information about each function contained in the DLL. By default, the parameter and return types are not listed in any useful form, but if you right click in the window and select “Undecorate C++ Functions”, you’ll see that the output becomes a whole lot more interesting.

Here is an example output from Dependency Walker:

As you can see, all the functions are listed out with their associated function names and signatures. However, there are a lot of functions, so typing each one out would take a long time.

TM1 includes a full list of function signatures, but only as a BAS file or a C header file. Neither helps us directly with C#, but it turns out it’s pretty easy to convert the BAS file: I ran it through an online VB.NET-to-C# converter, changed the Longs to ints, and switched any function with a _VB suffix to the C version. Since I’ve already gone through this process, you can use the converted class included in the sample package.
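
To give an idea of what the converted declarations look like, here is a minimal sketch based on two of the functions used later in this article; the class name is illustrative only, and the exact signatures in the sample package may differ:

using System.Runtime.InteropServices;

public static class TM1APISketch
{
    // Sketch only: two of the simpler declarations. The real class in the
    // sample package declares every function from the BAS file this way.
    [DllImport("tm1api.dll")]
    public static extern int TM1SystemOpen();

    [DllImport("tm1api.dll")]
    public static extern int TM1ValPoolCreate(int hUser);
}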

Disclaimer: I make no warranty or guarantee that the included code is complete, correct or bug free. Use it at your own risk.

Loading the TM1API DLL in .NET

Once you have the function signatures listed in a .NET class, you’re about halfway there. You still need to make your program load the TM1API DLL at runtime.

When I first started with the TM1 API, I used P/Invoke and the Windows “LoadLibrary” function to load the TM1 DLLs. This was painful, as I had to load each DLL individually, including every library the TM1 API depended on. As TM1 versions progressed, this list grew, and backward compatibility with previous versions of TM1 became more difficult, as the list of required DLLs was different for each.

When I started working on the Flow Model Packager, I revisited this approach and found a better way. Now I simply P/Invoke the Windows “SetDllDirectory” function, which adds the TM1 API folder to the DLL search path. This way, the program locates all dependent DLLs automatically, irrespective of how many dependent libraries there are.
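
For reference, here is a minimal sketch of this approach. SetDllDirectory is a standard kernel32 function; the class and method names below, and the example path, are purely illustrative:

using System.Runtime.InteropServices;

public static class TM1DllPathSketch
{
    // SetDllDirectory adds a folder to the process's DLL search path, so a
    // plain [DllImport("tm1api.dll")] declaration can then find the TM1 API
    // and all of its dependent libraries.
    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    private static extern bool SetDllDirectory(string lpPathName);

    public static void UseTM1ApiFolder(string tm1BinFolder)
    {
        // e.g. UseTM1ApiFolder(@"C:\Program Files\Cognos\TM1\bin");
        SetDllDirectory(tm1BinFolder);
    }
}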

Using the sample code

Loading the DLL and defining the API function calls are both taken care of in the Flow.TM1.API class provided in the sample code. To use it, take the following steps.

1) Connect to TM1:

TM1API.TM1DLLPath = "[tm1 api path]";
TM1API.TM1APIInitialize();

// Initialize the user session and create a value pool
int hUser = TM1API.TM1SystemOpen();
int hPool = TM1API.TM1ValPoolCreate(hUser);

// Set the admin host for this session
TM1API.TM1SystemAdminHostSet(hUser, "[admin host]");

2) Log in to a given server:

int hServer = 
    TM1API.TM1SystemServerConnect(
        hPool, 
        TM1API.TM1ValString(hPool, "[server name]", 0), 
        TM1API.TM1ValString(hPool, "[username]", 0), 
        TM1API.TM1ValString(hPool, "[password]", 0)
    );

3) Once you have finished all your TM1 operations, you need to close the session:

TM1API.TM1SystemServerDisconnect(hPool, hServer);
TM1API.TM1ValPoolDestroy(hPool);
TM1API.TM1SystemClose(hUser);
TM1API.TM1APIFinalize();
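
Since the pool and session handles should be released even if an API call fails part-way through, a reasonable pattern (a sketch only, reusing the calls shown above) is to wrap the sequence in a try/finally block:

// Sketch: ensure the session, pool and API are always cleaned up.
int hUser = 0, hPool = 0, hServer = 0;
try
{
    TM1API.TM1DLLPath = "[tm1 api path]";
    TM1API.TM1APIInitialize();

    hUser = TM1API.TM1SystemOpen();
    hPool = TM1API.TM1ValPoolCreate(hUser);
    TM1API.TM1SystemAdminHostSet(hUser, "[admin host]");

    hServer = TM1API.TM1SystemServerConnect(
        hPool,
        TM1API.TM1ValString(hPool, "[server name]", 0),
        TM1API.TM1ValString(hPool, "[username]", 0),
        TM1API.TM1ValString(hPool, "[password]", 0));

    // ... perform your TM1 operations here ...
}
finally
{
    if (hServer != 0) TM1API.TM1SystemServerDisconnect(hPool, hServer);
    if (hPool != 0) TM1API.TM1ValPoolDestroy(hPool);
    if (hUser != 0) TM1API.TM1SystemClose(hUser);
    TM1API.TM1APIFinalize();
}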

You can look through the sample project to get a very simple example of a C# application working with the TM1 API.

Make sure you update the app.config file to include the path to your own installation of TM1.

 

Conclusion

The main barrier to entry for programming with the TM1 API is the initial setup of the API function calls, and I think this has been a limiting factor in the proliferation of custom TM1 applications.

Hopefully this sample code will get some creative developers pointed in the right direction so we can see some innovative new TM1 applications out in the wild!

You can download the sample here. If you have any issues or questions, don’t hesitate to drop me a line at team[at]flowolap.com, or make a post in the comments section below.

Back to the Future, Part II

Introduction

Most financial models are based on the fiscal year, rather than aligning with the calendar year.

In many countries the two calendars align, but in others they do not. In Australia, for instance (my home country), we have a Jul – Jun financial year, and New Zealand and Japan each have their own as well.

Propel Planning, Flow’s budgeting and planning application, has always supported changing the starting month of the fiscal year, but we recently made some improvements that led to a better date lookup cube.

I thought I’d spend some time discussing it and provide a download link for those who are interested.

What was wrong with the old one?

Admittedly, there was not much wrong with the original date lookup cube, and it could still be used for most purposes.

However, there was often a need to map from Fiscal Year to Calendar Year and back again to do simple lookups, and we decided it was worth finding a more efficient way to do this, since it’s so core to the Propel Planning system.

To be honest, the former date lookup cube was something I built years ago for use in consulting, where the fiscal year period would be known well in advance of starting the job. Since I started working abroad, I don’t think I’ve had a single implementation where the fiscal year was not aligned with the calendar year, so I never really had to consider it.

However, Propel Planning is a dynamic, generic budgeting and planning system that can be sold world-wide, so it was time to face the music!

The Approach

A fresh look

I took a different approach this time, in several ways.

Instead of creating all the data in Excel and importing it into a TM1 cube, I created all the data in TM1 rules. The logic behind this was that we can easily export the rule-based calculations, delete the rules, then import the data as static numbers, so this technique just gives us an option to keep a rule-based lookup if we ever want to.

In the previous version of Propel, changing the start month was quite a significant effort. We had a Turbo Integrator function to do all the hard work, of course, but it had quite a lot to do – updating attributes, shifting data around in the date lookup, reordering elements. We even had a hidden cube that would provide data to the final date lookup when the TI process was run.

We realized this time around that designing a smarter, more flexible date lookup cube, along with generic Month and Year Period dimensions, would make this configuration much simpler and render most of the tomfoolery in the TI process unnecessary.

Here is the new data structure:

Generic Month & Year Period Elements

The most important part of designing this cube was avoiding the need to rename elements in the Month and Year Period dimensions. Instead of using “Jan”, “Feb”, “Mar” for the Month element names, we used “M01”, “M02”, “M03”, etc. Likewise, the Year Period dimension contains the elements “Y01”, “Y02”, “Y03”, etc.

This gives us freedom to interpret each month element as a particular named month, based on the Fiscal Year. So, in Australia, we would simply set the “Display Name” attribute on “M01” to “Jul”, and similarly for subsequent months.
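
As a quick illustration, a TI snippet for an Australian (July-start) configuration might look like the following; the dimension name “GBL Month” is an assumption based on the lookup example later in this article, so substitute your model’s actual names:

# Sketch only: display names for a July-start fiscal year.
# Assumes a month dimension 'GBL Month' with a 'Display Name' string attribute.
AttrPutS('Jul', 'GBL Month', 'M01', 'Display Name');
AttrPutS('Aug', 'GBL Month', 'M02', 'Display Name');
AttrPutS('Sep', 'GBL Month', 'M03', 'Display Name');
# ...and so on, through to:
AttrPutS('Jun', 'GBL Month', 'M12', 'Display Name');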

Now we can always assume the element name “M01” is the first month in the fiscal year.

But it’s not quite that easy. We still need to make sure our date lookup cube calculates the correct calendar month for each Fiscal month.

Fiscal Month Offset

In order to do this, we use a simple attribute in the Month dimension called “Fiscal Month Offset”. This value is referenced by the date lookup cube when calculating the calendar month. The numeric value represents the number of months to “shift” the calendar month from the fiscal month.

For instance, if our fiscal year starts in July, we put a 6 in this value. The date lookup rules will then map “M01” to the seventh calendar month, July.

This way, we can still look up calendar dates if need be.
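
To make the arithmetic concrete, here is a rough sketch of how such a rule could derive the calendar month number; the measure name and dimension name are assumptions for illustration, not the actual Propel Planning rule:

# Sketch only: assumes a 'GBL Month' dimension with elements 'M01'..'M12' and
# a numeric 'Fiscal Month Offset' attribute as described above.
# With an offset of 6: M01 -> (1 - 1 + 6) mod 12 + 1 = 7, i.e. July.
['Calendar Month Number'] = N:
	MOD( NUMBR( SUBST( !GBL Month, 2, 2 ) ) - 1
		+ ATTRN( 'GBL Month', !GBL Month, 'Fiscal Month Offset' ), 12 ) + 1;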

Calendar vs Fiscal Date Measures

The date lookup cube therefore supports two sets of measures: one for lookups that stay within the fiscal year, and another for lookups to calendar years. The simplest lookup is now the fiscal one, with no need to translate to the calendar month and back again.

Other Improvements

To reduce the number of measures, we also removed the “Backward” and “Forward” measures. This is now handled by the “Offset Month” and “Offset Year” dimensions, which include negative numbers.

This is more expressive, and it simplifies rules that rely on numeric calculations to determine how many months to search forward or backward in time.

Example Usage

 

To look up the year and month three months in the past from a fiscal-based cube:

# Lookup the year
DB('SYS Fiscal Date Lookup',
	!GBL Year, !GBL Year Period, !GBL Month,
	'0', '-3',
	'Target Fiscal Year');

# Lookup the month
DB('SYS Fiscal Date Lookup',
	!GBL Year, !GBL Year Period, !GBL Month,
	'0', '-3',
	'Target Fiscal Month');
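
If you need the calendar equivalents instead, the same lookup can target the calendar set of measures. The measure names below (“Target Calendar Year” and “Target Calendar Month”) are my assumption of how that set is named, purely for illustration:

# Sketch only: calendar-side lookup, three months back.
DB('SYS Fiscal Date Lookup',
	!GBL Year, !GBL Year Period, !GBL Month,
	'0', '-3',
	'Target Calendar Year');

DB('SYS Fiscal Date Lookup',
	!GBL Year, !GBL Year Period, !GBL Month,
	'0', '-3',
	'Target Calendar Month');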

Conclusion

With the new cube structure, it is much easier to look up months and years in the past and future from fiscal-based cubes. At the same time, it is no more difficult to use the cube in a calendar-based design.

Now where we’re going, we… er… don’t need roads… Sorry, I know it doesn’t really fit, I just wanted to say it.