Array Types in Turbo Integrator

Introduction

I noticed a four-year-old post over on the TM1 Forum site about simulating array variables in Turbo Integrator.

It has been a persistent topic, so I went back through the thread and was surprised to see that there has been no mention of using character-delimited strings as array variables. I do this often, and it is one of the foundational concepts in libraries like Bedrock, so I thought it was worth covering in some detail.

In this article, I’ll also be using the concepts covered in my Returning Values from Turbo Integrator article, so be sure to read that too if you haven’t already.

Array Approaches

Most scripting and programming languages have support for arrays. Unfortunately, TI is not one of them, and there are many times when such a data type would be advantageous.

The thread on TM1 Forums covers a few approaches, but here are the main ones:

  1. Multiple variables
    This approach is as simple as it gets. Simply define variables with sequential names, such as V1, V2, V3, V4, etc, up to the maximum size you expect your array to be. You can iterate through these variables in a WHILE loop by incrementing an index variable, then having an IF statement for each array variable you have declared. You can immediately see how this can become highly cumbersome and unmaintainable for arrays of more than a few elements, as you need an IF statement for each one (see the sketch after this list)!
  2. Using a temporary dimension
    This approach is a bit smarter, as it avoids the multiple IF statements. You simply have your TI process create a temporary dimension and use that to store your array values. The element names can be strings, but if you need to store numbers, you can use elements named Item1, Item2, Item3, etc, and store the value in a numeric attribute. This can be a great approach, but it can be a bit messy, and it has concurrency issues. You need to create a temp dimension name that is unique each time the TI process is run, and if you’re running many TIs frequently, finding a unique name can be difficult. You also need to handle deleting the dimension once you’re done.

    It is also possible to use a permanent dimension for this purpose and just create and destroy attributes as needed. Again, this has concurrency issues, as you need unique attribute names, and can be confusing to the user.

    In both cases, it seems a very heavyweight solution for something as simple as arrays, and might not perform as well as a solution that avoids TM1 objects.

    You also need to worry about the maximum number of elements in the item dimension, as that forms the upper limit of the array.

  3. Using a temporary cube
    This approach is similar to the above, but the values are stored in a cube instead of element names or attributes. This has all the inherent caveats and benefits of the dimension approach, but is slightly more visible and accessible to a user, which can be an advantage in some cases.
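
To see why the first approach becomes unmanageable, here is a minimal sketch of the IF-per-slot pattern it forces on you. The variable names and array size are hypothetical.

# Illustrative sketch only: reading the "array" slot at position vIndex.
# Every slot needs its own IF block, so a 50-slot array needs 50 of them.
V1 = 'Alpha';
V2 = 'Beta';
V3 = 'Gamma';
vMaxItems = 3;

vIndex = 1;
WHILE(vIndex <= vMaxItems);
    vCurrentValue = '';
    IF(vIndex = 1);
        vCurrentValue = V1;
    ENDIF;
    IF(vIndex = 2);
        vCurrentValue = V2;
    ENDIF;
    IF(vIndex = 3);
        vCurrentValue = V3;
    ENDIF;
    # ... do something with vCurrentValue here ...
    vIndex = vIndex + 1;
END;
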
None of these solutions are perfect, but they will serve their purpose. There is a fourth option, however, which is the focus of this article.

Using Character-Delimited Strings

The concept behind this is very simple. Instead of using TM1 objects to store arrays of values, we use a string separated by a particular character. A simple example would be "V1,V2,V3".

In Turbo Integrator, it’s possible to split this string using SCAN to find the locations of the commas and SUBST to split the list into its component values.
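
To make the idea concrete, here is a minimal sketch of the splitting loop. This is not the Flow library code itself, and the variable names are illustrative.

# Illustrative sketch: walk a comma-delimited list and pull out each value in turn.
vList = 'A,B,C,D,E,F';
vDelimiter = ',';

vRemaining = vList;
WHILE(LONG(vRemaining) > 0);
    vPos = SCAN(vDelimiter, vRemaining);
    IF(vPos = 0);
        # No delimiter left, so the remainder is the last value
        vItem = vRemaining;
        vRemaining = '';
    ELSE;
        vItem = SUBST(vRemaining, 1, vPos - 1);
        vRemaining = SUBST(vRemaining, vPos + 1, LONG(vRemaining) - vPos);
    ENDIF;
    # ... use vItem here ...
END;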

This does not use any TM1 objects, and is appealing in that it’s a native TI solution. It should also be relatively fast, as SCAN and SUBST are very simple functions and well-optimized for performance. It is also very easy to pass values between processes using the technique detailed in my previous article Returning Values from Turbo Integrator.

There are a few drawbacks:

  • There is an upper limit to string length in TI, which you can hit with large lists. For TM1 10, the limit is 65k characters.
  • Storing numeric values involves a conversion from string to number, which can affect performance.
  • Coding such an approach is cumbersome, and you often find yourself writing the same or similar WHILE loops, which clutter up your TI processes.
For small lists, the first two points are not a major issue. However, the third point never goes away, and writing the same code over and over makes your processes hard to maintain and error-prone. Since debugging is very difficult in TI, you don’t want to write more code than is absolutely necessary or you can quickly find yourself chasing your tail!
To mitigate this, we need to use a best practice approach to re-usable code. This can be done in TI by creating a series of well-defined library processes. Combining this with the concept of returning values, we can encapsulate all of our array processing tasks in our library processes, avoiding the need to repeat the code patterns and making your code maintainable once more.

An Array Processing Library

Rather than leave the task of writing library functions as an exercise for the reader, I decided to take a shot at writing one myself.
The Flow array processing library contains a number of useful functions to assist in using delimited lists as arrays:
  • ItemRead: Reads an item from the array at a specified index
  • ItemCount: Returns the number of items in the list
  • ItemAdd: Adds an item to the list at the specified index
  • ItemRemove: Deletes an item from the list at the specified index
  • ItemReplace: Updates the value at a specified index with a new value
  • ItemFind: Locates a value in the list and returns its index, if found
Using these basic functions, one can perform most array functions without worrying about the implementation details. I make no warranties about the quality of the library, but it is, at least, a great starting point for a more robust implementation.
It has the following features:
  • Value quoting: What if a value in the list needs to include the character that is being used as a delimiter? The library handles this through quoting. An example would be: "1,2,|3,4|,5". If you specify the bar character (|) as your quote character, the library will retrieve the third value in the list as "|3,4|".
  • Opening and closing quote characters: The library supports specifying an opening and closing quote character. This allows you to specify the above example as "1,2,[3,4],5", which is much more readable. If the closing quote character is not specified, it is assumed to be the same as the opening quote character.
Desired features:
  • Escaping quote characters: Currently the library does not support escaping the quote characters within the value. This means you cannot use either of your quote characters in a value, or you will get unpredictable results. Ideally, the library would detect quote characters within the list values and escape them automatically, and un-escape them when reading them back out.
  • Error Handling: At present the error detection and handling in the library is rudimentary. If a list is badly formed, it would be difficult to detect and resolve in code.
  • Performance: The current implementation reads the entire list multiple times, which can become dramatically slower in certain usage patterns. Ideally, the library would support ways to optimize algorithms and perhaps a caching option. However, as this library is designed for small lists, the law of diminishing returns may apply to such features.

Using the library

A sample process is included to demonstrate the basic functions of the library.

The following code:

pItemList = 'A,B,C,D,E,F';
pOutputFolder = 'C:\Temp';

NumericGlobalVariable('OutputItemCount');
StringGlobalVariable('OutputItem');
StringGlobalVariable('OutputItemList');
NumericGlobalVariable('OutputItemIndex');
NumericGlobalVariable('OutputItemLocation');
NumericGlobalVariable('OutputSearchItemIndex');

vProcessName = 'Flow.String.List.Sample1';
vOutputFile = pOutputFolder | '\' | vProcessName | '.Output.txt';

ASCIIOUTPUT(vOutputFile, 'Working on list [' | pItemList | ']');

ExecuteProcess('Flow.String.List.ItemCount', 'pItemList', pItemList);
vItemCount = OutputItemCount;
ASCIIOUTPUT(vOutputFile, 'Item count: ' | TRIM(STR(vItemCount, 4, 0)));

ExecuteProcess('Flow.String.List.ItemAdd', 'pItemList', pItemList, 'pItemIndex', 4, 'pNewItem', 'Added');
ASCIIOUTPUT(vOutputFile, 'Item added at index 4: [' | OutputItemList | ']');

ExecuteProcess('Flow.String.List.ItemRemove', 'pItemList', OutputItemList, 'pItemIndex', 2);
ASCIIOUTPUT(vOutputFile, 'Item removed at index 2: [' | OutputItemList | ']');

ExecuteProcess('Flow.String.List.ItemReplace', 'pItemList', OutputItemList, 'pItemIndex', 5, 'pNewItem', 'Replaced');
ASCIIOUTPUT(vOutputFile, 'Item replaced at index 5: [' | OutputItemList | ']');

ASCIIOUTPUT(vOutputFile, 'Finding index of item "F"...');
ExecuteProcess('Flow.String.List.ItemFind', 'pItemList', OutputItemList, 'pSearchItem', 'F');
ASCIIOUTPUT(vOutputFile, 'Index of search item: ' | TRIM(STR(OutputSearchItemIndex, 15, 0)) );

ASCIIOUTPUT(vOutputFile, 'Finding index of item "G"...');
ExecuteProcess('Flow.String.List.ItemFind', 'pItemList', OutputItemList, 'pSearchItem', 'G');
ASCIIOUTPUT(vOutputFile, 'Index of search item: ' | TRIM(STR(OutputSearchItemIndex, 15, 0)) );

ASCIIOUTPUT(vOutputFile, 'Listing all current items...');
vCurrentItemIndex = 1;
vCurrentItem = '';
WHILE(vCurrentItemIndex <= vItemCount);
	ExecuteProcess('Flow.String.List.ItemRead', 'pItemList', OutputItemList, 'pItemIndex', vCurrentItemIndex);
	vCurrentItem = OutputItem;
	ASCIIOUTPUT(vOutputFile, '[' | TRIM(STR(vCurrentItemIndex, 4, 0)) | ']' | ' = ' | vCurrentItem);
	vCurrentItemIndex = vCurrentItemIndex + 1;
END;

ASCIIOUTPUT(vOutputFile, 'Listing all original items...');
vCurrentItemIndex = 1;
vCurrentItem = '';
WHILE(vCurrentItemIndex <= vItemCount);
	ExecuteProcess('Flow.String.List.ItemRead', 'pItemList', pItemList, 'pItemIndex', vCurrentItemIndex);
	vCurrentItem = OutputItem;
	ASCIIOUTPUT(vOutputFile, '[' | TRIM(STR(vCurrentItemIndex, 4, 0)) | ']' | ' = ' | vCurrentItem);
	vCurrentItemIndex = vCurrentItemIndex + 1;
END;

Yields the following output file:

Working on list [A,B,C,D,E,F]
Item count: 6
Item added at index 4: [A,B,C,Added,D,E,F]
Item removed at index 2: [A,C,Added,D,E,F]
Item replaced at index 5: [A,C,Added,D,Replaced,F]
Finding index of item "F"...
Index of search item: 6
Finding index of item "G"...
Index of search item: 0
Listing all current items...
[1] = A
[2] = C
[3] = Added
[4] = D
[5] = Replaced
[6] = F
Listing all original items...
[1] = A
[2] = B
[3] = C
[4] = D
[5] = E
[6] = F

As you can see, the sample code is much cleaner and more maintainable than more ad-hoc implementations of the same technique.

Conclusion

There is no perfect solution to simulating array variables in Turbo Integrator, but there are some work-arounds that can work in various scenarios.

Hopefully the library I have provided will help other developers get started using the character-delimited string technique, and will at least serve as an example of reusability and maintainability in Turbo Integrator processes.

Flow.String.List.zip (8.03 kb)

.NET C# TM1API starter kit.

Introduction

One of the most powerful (and possibly overlooked) features of TM1 is that it supports a fully-featured programming API (Application Programming Interface).

This means that virtually any function supported by the server is exposed to external programming environments. Or, put more simply: you can write your own TM1 application, and your imagination is the limit!

Working with the TM1 API in Visual Basic is pretty easy, as the functions are available anytime the TM1 add-in is loaded. This is why a lot of developers gravitate toward working this way, as it is familiar and quick to get started.

Unfortunately, it also comes with the limitation of being hosted in the Excel environment, and cannot produce a stand-alone Windows or Web application.

Recent versions of TM1 include a .NET API that you can use in your applications, but this API does not support all the functions of the native COM API. It was initially developed to support the requirements of TM1Web and supports mainly read-only operations.

Microsoft’s .NET framework software development platform has grown over the past decade to become one of the most comprehensive framework libraries out there. It’s free to use, supported by Microsoft, and is a great choice for quickly creating native Windows applications.

But how can we get the best of both worlds?

Note: this article assumes a basic level of C# programming capability and access to a 2010 version of Microsoft Visual Studio.

The TM1 COM API in .NET

It’s well-known that you can call COM dynamic link libraries from .NET. However, the protocol for doing so can be quite difficult to implement, especially when the DLL in question is not documented in an easy-to-translate way.

In .NET you need to know the underlying names and entry points of every single function call, and then map the COM data types to equivalent .NET data types.

You can analyze the tm1api.dll file using a tool like Dependency Walker, which gives you information about each function contained in the DLL. By default, the parameter and return types are not listed in any useful form, but if you right-click in the window and select “Undecorate C++ Functions”, you’ll see that the output becomes a whole lot more interesting.

Here is an example output from Dependency Walker:

As you can see, all the functions are listed out with their associated function names and signatures. However, there are a lot of functions, so typing each one out would take a long time.

TM1 includes a full list of function signatures, but only as a BAS file or a C header file. Those don’t help us with C#, but it turns out it’s pretty easy to convert the BAS file to C#: I used an online VB.NET-to-C# converter, changed the longs to ints, and switched any function with a _VB suffix to the C version. Since I’ve already gone through the process, you can use the one included in the sample package.
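
The converted declarations end up looking roughly like the sketch below. Treat it as illustrative only; the handle-as-int mapping matches the sample code shown later in this article, but the authoritative signatures are the ones in the included class.

// Illustrative sketch; the Flow.TM1.API class in the sample package declares the full set.
// COM "long" parameters and handles are mapped to .NET "int" here.
using System.Runtime.InteropServices;

public static class TM1APISketch
{
    [DllImport("tm1api.dll")]
    public static extern int TM1SystemOpen();

    [DllImport("tm1api.dll")]
    public static extern int TM1ValPoolCreate(int hUser);
}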

Disclaimer: I make no warranty or guarantee that the included code is complete, correct or bug free. Use it at your own risk.

Loading the TM1API DLL in .NET

Once you have the function signatures listed in a .NET class, you’re about halfway there. You still need to make your program load the TM1API DLL at runtime.

When I first started with the TM1 API, I used P/Invoke with the Windows “LoadLibrary” function to load the TM1 DLLs. This was painful, as I had to load each DLL individually, including every library the TM1API was dependent on. As TM1 versions progressed, this list grew, and backward compatibility with previous versions of TM1 became more difficult, as the list of required DLLs was different for each.

When I started working on the Flow Model Packager, I revised this approach and found a better way. Now I simply P/Invoke the Windows “SetDllDirectory” function, which adds the TM1API folder to the DLL search path. This way, the program locates all dependent DLLs automatically, irrespective of how many dependent libraries there are.
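
For reference, here is a minimal sketch of that call, using the standard kernel32 declaration. The TM1 path shown is an example only.

// Add the TM1 bin folder to the DLL search path before making any TM1 API call.
using System.Runtime.InteropServices;

public static class NativeMethods
{
    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    public static extern bool SetDllDirectory(string lpPathName);
}

// Example usage (path is illustrative):
// NativeMethods.SetDllDirectory(@"C:\Program Files\Cognos\TM1\bin");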

Using the sample code

Loading the DLL and defining the API function calls is all taken care of in the Flow.TM1.API class provided in the sample code. To use it, take the following steps.

1) Connect to TM1:

TM1API.TM1DLLPath = "[tm1 api path]";
TM1API.TM1APIInitialize();

// Initialize the user session and create a value pool
int hUser = TM1API.TM1SystemOpen();
int hPool = TM1API.TM1ValPoolCreate(hUser);

// Set the admin host to the value in the text box
TM1API.TM1SystemAdminHostSet(hUser, "[admin host]");

2) Log in to a given server:

int hServer = 
    TM1API.TM1SystemServerConnect(
        hPool, 
        TM1API.TM1ValString(hPool, "[server name]", 0), 
        TM1API.TM1ValString(hPool, "[username]", 0), 
        TM1API.TM1ValString(hPool, "[password]", 0)
    );

3) Once you have finished all your TM1 operations, you need to close the session:

TM1API.TM1SystemServerDisconnect(hPool, hServer);
TM1API.TM1ValPoolDestroy(hPool);
TM1API.TM1SystemClose(hUser);
TM1API.TM1APIFinalize();

You can look through the sample project to get a very simple example of a C# application working with the TM1 API.

Make sure you update the app.config file to include the path to your own installation of TM1.

 

Conclusion

The main barrier to entry to programming with the TM1 API is the initial set up of the API function calls, and I think this has been a limiting factor in the proliferation of custom TM1 applications.

Hopefully this sample code will get some creative developers pointed in the right direction so we can see some innovative new TM1 applications out in the wild!

You can download the sample here. If you have any issues or questions, don’t hesitate to drop me a line at team[at]flowolap.com, or make a post in the comments section below.

Back to the Future, Part II

Introduction

Most financial models are based on the fiscal year, rather than aligning with the calendar year.

In many countries these two calendars align, but in some they do not. In Australia, for instance (my home country), we have a Jul-Jun financial year, and both New Zealand and Japan have their own, too.

Propel Planning, Flow’s budgeting and planning application, has always supported changing the starting month of the fiscal year, but we made some recent improvements that led to a better date lookup cube.

I thought I’d spend some time discussing it and provide a download link for those that are interested.

What was wrong with the old one?

Admittedly, there was not much wrong with the original date lookup cube, and it could still be used for most purposes.

However, there was often a need to map from Fiscal Year to Calendar Year and back again to do simple lookups, and we decided it was worth finding a more efficient way to do this since it’s so core to the Propel Planning system.

To be honest, the former date lookup cube was something I built years ago, for use in consulting engagements where the fiscal year period would be known well in advance of starting the job. Since I started working abroad, I don’t think I’ve had a single implementation where the fiscal year was not aligned with the calendar year, so I never really had to consider it.

However, Propel Planning is a dynamic, generic budgeting and planning system that can be sold world-wide, so it was time to face the music!

The Approach

A fresh look

I took a different approach this time, in several ways.

Instead of creating all the data in Excel and importing it into a TM1 cube, I created all the data in TM1 rules. The logic behind this was that we can easily export the rule-based calculations, delete the rules, then import the data as static numbers, so this technique just gives us an option to keep a rule-based lookup if we ever want to.

In the previous version of Propel, changing the start month was quite a significant effort. We had a Turbo Integrator function to do all the hard work, of course, but it had quite a lot to do – updating attributes, shifting data around in the date lookup, reordering elements. We even had a hidden cube that would provide data to the final date lookup when the TI process was run.

We realized this time round that designing a smarter, more flexible date lookup cube, along with generic Month and Year Period dimensions, would make this configuration much simpler, and render most of the tomfoolery in the TI process unnecessary.

Here is the new data structure:

Generic Month & Year Period Elements

The most important part of designing this cube was avoiding the need to rename elements in the Month and Year Period dimensions. Instead of using "Jan", "Feb", "Mar" for the Month element names, we used "M01", "M02", "M03", etc. Likewise, the Year Period dimension contains the elements "Y01", "Y02", "Y03", etc.

This gives us freedom to interpret each month element as a particular named month, based on the Fiscal Year. So, in Australia, we would simply set the "Display Name" attribute on "M01" to "Jul", and similarly for subsequent months.

Now we can always assume the element name "M01" is the first month in the fiscal year.

But it’s not quite that easy. We still need to make sure our date lookup cube calculates the correct calendar month for each Fiscal month.

Fiscal Month Offset

In order to do this, we use a simple attribute in the Month dimension called “Fiscal Month Offset”. This value is referenced by the date lookup cube when calculating the calendar month. The numeric value represents the number of months to “shift” the calendar month from the fiscal month.

For instance, if our fiscal year starts in July, we need to put a 6 in this value. The date lookup rules will then equate "M01" to "M07", which is calendar month Jul.

This way, we can still look up calendar dates if need be.
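
As an illustration of how little configuration this needs, a TI snippet for a July fiscal year start might look like the sketch below. The dimension and attribute names follow the article, but the snippet itself is hypothetical, including the assumption that the offset is stored against each Month element.

# Hypothetical configuration sketch: set display names and the fiscal month offset
# for a Jul-Jun fiscal year. Assumes a 'Month' dimension with elements M01..M12,
# a string attribute 'Display Name' and a numeric attribute 'Fiscal Month Offset'.
vMonthNames = 'JanFebMarAprMayJunJulAugSepOctNovDec';
vOffset = 6;
vIndex = 1;
WHILE(vIndex <= 12);
    vSuffix = TRIM(STR(vIndex, 2, 0));
    IF(vIndex < 10);
        vSuffix = '0' | vSuffix;
    ENDIF;
    vElement = 'M' | vSuffix;

    # Calendar month that this fiscal month represents
    vCalendarMonth = vIndex + vOffset;
    IF(vCalendarMonth > 12);
        vCalendarMonth = vCalendarMonth - 12;
    ENDIF;

    AttrPutS(SUBST(vMonthNames, (vCalendarMonth - 1) * 3 + 1, 3), 'Month', vElement, 'Display Name');
    AttrPutN(vOffset, 'Month', vElement, 'Fiscal Month Offset');
    vIndex = vIndex + 1;
END;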

Calendar vs Fiscal Date Measures

Hence, the date lookup cube supports two sets of measures, one for lookups that occur within the fiscal year, and another for lookups to calendar years. The simplest lookup is now to the fiscal year, with no requirement to translate to calendar month and back again to do a simple lookup.

Other Improvements

To reduce the number of measures, we also removed the “Backward” and “Forward” measures. This is now handled by the “Offset Month” and “Offset Year” dimensions, which include negative numbers.

This is more expressive, and simplifies rules that rely on numeric calculations to determine the number of months to search forward or backward in time.

Example Usage

 

To look up 3 months in the past from a fiscal-based cube:

# Lookup the year
DB('SYS Fiscal Date Lookup',
    !GBL Year, !GBL Year Period, !GBL Month,
    '0', '-3',
    'Target Fiscal Year');

# Lookup the month
DB('SYS Fiscal Date Lookup',
    !GBL Year, !GBL Year Period, !GBL Month,
    '0', '-3',
    'Target Fiscal Month');

Conclusion

With the new cube structure, it is much easier to look up months and years in the past and future in fiscal-based cubes. Concordantly, it is no more difficult to use the cube in a calendar-based design.

Now where we’re going, we… er… don’t need roads… Sorry, I know it doesn’t really fit, I just wanted to say it.

Returning values from Turbo Integrator processes

Introduction

The two cornerstone features of any generic programming platform are encapsulation and reuse. These related ideas are simply the ability to write a piece of code once and then use it many times.

Being able to call processes within other processes is a useful technique for performing a repeated task, but what if you would like the two processes to interact and communicate?

“Hey”, I hear you say, “TI has that covered, doesn’t it?” Well, sort of!
Yes, the ability to call a process from within another can allow processes to act like functions or procedures, and yes, they do accept parameters, so the calling process can modify the behavior of the child process.
But what about return values? What if you want to build a utility function that performs some processing, and returns a value to the calling process?
This post will discuss various ways to tackle this very problem.

Techniques

Global temporary cube

Since cubes are globally available to all TI processes, a cube is a natural first choice for storing global variables that can act as return values.

You would probably take the following steps:

  • The calling process would call an initialization process, which would receive a parameter telling it what particular “global variable” measure to create.
  • The calling process would then call the child process, passing a parameter containing the name of the “global variable” measure.
  • The child process would do its processing, and write its result to the parameter cube using the appropriate CellPut function.
  • The calling process would read this value and continue.
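
As a rough sketch of the pattern (every cube, element, and process name here is invented for illustration):

# Calling process: assumes an initialization step has created the element
# 'ReturnValue_Slot1' in the slot dimension of a control cube 'Process Globals',
# whose second dimension holds a single string measure 'Value'.
vSlot = 'ReturnValue_Slot1';
ExecuteProcess('GetReply', 'pStatement', 'Hello', 'pSlotName', vSlot);
vReply = CellGetS('Process Globals', vSlot, 'Value');

# Inside the hypothetical child process 'GetReply':
# IF(pStatement @= 'Hello');
#     CellPutS('Goodbye', 'Process Globals', pSlotName, 'Value');
# ENDIF;
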
Immediately, you can probably see issues with this approach. For one, it’s pretty messy: it creates annoying control cubes and elements with temporary names. Sure, you can hide these with security, but it still clutters up the view for administrators.
Secondly, it’s quite inefficient. You could potentially have predetermined elements for your return values, but then you have to worry about concurrent processing and if a user mistakenly interferes with values in the cube using Perspectives or Excel. The speed of writing to and reading from a cube is probably not optimal for such a core requirement.

User-defined Global Variables

TI supports the idea of user-defined global variables. These are the same as local variables, but are available to all the processes called within the call stack of the parent process or chore group. This is quite useful for the purposes of return values, and avoids the mess of using cubes.

Here’s an example:

StringGlobalVariable('sReturnValue');
ExecuteProcess('GetReply', 'pStatement', 'Hello');
sReply = sReturnValue;

In the child process, you could do the following:

StringGlobalVariable('sReturnValue');

IF(pStatement @= 'Hello');
    sReturnValue = 'Goodbye';
ENDIF;

ProcessBreak;

The end result of this is that your variable sReturnValue, and hence sReply, would contain the string 'Goodbye'.

The limitation of this approach is mostly dictated by the maximum length of string values in Turbo Integrator. In the current version, TM1 10.1, this is 65536 characters, which is ample for many applications. Just be wary of using this technique in earlier versions of TM1, because prior to 9.4 and the Unicode upgrade, the maximum length of a string was 256 characters.

Update: The upper limit of a string passed to ASCIIOUTPUT is 64K. However, a string variable appears to be able to have a larger size, depending on the amount of available RAM. On my laptop with 4GB RAM installed, I was able to create a string variable of around 512K. Anything larger and my entire system would crash!

You’re also well-advised to grab the value of the global variable immediately after the ExecuteProcess call and assign it to a local variable. This is simply so you don’t get confused when making multiple calls to other processes that may use the same global variable name.

Conclusion

TI libraries like Bedrock are great for processes that perform stand-alone operations, but do not support the idea of reusable processing functions that return a value to their caller.

These simple techniques can be extremely powerful for creating generic, reusable processes. Start using them in your development, and you’ll begin to see what I mean!

Happy modelling!

Flow’s Date Lookup cube – Back to the Future!

Introduction

Firstly, sorry there hasn’t been a blog post for a while. Sometimes work commitments don’t play nicely with good intentions.

I thought I’d spend a bit of time talking about the advantages of using the SYS Date Lookup cube bundled with the Flow Server Manager. This cube has been invaluable to me during past implementations and is worth considering as a starting point for any project.

Time Travel for TM1 Rules?

Sometimes your rules and TI processes need to time travel. There are many scenarios where this is necessary, but I’m sure if you’ve got a few projects under your belt, you won’t need me to provide an example.

TM1 provides some basic calendar and time functions to deal with calculations that need to factor in the fourth dimension of our physical reality. However, they fall well short of those offered in Excel and other programming environments, and often you just can’t find an efficient way to do what you want.

How many times have you used the expression:

SUBST(
    'JanFebMarAprMayJunJulAugSepOctNovDec',
    (
        MONTH(
            DATE(['Date'] - 21916, 1)
        ) - 1
    ) * 3 + 1,
    3
);

?

Never? Oh, ok, maybe it’s just me! Anyhow, experience and logic will show you that it’s almost always better to look up such a value in a cube or attribute than it is to write a convoluted date/string combo like this.

The SYS Date Lookup cube won’t save you in the above example (you’d need a dimension with month numbers and a month name attribute for that), but is instead designed specifically to help with looking backward and forward in time.

Developers need to do this very often when creating rules that accumulate values over time, such as depreciation calculations. Although you can use rules that look backward one index in the dimension, this approach is tricky when you need to look to a previous year (especially if Year is a separate dimension), and falls apart completely if someone changes the element order in the period dimension.
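
The fragile index-based pattern looks something like this sketch (the cube, dimensions and measures are invented for illustration):

# Accumulate by looking back one position in the 'Month' dimension.
# This breaks on the first element, does nothing about crossing into the previous
# year, and falls apart if someone reorders the dimension.
['Accumulated Depreciation'] = N:
    ['Depreciation'] + DB('Fixed Assets', !Year, DIMNM('Month', DIMIX('Month', !Month) - 1), 'Accumulated Depreciation');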

The Flow “SYS Date Lookup” cube

A seasoned developer will spend a fair bit of time considering their period dimension structure at the beginning of a project. Along with this, I think most of us probably also consider a date lookup cube, even if we don’t design it explicitly.

When starting on a project a few years back, I decided to create the ultimate date lookup cube, one which I could re-use for new projects. What I came up with formed the basis of the cube you find in the Flow Server Manager.

The first thing to note is that the cube is calendar based. If you use a fiscal year dimension, such as "Plan Year 1", "Plan Year 2", like we do in Propel Planning, you need to convert that to the correct calendar year.

Once you have a calendar year and month, you can use the SYS Date Lookup cube to look forward or backward any number of months up to 20 years from the current date.

Overkill? Probably, but at least you’ll never have to build one from scratch again!

How to get it

Simply fire up the Flow Server Manager (or if you don’t have it, get it here).

  1. Click the “Create New Data Folder” button.
  2. A dialog will appear asking you what options you would like to enable for the new TM1 data folder. Make sure you have the “Date Lookup” checkbox checked.
  3. Type a server name and select the folder you would like to create the data folder in, then click the “Create” button.
This will create a new TM1 data folder with the SYS Date Lookup cube already included. If you want to include it in an existing model, you can take the files from this data folder and copy them manually into the target data folder.

How to use it

To look up a date using the SYS Date Lookup cube, you need to provide 5 parameters.

  1. Source Year
  2. Source Month
  3. Number of Years
  4. Number of Months
  5. Measure (forward or backward year or month)
For example, if you wanted to look 12 years and six months into the past, you’d use the following formulae:
['Year in the Past'] = N: DB('SYS Date Lookup', !Year, !Month, '12', '6', 'Backward Year');
['Month in the Past'] = N: DB('SYS Date Lookup', !Year, !Month, '12', '6', 'Backward Month');
This may look a little complex at first, but it is far simpler and more efficient than trying to write a rule to calculate it on the fly.

Conclusion

Using a date lookup cube is often necessary for efficient rule and TI development.

It’s definitely worth thinking about this aspect early in the project, and if you have matching year and month dimensions, why not give yourself a little shortcut by using the Flow Date Lookup cube?

It’ll get you right into the future and back again safely.

TM1 Development Methodologies & Tools

Introduction

The classical way of developing a TM1 Server seems to be to go in person to a customer site and develop on their infrastructure.

There are often many good reasons for this, mainly involving security and availability, but I have not worked this way for a long time.

I thought I’d post about my personal development methodologies and tools, and how they might help other developers in their work.

Local and Off-site Development

It’s first worth mentioning that I come from a software development background and have worked on various projects with different software development life-cycles, including the structured Microsoft Consulting approach, “agile” development, and many variants in between.

This heavily affects my view of TM1 development, because I sometimes see practices that terrify me as a disciplined programmer.

I’ve seen teams of TM1 consultants developing individual models on their personal laptops, then trying to merge them together and integrate them in a completely unstructured way.

After seeing this, I certainly understand the appeal of the centralized dev server model.

However, I prefer a localized and often off-site development model for various reasons. It allows me to work on many projects simultaneously, stops frequent travel from interfering with productivity, can keep me free of untimely distractions, and just generally suits my way of working.

I won’t attempt to sell the approach here, as my focus is on tools and methods you can use if you happen to share my view.

My Process

Overview

Usually, the first thing I do when I start the development phase of a project is to find out whether it’s possible to work at least partially off-line.

If I find there’s a problem with sensitive data, I’ll just invent some test data in the initial development stages.

This also helps by providing independent, repeatable test cases that can be utilized for unit testing and possibly later for UAT.

I then work on a local TM1 instance on my laptop, whether on-site or not, which I have set up with all my standard tools and applications.

If data ever gets too big for the RAM on my machine, I’ll just scale back the data volumes.

Merging Changes

When the time comes, I’ll update the development server on the client site. I do this by performing an automated file compare and deciding which objects are new and should be included, and which objects should be left alone.

It’s easy to make this decision, as the file compare tool shows you exactly what has changed and which version is more recent. Most even allow you to do a visual comparison of text files, which is very handy for RUX and PRO files.

The tool I use for this is an open source one called “WinMerge”. You can check it out here.

Working in a Team

So, you might be thinking, this is all well and good if you’re working alone or on a very small team, but what about large collaborative projects?

Certainly, larger teams provide a challenge, but nothing that can’t be overcome.

Often in this case, it’s useful to “stub” the design. This involves creating all the cubes and dimensions first, and leaving out the attributes, rules, and TI processes. That way each team member knows what other cubes will be called and won’t make the mistake of creating any objects with the same name.

Naming conventions often come into play here, too. I will often turn up to a project and make it my first order of business to distribute a standard naming convention document I have been using for years. You might prefer a different convention, or the customer might mandate one, but the important part is everyone understands it and sticks to it.

I’ve attached a sample document I used on a project years ago.

Revision Control

One concept I find useful for projects with larger teams is revision control.

This is a discipline well known to computer programmers that not only allows your team to keep a centralized copy of development files (in this case TM1 server files from your data folder), but keeps track of all previous versions, who changed them, and why.

The Basics

The idea is to keep a central repository of all files in the project and allow developers to “check out” the files to their local computer and work locally.

Once they have made changes, they can choose to “commit” their files to the repository, or to “revert” the changes they have made. If other developers are also making changes, each developer performs an “update” to ensure they have the latest committed files.

It has other benefits, such as allowing the team to “tag” releases, so they can always get back to a particular version of the project, and for an individual developer to “branch” development (split it), for experimentation or creating an alternate version.

There are many other features and benefits to using revision control, which you can find by doing a Google search on the topic.

If a revision control system detects that two users have changed the same file, it does not allow one user to overwrite another’s work. It gives a “conflict” notification and allows the users to deal with that conflict.

For text files, you can often “merge” the changes made to both files, or, since revision control tells you who made the conflicting change, you can simply get in contact with the other developer and negotiate which file is correct.

But for TM1?

It may seem counter-intuitive to use a revision control system for TM1, as many of the files are not text-based, but I have found it very useful. Sure, you lose the merge functionality for non-text files, but you can still often perform a Winmerge “diff” and get enough information to work out what has changed and how to resolve it.

When dealing with TM1, you can exclude the TM1 control files from the revision control repository, as most have an “ignore” feature. This is important, because TM1 updates these files each time the server is started up, so they will always register as modified.

The main drawback I have found is getting team members to adopt it, as it does require some process and training to be used effectively.

Software Options

The tool we use for Flow development is Subversion, with the VisualSVN Server front-end. Subversion is an open-source version control system that supports all the features described above, and VisualSVN Server provides a nice MMC snap-in interface for configuring the server and permissions.

There are also various front-ends for Subversion. The one we use is TortoiseSVN, which integrates with the Windows shell and provides icon overlays to indicate whether a file is up-to-date or has modifications.

Conclusion

Using some of these techniques, processes and tools provides a more flexible project environment by making it easier for developers to work on local copies of their models and providing a framework to merge the changes back to the central server.

Of course, many of the free tools in the Flow Toolbox Suite have the same goal and can automate some of these concepts even further, to make TM1 development even easier!

If you haven’t checked out the tools yet, you can get them here.

I hope it has been an interesting and useful discussion. If you have any questions, feel free to comment below.

TM1 Naming Convention Sample.pdf (105.55 kb)

An Item-based Approach – Quick Update

Working on the Flow Model Packager, I often have to make random changes to a model so I can test the model compare and deploy functionality.

Yesterday, I was using the item-based approach sample server as my test case, and decided to try to implement a more real-world change than I usually do (most of the time, I just add and delete things randomly for testing purposes).

This led to the idea of creating a new reporting cube that would ignore Category and Subcategory, but organize payments by “Payment Range”.

Using the item-based approach proved very flexible in this case, as I was able to add a quick rule and feeder to the input cube and create an entirely new cube which would report on information that wasn’t previously captured.

It also brought up some issues regarding re-triggering numeric feeders in TM1 10, which I will cover in an upcoming article.

For now, I thought I’d share the results for comparison — the model is attached. Happy modelling!

StringFeederSampleRange.zip (2.92 mb)

An Item-based Approach to TM1 Model Design

Introduction

James and I have been working together for some time now, and we have evolved toward what I would class as a non-typical design approach in TM1. For the sake of simplicity, I refer to this as an “item-based approach”.

It is very prominent in our implementation of Propel Planning, and underpins a lot of the power of that particular product.

I thought it worthy of some discussion, as it has many advantages, and a few gotchas, that are worth considering in any implementation you might be involved with.

The Approach

The item-based design approach has the goal of allowing data input in a flat, tabular format without giving up the browsability and analytic capability of highly dimensional cubes. It also separates input from reporting in a very elegant fashion.

You begin with an input cube, which should have only the basic dimensions, plus a measures dimension to represent the columns of the input table, and an item dimension to represent an arbitrary number of rows.

The measures dimension will include many string elements which map to other dimension elements. Thanks to the picklist feature in TM1 9.5+, these lists can even be restricted to ensure invalid entry does not occur.

A separate reporting cube is then created that maps the string elements to actual dimensions, for reporting and analysis, usually via rules. This cube has no data entry and populates itself entirely from data in the input cube. You could also use TI to populate such a cube without too much trouble, for implementations that have higher data volumes.

I call it item-based because this approach naturally requires an item dimension, with arbitrary element names. Most of the time we just call the elements "Item 1", "Item 2", etc, up to a maximum amount. Because this maximum is imposed, it is important to keep the efficiency of the model from being affected by the number of elements in the item dimension. More about that below.
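
As a rough sketch of the mapping rule (the cube, dimension and measure names below are invented for illustration, not taken from the attached sample):

# Rule in a reporting cube 'Sales Report' (dimensions: Region, Product, Item, Sales Measure).
# The input cube 'Sales Input' (dimensions: Item, Input Measure) holds 'Region' and
# 'Product' as string measures alongside the numeric 'Amount'.
['Amount'] = N:
    IF( DB('Sales Input', !Item, 'Region')  @= !Region
      & DB('Sales Input', !Item, 'Product') @= !Product,
        DB('Sales Input', !Item, 'Amount'),
        0);

# The matching feeder lives in the input cube's rules and feeds the reporting cell
# identified by the string values - the "string feeders" the attached sample is named after.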

Advantages

There are many advantages to such an approach.

Data-entry simplicity

New users of TM1 are often uninitiated, and, dare I say it, sometimes under-prepared by standard TM1 training courses, to understand the full advantages of a multi-dimensional database model. It doesn’t matter what you do, some users have spent way too much time in Excel or Access and simply think in terms of tables.

And why should they bother? Many of these users are simply data contributors, and do not have any interest in performing extensive analysis on their data.

The flat input approach allows such users to contribute their data in a way that makes sense to them.

It also allows them to adjust manual inputs and correct errors without cutting the data from one intersection and pasting it in another, an operation which can be error prone and, let’s face it, slightly buggy in the TM1 Perspectives cube viewer, and difficult in the TM1 Contributor front-end.

Maintainability & Agility

TM1 implementations are naturally agile and flexible. Developers with an IT background like myself might fight against this, and try to impose strict, inflexible business requirements and a rigid change request process to protect against scope creep, but that really undermines one of TM1’s key advantages in the marketplace: agility.

Imagine a retail sales model, which has Region, Sales Rep and Distributor as data points of interest. Sales reps and other users contribute data from the field using their laptops.

In a typical TM1 design, you’d create a cube with Region, Distributor and Product as dimensions. The input form would ask the user to select elements from each of those 3 dimensions and would write the sales/inventory data to the intersection of the elements chosen.

All is good, and the managers and finance staff can browse the cube and get the insight they need.

However, imagine, after months of data has been collected, someone in head office decides they would like to also track the data by Customer Type. The data already exists in the point of sale system, as each customer is tracked by credit cards and loyalty cards they use when making a purchase.

With your typical design, you don’t have a lot of choice but to redesign from scratch and create a new cube with the additional dimension. You might choose to keep the existing cube for backward compatibility, in which case you’d have two sources of the same data, which could lead to synchronization issues since the original data is manually contributed by Sales Reps in the field.

It’s your basic nightmare, and if you were halfway through the implementation, you’d probably tell your customer that it’s a change in scope and that it would have to be left to phase 2.

With an item-based approach, you don’t have these issues. You can take the new data from the POS systems, import the Customer Type field via TI (while creating the new Customer Type dimension on the fly), then update your reporting cube and rules.

Yes, you still have to do some basic redesign, but there is no requirement for a complex and error-prone data migration.
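
To make the TI side concrete, here is a hedged sketch of that step. The dimension name, variable name, and tab placement are assumptions, not code from the attached model.

# Prolog tab: make sure the new dimension exists before the source data is read.
IF(DimensionExists('Customer Type') = 0);
    DimensionCreate('Customer Type');
ENDIF;

# Metadata tab: add each Customer Type value from the POS extract as it is read.
# vCustomerType is assumed to be the variable mapped to the source column.
IF(DIMIX('Customer Type', vCustomerType) = 0);
    DimensionElementInsert('Customer Type', '', vCustomerType, 'N');
ENDIF;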

Contributor & Insight-friendly

TM1 Contributor (or “Applications” as it’s now known) and Cognos Insight, are great front end tools for data contribution. They are a little weak, however, when it comes to customizing views to be friendly for the end-user. A highly dimensional input cube forces the view designer to choose between unworkably large grids or many laborious title element selectors which make cutting and pasting data difficult.

A flat, item-based input cube is much simpler to work with, supports multi-level cut and paste, and presents itself in a more logical fashion for quick data input. String values can be typed in as well as selected from a list, then copied down as necessary.

Downsides and Gotchas

Performance

If you’re not careful, this design approach can tempt you into inefficient rules and over-feeding. Performance can suffer with large data volumes.

However, with better design and a clean rule-based approach this can be avoided. Over-feeding is not necessary and rules can be structured logically and efficiently.

As always, TM1 has its quirks, but once you understand the gotchas associated with this design approach, they are easy to avoid or work around.

I’m planning several follow-up articles that will go through these issues in detail, and how to make sure they don’t have you pulling your hair out.

Complexity

Rules in this design approach can appear more complex and be harder for another developer to understand. I have first-hand experience of handing over such a design to very capable developers and having them screw up their noses and replace my cubes with a more standard TM1 design.

I believe this is partially a cultural issue, as TM1 is taught in a particular way, and that has become accepted as “correct”. Once a developer adjusts to this kind of thinking, it’s actually very difficult to go back!

Obviously well-formatted rules and code comments can go a long way to alleviating this issue also.

Limitations

There is a natural limitation imposed by the item-based approach: the number of elements in the item dimension forms the maximum number of “slots” for data input.

To avoid the situation where a user does not have enough “slots” to input their data, a developer might be tempted to include a large number of elements in their item dimension, and, if the rules and feeders are designed poorly, this could lead to poor performance.

However, a well designed cube won’t need a lot of input slots, as the average person is not able to navigate, or even usefully perceive, thousands of rows of data!

In our retail sales example above, there may be many sales items entered, and at first glance it may appear to require a form with thousands of visible items shown. With a bit of thought, it is usually possible to group input tasks meaningfully so that only a useful number of items need to be shown for the current input task. For instance, the Sales Rep could enter only the data for the particular store they are visiting, as they most likely wouldn’t be entering data for several stores simultaneously.

And, either way, a more dimensional approach does not mitigate this problem either!

Conclusion

With a bit of planning and thought, an item-based approach to TM1 development offers many advantages and rewards.

My follow up articles will be based on a simplified example, which is attached to this post for you to download and examine. The example is built and tested in TM1 v10, with the ForceReevaluationOfFeedersForFedCellsOnDataChange set to “T”.

Feel free to leave comment, criticisms, or unfettered praise in the comments section below!

And yes, that is the name of the property!

StringFeederSample.zip (2.98 mb)

Welcome

Welcome to the Flow Blog!

We’re just getting started here, but we’re very excited to be launching and getting our ideas out to the world.

James and I first had the idea for Flow around 2 years ago, and have been working solidly since then to make the idea a reality. We hope you find value in what we have produced, but sit tight, there is a lot more in the pipeline!

If you have the Flow Server Manager installed, it should automatically notify you of any new blog posts here via RSS, or if you have your own RSS reader, you can simply use that to subscribe to the blog.

We’ll be putting up software update announcements, stories, articles and tutorials.

If you’d like to suggest some topics for articles, feel free to post in the comments below.

Happy modelling!