TM1 Development Methodologies & Tools

Introduction

The classical way of developing a TM1 Server seems to be to go in person to a customer site and develop on their infrastructure.

There are often many good reasons for this, mainly involving security and availability, but I have not worked this way for a long time.

I thought I’d post about my personal development methodologies and tools, and how they might help other developers in their work.

Local and Off-site Development

It’s first worth mentioning that I come from a software development background and have worked on various projects with different software development life-cycles, including the structured Microsoft Consulting approach, “agile” development, and many variants in between.

This heavily affects my view of TM1 development, because I sometimes see practices that terrify me as a disciplined programmer.

I’ve seen teams of TM1 consultants developing individual models on their personal laptops, then trying to merge and integrate them in a completely unstructured way.

After seeing this, I certainly understand the appeal of the centralized dev server model.

However, I prefer a localized and often off-site development model for various reasons. It allows me to work on many projects simultaneously, stops frequent travel from interfering with productivity, can keep me free of untimely distractions, and just generally suits my way of working.

I won’t attempt to sell the approach here, as my focus is on tools and methods you can use if you happen to share my view.

My Process

Overview

Usually, the first thing I do when I start the development phase of a project is to find out whether it’s possible to work at least partially off-line.

If I find there’s a problem with sensitive data, I’ll just invent some test data in the initial development stages.

This also helps by providing independent, repeatable test cases that can be utilized for unit testing and possibly later for UAT.
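For simple models, a small TurboIntegrator process can generate that data. Here’s a minimal sketch, assuming a hypothetical “Sales Test” cube dimensioned by Region, Product and Sales Measures (all names are invented for illustration):

    # Prolog tab: write deterministic values to every leaf intersection,
    # so every run reproduces exactly the same test case.
    iRegion = 1;
    WHILE(iRegion <= DIMSIZ('Region'));
      sRegion = DIMNM('Region', iRegion);
      IF(ELLEV('Region', sRegion) = 0);
        iProduct = 1;
        WHILE(iProduct <= DIMSIZ('Product'));
          sProduct = DIMNM('Product', iProduct);
          IF(ELLEV('Product', sProduct) = 0);
            # Derive the value from the element indexes rather than RAND(),
            # so the data is repeatable between runs.
            CellPutN(iRegion * 1000 + iProduct, 'Sales Test', sRegion, sProduct, 'Amount');
          ENDIF;
          iProduct = iProduct + 1;
        END;
      ENDIF;
      iRegion = iRegion + 1;
    END;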

I then work on a local TM1 instance on my laptop, whether on-site or not, which I have set up with all my standard tools and applications.

If data ever gets too big for the RAM on my machine, I’ll just scale back the data volumes.

Merging Changes

When the time comes, I’ll update the development server on the client site. I do this by performing an automated file compare and deciding which objects are new and should be included, and which objects should be left alone.

It’s easy to make this decision, as the file compare tool shows you exactly what has changed and which version is more recent. Most even allow you to do a visual comparison of text files, which is very handy for RUX and PRO files.

The tool I use for this is WinMerge, an open-source file comparison utility; you can check it out at winmerge.org.

Working in a Team

So, you might be thinking, this is all well and good if you’re working alone or on a very small team, but what about large collaborative projects?

Certainly, larger teams provide a challenge, but nothing that can’t be overcome.

Often in this case, it’s useful to “stub” the design. This involves creating all the cubes and dimensions first, leaving out the attributes, rules, and TI processes. That way each team member knows what the other cubes will be called and won’t make the mistake of creating objects with clashing names.
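To picture it, here’s a minimal TurboIntegrator sketch of such a stub, using entirely hypothetical object names; the point is simply that the skeleton is created once, up front, under the agreed names:

    # Prolog tab: create the agreed skeleton only - no attributes,
    # rules or load logic yet.
    IF(DimensionExists('Region') = 0);
      DimensionCreate('Region');
    ENDIF;
    IF(DimensionExists('Product') = 0);
      DimensionCreate('Product');
    ENDIF;
    IF(DimensionExists('Sales Measures') = 0);
      DimensionCreate('Sales Measures');
    ENDIF;
    IF(CubeExists('Sales Input') = 0);
      CubeCreate('Sales Input', 'Region', 'Product', 'Sales Measures');
    ENDIF;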

Naming conventions often come into play here, too. I will often turn up to a project and make it my first order of business to distribute a standard naming convention document I have been using for years. You might prefer a different convention, or the customer might mandate one, but the important part is everyone understands it and sticks to it.

I’ve attached a sample document I used on a project years ago.

Revision Control

One concept I find useful for projects with larger teams is revision control.

This is a discipline well known to computer programmers. It not only allows your team to keep a centralized copy of development files (in this case, TM1 server files from your data folder), but also keeps track of all previous versions, who changed them, and why.

The Basics

The idea is to keep a central repository of all files in the project and allow developers to “check out” the files to their local computer and work locally.

Once they have made changes, they can choose to “commit” their files to the repository, or to “revert” the changes they have made. If other developers are also making changes, developers perform an “update” to ensure they have the latest committed files.

It has other benefits too, such as allowing the team to “tag” releases, so they can always get back to a particular version of the project, and allowing an individual developer to “branch” (split) development for experimentation or to create an alternate version.

There are many other features and benefits to using revision control, which you can find by doing a Google search on the topic.

If a revision control system detects that two users have changed the same file, it does not allow one user to overwrite another’s work. It gives a “conflict” notification and allows the users to deal with that conflict.

For text files, you can often “merge” the changes made to both files, or, since revision control tells you who made the conflicting change, you can simply get in contact with the other developer and negotiate which file is correct.

But for TM1?

It may seem counter-intuitive to use a revision control system for TM1, as many of the files are not text-based, but I have found it very useful. Sure, you lose the merge functionality for non-text files, but you can still often perform a WinMerge “diff” and get enough information to work out what has changed and how to resolve it.

When dealing with TM1, you can exclude the TM1 control files from the revision control repository, as most systems have an “ignore” feature. This is important, because TM1 updates these files each time the server is started, so they would otherwise always register as modified.

The main drawback I have found is getting team members to adopt it, as it does require some process and training to be used effectively.

Software Options

The tool we use for Flow development is Subversion, an open-source version control system that supports all the features described above. We run it with the VisualSVN Server front-end, which provides a nice MMC snap-in interface for configuring the server and permissions.

There are also various client front-ends for Subversion. The one we use is TortoiseSVN, which integrates with the Windows shell and provides icon overlays to indicate whether a file is up to date or has modifications.

Conclusion

Using some of these techniques, processes and tools provides a more flexible project environment by making it easier for developers to work on local copies of their models and providing a framework to merge the changes back to the central server.

Of course, many of the free tools in the Flow Toolbox Suite have the same goal and can automate some of these concepts even further, to make TM1 development even easier!

If you haven’t checked out the tools yet, you can get them here.

I hope this has been an interesting and useful discussion. If you have any questions, feel free to comment below.

TM1 Naming Convention Sample.pdf (105.55 kb)

An Item-based Approach – Quick Update

Working on the Flow Model Packager, I often have to make random changes to a model so I can test the model compare and deploy functionality.

Yesterday, I was using the item-based approach sample server as my test case, and decided to try to implement a more real-world change than I usually do (most of the time, I just add and delete things randomly for testing purposes).

This led to the idea of creating a new reporting cube that would ignore Category and Subcategory, but organize payments by “Payment Range”.

Using the item-based approach proved very flexible in this case, as I was able to add a quick rule and feeder to the input cube and create an entirely new cube which would report on information that wasn’t previously captured.
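To give a flavour of the change (the attached model is the authoritative version; the object names below are simplified), the input cube derives the range bucket from the entered amount with a string rule, and the new reporting cube pulls each item’s amount into the matching bucket:

    # PaymentsInput.rux (sketch): derive the range bucket for each item
    ['Payment Range'] = S:
      IF(['Amount'] = 0, '',
      IF(['Amount'] < 100, '0 - 99',
      IF(['Amount'] < 1000, '100 - 999', '1000 +')));

    # PaymentsByRange.rux (sketch): report the amount under the item's bucket
    ['Amount'] = N:
      IF(DB('PaymentsInput', !Version, !Month, !Item, 'Payment Range') @= !PaymentRange,
         DB('PaymentsInput', !Version, !Month, !Item, 'Amount'),
         CONTINUE);

A feeder from the input cube’s Amount into the reporting cube completes the picture; because the bucket itself is rule-derived, a data change can move an item between buckets, which is exactly the feeder re-triggering behaviour I mention below.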

It also brought up some issues regarding re-triggering numeric feeders in TM1 10, which I will cover in an upcoming article.

For now, I thought I’d share the results for comparison — the model is attached. Happy modelling!

StringFeederSampleRange.zip (2.92 mb)

An Item-based Approach to TM1 Model Design

Introduction

James and I have been working together for some time now, and we have evolved toward what I would class as a non-typical design approach in TM1. For the sake of simplicity, I refer to this as an “item-based approach”.

It is very prominent in our implementation of Propel Planning, and underpins a lot of the power of that particular product.

I thought it worthy of some discussion, as it has many advantages, and a few gotchas, that are worth considering in any implementation you might be involved with.

The Approach

The item-based design approach has the goal of allowing data input in a flat, tabular format without giving up the browsability and analytic capability of highly dimensional cubes. It also separates input from reporting in a very elegant fashion.

You begin with an input cube, which should have only the basic dimensions, plus a measures dimension to represent the columns of the input table and an item dimension to represent an arbitrary number of rows.

The measures dimension will include many string elements which map to elements of other dimensions. Thanks to the picklist feature in TM1 9.5+, these entries can even be restricted to valid element names, so invalid input does not occur.

A separate reporting cube is then created that maps the string elements to actual dimensions, for reporting and analysis, usually via rules. This cube has no data entry and populates itself entirely from data in the input cube. You could also use TI to populate such a cube without too much trouble, for implementations that have higher data volumes.
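As a concrete sketch, with invented names throughout: suppose the input cube “PaymentsInput” has dimensions Month, Item and a measures dimension holding a numeric “Amount” plus “Category” and “Subcategory” strings, and the reporting cube “PaymentsReport” has Month, Category, Subcategory, Item and an “Amount” measure. The reporting rule pulls an item’s amount only when that item’s strings match the current intersection:

    # PaymentsReport.rux (sketch)
    SKIPCHECK;

    ['Amount'] = N:
      IF(DB('PaymentsInput', !Month, !Item, 'Category') @= !Category
       & DB('PaymentsInput', !Month, !Item, 'Subcategory') @= !Subcategory,
         DB('PaymentsInput', !Month, !Item, 'Amount'),
         CONTINUE);

Keeping the item dimension in the reporting cube means a simple “All Items” consolidation aggregates across items, so report consumers never need to see the individual item slots.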

I call it item-based because this approach naturally requires an item dimension with arbitrary element names. Most of the time we just call the elements “Item 1”, “Item 2”, etc., up to some maximum. Because this maximum is imposed, it is important that the number of elements in the item dimension does not degrade the efficiency of the model. More about that below.

Advantages

There are many advantages to such an approach.

Data-entry simplicity

New users of TM1 are often uninitiated and, dare I say it, sometimes under-prepared by standard TM1 training courses to appreciate the full advantages of a multi-dimensional database model. No matter what you do, some users have spent far too much time in Excel or Access and simply think in terms of tables.

And why should they bother? Many of these users are simply data contributors, and do not have any interest in performing extensive analysis on their data.

The flat input approach allows such users to contribute their data in a way that makes sense to them.

It also allows them to adjust manual inputs and correct errors without cutting the data from one intersection and pasting it in another, an operation which can be error prone and, let’s face it, slightly buggy in the TM1 Perspectives cube viewer, and difficult in the TM1 Contributor front-end.

Maintainability & Agility

TM1 implementations are naturally agile and flexible. Developers with an IT background, like myself, might fight against this and try to impose strict, inflexible business requirements and a rigid change request process to protect against scope creep, but that really undermines one of TM1’s key advantages in the marketplace: agility.

Imagine a retail sales model, which has Region, Distributor and Product as data points of interest. Sales reps and other users contribute data from the field using their laptops.

In a typical TM1 design, you’d create a cube with Region, Distributor and Product as dimensions. The input form would ask the user to select elements from each of those 3 dimensions and would write the sales/inventory data to the intersection of the elements chosen.

All is good, and the managers and finance staff can browse the cube and get the insight they need.

However, imagine that after months of data has been collected, someone in head office decides they would also like to track the data by Customer Type. The data already exists in the point-of-sale system, as each customer is tracked by the credit and loyalty cards they use when making a purchase.

With your typical design, you don’t have much choice but to redesign from scratch and create a new cube with the additional dimension. You might choose to keep the existing cube for backward compatibility, in which case you’d have two sources of the same data, which could lead to synchronization issues, since the original data is manually contributed by sales reps in the field.

It’s your basic nightmare, and if you were halfway through the implementation, you’d probably tell your customer that it’s a change in scope and that it would have to be left to phase 2.

With an item-based approach, you don’t have these issues. You can take the new data from the POS system, import the Customer Type field via TI (creating the new Customer Type dimension on the fly), then update your reporting cube and rules.
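The on-the-fly dimension maintenance is only a few lines of TI. A sketch with hypothetical names, where vCustomerType stands for the new column in the POS extract:

    # Prolog tab: make sure the new dimension exists.
    IF(DimensionExists('CustomerType') = 0);
      DimensionCreate('CustomerType');
    ENDIF;

    # Metadata tab: add each unseen customer type as a leaf element.
    IF(DIMIX('CustomerType', vCustomerType) = 0);
      DimensionElementInsert('CustomerType', '', vCustomerType, 'N');
    ENDIF;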

Yes, you still have to do some basic redesign, but there is no requirement for a complex and error-prone data migration.

Contributor & Insight-friendly

TM1 Contributor (or “Applications” as it’s now known) and Cognos Insight are great front-end tools for data contribution. They are a little weak, however, when it comes to customizing views to be friendly for the end user. A highly dimensional input cube forces the view designer to choose between unworkably large grids or many laborious title element selectors, which make cutting and pasting data difficult.

A flat, item-based input cube is much simpler to work with, supports multi-level cut and paste, and presents itself in a more logical fashion for quick data input. String values can be typed in as well as selected from a list, then copied down as necessary.

Downsides and Gotchas

Performance

If you’re not careful, this design approach can tempt you into inefficient rules and over-feeding. Performance can suffer with large data volumes.

However, with better design and a clean rule-based approach this can be avoided. Over-feeding is not necessary and rules can be structured logically and efficiently.
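The string feeder pattern is the key. Each input item feeds exactly the one reporting intersection its own strings point at, so only cells that actually hold data are fed. Continuing the hypothetical names from the sketch above:

    # PaymentsInput.rux (sketch)
    FEEDERS;

    # Each entered amount feeds the single reporting cell named by the
    # item's own Category and Subcategory strings; empty items feed nothing.
    ['Amount'] => DB('PaymentsReport', !Month,
        DB('PaymentsInput', !Month, !Item, 'Category'),
        DB('PaymentsInput', !Month, !Item, 'Subcategory'),
        !Item, 'Amount');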

As always, TM1 has its quirks, but once you understand the gotchas associated with this design approach, they are easy to avoid or work around.

I’m planning several follow-up articles that will go through these issues in detail, and how to make sure they don’t have you pulling your hair out.

Complexity

Rules in this design approach can appear more complex and be harder for another developer to understand. I have first-hand experience of handing over such a design to very capable developers and having them screw up their noses and replace my cubes with a more standard TM1 design.

I believe this is partially a cultural issue, as TM1 is taught in a particular way, and that has become accepted as “correct”. Once a developer adjusts to this kind of thinking, it’s actually very difficult to go back!

Obviously, well-formatted rules and code comments can also go a long way toward alleviating this issue.

Limitations

There is a natural limitation imposed by the item-based approach: the number of elements in the item dimension sets a maximum number of “slots” for data input.

To avoid the situation where a user does not have enough “slots” to input their data, a developer might be tempted to include a large number of elements in their item dimension, and, if the rules and feeders are designed poorly, this could lead to poor performance.

However, a well designed cube won’t need a lot of input slots, as the average person is not able to navigate, or even usefully perceive, thousands of rows of data!

In our retail sales example above, there may be many sales items entered, and at first glance it may appear to require a form with thousands of visible items. With a bit of thought, it is usually possible to group input tasks meaningfully so that only a useful number of items need to be shown for the current task. For instance, the sales rep could enter only the data for the particular store they are visiting, as they most likely wouldn’t be entering data for several stores simultaneously.

In any case, a more dimensional approach does not mitigate this problem either!

Conclusion

With a bit of planning and thought, an item-based approach to TM1 development offers many advantages and rewards.

My follow-up articles will be based on a simplified example, which is attached to this post for you to download and examine. The example is built and tested in TM1 v10, with the ForceReevaluationOfFeedersForFedCellsOnDataChange configuration parameter set to “T”.
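For anyone reproducing the setup, that parameter goes in the server’s tm1s.cfg file:

    ForceReevaluationOfFeedersForFedCellsOnDataChange=T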

Feel free to leave comments, criticisms, or unfettered praise in the comments section below!

And yes, that really is the name of the parameter!

StringFeederSample.zip (2.98 mb)