From Agile to Anarchy (and back again)


TM1 development has undergone a subtle evolution over the years.

As a relatively early adopter of the technology (I still occasionally refer to TM1 as an Applix product, and I remember the original version of TM1Web), I’ve watched this evolution with interest, and I feel that what I have been doing with Flow is an attempt to bring that evolution full circle and get back to those early glory days.

In this article, I reminisce, wax lyrical, and take a look at the present and future state of selling and implementing TM1.

Warning: this post may contain heavy doses of nostalgia!

Agile Roots

Anyone else remember a time when the typical TM1 implementation went as follows:

  • Get the client’s general ledger accounts and data
  • Write a few TIs to create the account and cost center structures, and suck in the data
  • Build some Excel reports to show the new data off
  • Hook up a few manual input templates for the tricky stuff
  • Send the users to a TM1 training course
Yes, these were simpler times, times when “Excel on steroids” was the sales pitch and demonstrating the “what-if” feature would get genuine wows from the crowd.

We used to sell in a couple of meetings, build POCs in hours, often while the potential client watched, and put together an entire system in weeks rather than months.

Perhaps we can remember them fondly, even wistfully. But, sadly, it has been a long time since I was involved in a project that simple, and I believe those kinds of projects are, for the most part, behind us.

Now businesses expect a full budgeting and planning application, capable of multiple scenarios and rolling forecasts, and able to collect and collate data from many disparate sources around the globe.

TM1 has evolved somewhat to try to meet these needs, but have we evolved as consultants?

Agile Decline

As the Agile methodology became popular in IT and software development projects, those of us in the TM1 sphere were starting to realize we were becoming less and less agile.

I recall speaking on the phone with the owner of a TM1 distributor, discussing the possibility of working with them. This must have been two or three years ago now. To my surprise, he started talking about sitting with the customer on a time and materials basis, and building the model with them as they watched and participated.

Of course, I said, “you can’t work like that! We’ve got to lock down the requirements in a formal document, perform a technical design to those specifications, and restrict any modification with a formal change request process!”

It was at that point, in the back of my mind, that it hit me how much TM1 development had changed. The willingness to sit down with a customer, discuss their needs, and build a solution had been replaced with a fearful and almost paranoid IT mentality.

I realized that TM1 modelling and development had become as complex as software development projects, and had evolved to encompass the same rigid processes. The very thing that had originally attracted me to TM1 development — the freedom to build a solution without inflexible requirements — was now gone.

The Problem

So how did we lose the Agile edge that used to define and differentiate TM1 implementations? How did we go from a strong customer focus to formalization, obfuscation and general ass-covering?

The answer is simple. TM1 got bigger — and I’m talking bigger in every way.

Firstly, it was acquired by Cognos, then IBM, and was suddenly thrust into the light of big business. No longer was TM1 the surprising underdog. Now it was expected to go head to head with its robust, enterprise-ready big brothers and hold its own.

Correspondingly, TM1 started getting larger, more complex implementations. Pre-sales guys who had once gotten away with using the word “scalable” to mean you could add lots of RAM were now being asked if TM1 could be used in a server farm, collecting data from thousands of disparate data sources across global WANs, to calculate an entire organization’s planning, forecasting and consolidation in near real time.

And as a result of all this, we as TM1 implementors got scared. And those of us with an IT background knew exactly what to do: add more layers of process.

However, TM1 did not have the tools to support the Agile processes we were used to following. Deployment was done by manually copying files. Testing was done by manually clicking a mouse. And demonstrations to the customer were performed sparingly, as they took a great deal of time to set up and present.

Worst of all, providing any kind of workflow for the customer was embarrassingly lacking. Sure, we could fire off external scripts to send out email notifications or SMSes, but the solutions were hardly robust or maintainable.

So we fell back on design and documentation as the crutch to get us through. Write reams of documentation, force the customer to sign off, then quote and build based on what was “agreed”.

The fact that describing a financial model in a generic way was often more difficult than building it was neither here nor there.

Reclaiming Agile

Many old-school TM1 implementors have noticed this change, at least on an instinctive level, and tried to develop processes and methods to get back to the old ways. However, most of these were just band-aid solutions, and fell short of the tools found in other areas of software development.

Watching this with frustration over the past few years led me to take a step back, look at the bigger picture, and think through the problem without letting myself be clouded by prior assumptions.

Flow OLAP is the result of those musings, and we hope that our partners are finding value in the tools and applications we’ve released so far.

However, this is just the tip of the iceberg. Keep giving us your support, and we promise to keep innovating until TM1 development has come full circle, and we can reclaim our Agile glory days!

Hey, I warned you about the nostalgia, didn’t I?

Big Data is like teenage sex

There is a saying doing the rounds in the marketing world at the moment, and it is quite topical for us as an industry:

“Big Data is like teenage sex: everyone talks about it, nobody really knows how to do it, everyone thinks everyone else is doing it, so everyone claims they are doing it.”

I’m not alone in these thoughts; other bloggers have discussed this before, and it always makes for good reading when someone outside of the business intelligence world discusses what they actually see.

If you have been in and around BI for many years, you will remember the transition, the acronyms and the “BI speak” that has constantly been evolving.

Ten years ago very few people in C-level roles had even heard of business intelligence, let alone the term BI, yet we used it anyway. Then there were the cipher codes (acronyms and jargon) we invented by abbreviating all of the terms or inventing new ones. BI we have already mentioned; then there was ETL, BRD, TRD, FPM, CPM, Datamart and many others that we sometimes let slip in front of non-technical clients. Added to this there were also the slogans that we liked to impress people with, such as “one version of the truth” or “real time reporting” and even “self-service reporting”.

However, Big Data has to be right up there with the most creative and befuddling terminologies ever invented. Think back to the first time you heard the term and what it actually meant to you. I pictured the ones and zeros as being far bigger than in normal data: very fat bytes.

I bet many people who read this will recall as much jargon as I have.

So is it technical people who come up with these terms? I doubt that very much; it is undoubtedly us marketing people who invent “sound bytes” that distinguish what we do from the competition. Our industry is not the only one that re-invents terminology: there used to be something called a bureau service, and now that is called “in the cloud”.

What are the most unusual terms you have heard in the IT world?

Self Service versus Self Destruction

For years we have all known that there are many different parts to the BI jigsaw puzzle. Originally it was quite clear that the finance division was the leader when it came to driving a solution for a company. The initial work on these projects produced a static outcome that included information like a general ledger and the accompanying reports it needed. We can even use a uniform cube for each of these models in today’s environment.

The call for “self-service” reporting, or ad-hoc reports, was not as prevalent as it is now. The reason this did not create too much work for developers was that most accountants had a great knowledge of Excel.

The real underlying issue has always been that to be truly gifted in “self-service” reporting you needed to be a “power” user. The number of true business report developers was hampered by this hurdle. However, I don’t believe that this was necessarily a bad thing from a client perspective.

The reason I say that is that we have all heard the term “paralysis by analysis”, and if you give self-service to everyone, that is exactly what you will get.

The loss of man-hours when employees neglect their primary work function in favour of creating mind-numbingly beautiful reports can become horrendous. It is very difficult to strike a safe balance if all this coding work has to be undertaken inside a client’s business.

Therefore, offering a client greater freedom to build reports has to be tempered by the fact that some of their staff will always be willing to spend valuable time to build a better report. Clients need to be made aware of the issues that BI products can cause to their human resources and also need to understand that it is far more efficient to allow a dedicated developer to undertake this for them.

Obviously, the argument will be the cost of an external developer versus the cost of an internal resource. However, you are not only investing in development expertise; you are also getting a person with a wide array of similar projects behind them. Thus, an external resource will always have a greater experience base than someone who is restricted to one company. Internal staff will still have expertise, but they will miss out on the spectrum of experience that a pure consulting company accumulates.

There is also a great argument for static generic reports that give clients close to a full solution, especially ones that allow a little flexibility in the final offering. So next time a client puts “self-service” as their number one aim, just take the time to find out why.

Team preparation should be a Methodology

What is team preparation?

Is preparation for your team part of your strategy, or is it a tactic that needs to be undertaken every time you are about to start a project?

I believe it is all about team collaboration which on a macro scale definitely falls under a strategy. So what do I mean by team preparation and collaboration?

Every time you undertake a project you gain not only valuable experience but also methodologies for undertaking other projects. For example, if you build a Revenue model for a finance area, the basic structure is the same for every similar model at other companies.


Preparation is not isolated from Project Management

Thus, if you decide on a generic model for your projects, you can reuse standard cubes, processes and objects for every Revenue project. In fact, the only thing that realistically needs to change is the naming convention for the new cubes and dimensions. To ensure that all of your team benefits from this experience, you need some sort of centralised area where you keep these universal cubes, along with the documentation that helps people use them.


Your Toolbox

If you are a person who is tasked with implementing a particular project, then taking a fully equipped toolbox can certainly reduce time and increase the chance of customer satisfaction. So what do I mean by toolbox?

1. Almost all projects undertaken in BI end up as a line in the General Ledger somewhere, so always keep in mind that other projects need to dovetail into yours. Things like naming conventions can therefore be critical: developing a generalised naming convention manual will enable you to quickly and successfully standardise names across a business. Remember that this manual should be shown to the client upfront and left on site as part of the documentation; here is an example.

2. The iterative approach really means treating the client as a member of your team. If that is so, then it is essential to deploy each iteration as soon as it is complete. There are a number of good practices to support this, such as standardising server infrastructure on each project. Ensure you have three types of servers: Production, Development and Staging. The staging server is used for UAT and thus won’t interfere with your next iteration; with an Agile approach, UAT does not necessarily need to come at the end of the programme. With the three-server structure you have three “parallel streams” that can run independently, without relying on other tasks being undertaken.

3. Developing some in-house software solutions that enable you to deploy iterations quickly and efficiently can be of great value. These are usually built by people with great experience and let you deploy each step as it is completed. This method will also help you decide which parts of each project to undertake in each step. Don’t be apprehensive about researching online to see if someone else has already developed the tools you need.

4. Make sure your toolbox includes as much uniform documentation as you possibly can. For example, standard BRDs, UAT questionnaires and Technical Requirement Documents will save the BAs and developers hours of work when piecing the project together. Also, many of the projects you undertake will be of a similar nature, so having pre-developed cubes for, say, GL projects (Capex, look-up and Revenue cubes are good examples), grouped by industry, will be a godsend. They will help you run speedy and efficient proofs of concept and will save you mountains of time on implementation. If you can find a way to store these models as objects, and develop your own tools (or buy them) that let you modify them, i.e. quickly change names, you will be way ahead of the curve.
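The “store the model as an object and quickly change names” idea from point 4 can be sketched in a few lines. The snippet below is only a minimal illustration, not a real TM1 deployment tool: it assumes the template model lives in an ordinary folder of files whose names carry a placeholder prefix, and the `Tmpl_`/`Acme_` prefixes in the usage example are invented conventions.

```python
import shutil
from pathlib import Path


def clone_template(template_dir: str, target_dir: str,
                   old_prefix: str, new_prefix: str) -> list:
    """Copy a template model folder into a new client folder,
    renaming every file that carries the placeholder prefix."""
    src, dst = Path(template_dir), Path(target_dir)
    dst.mkdir(parents=True, exist_ok=True)
    renamed = []
    for f in src.iterdir():
        if not f.is_file():
            continue  # skip sub-folders such as log directories
        new_name = f.name.replace(old_prefix, new_prefix)
        shutil.copy2(f, dst / new_name)  # copy2 preserves timestamps
        renamed.append(new_name)
    return sorted(renamed)
```

For instance, calling `clone_template("templates/revenue", "clients/acme", "Tmpl_", "Acme_")` would copy a `Tmpl_Revenue.cub` template out as `Acme_Revenue.cub`. Renaming references inside rule or process files would of course need a similar pass over the file contents as well.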

The last thing is to ensure you have other tools that help you polish your solution: tools to set up email notifications, tools like WinMerge for comparing files and detecting any changes that have been made, and perhaps something like SVN for version control.
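That comparison step can also be scripted rather than eyeballed. The sketch below uses Python’s standard `filecmp` module; diffing a Development data directory against Production is just one illustrative use, and the function name is made up. Note that `dircmp` performs a shallow, stat-based comparison by default, so treat it as a quick change detector rather than a byte-for-byte audit.

```python
import filecmp


def diff_model_dirs(dev_dir: str, prod_dir: str) -> dict:
    """Report which files differ, or exist on only one side, between two
    model data directories: a scripted stand-in for a WinMerge eyeball check."""
    cmp = filecmp.dircmp(dev_dir, prod_dir)
    return {
        "changed": sorted(cmp.diff_files),       # present in both, contents differ
        "only_in_dev": sorted(cmp.left_only),    # new objects awaiting deployment
        "only_in_prod": sorted(cmp.right_only),  # objects missing from development
    }
```

Run before each deployment, a report like this tells you exactly which objects an iteration will touch, which is also a handy sanity check to show the client.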



As you can see, a well-equipped Toolbox will greatly assist your development team, as they will always benefit from all the other projects and experience that the whole company has previously undertaken.