What is Spark Monitor?
So what does Spark Monitor actually do for you and your clients?
First up, it gives you a way to constantly monitor your server resources, including hard drive space, memory and CPU usage. Often these benchmarks are the first indicator that something may have gone wrong with a TM1 server, so they can be very useful metrics.
In addition, Spark combines the information from TM1Top into a neat and easy-to-read chart, with a table showing the current threads and users.
There is also a very useful table that shows the most recent TM1 log entries, so you can monitor any Turbo Integrator processes and their return status. You can even perform advanced searches of the log files like you can in TM1 Perspectives.
Spark provides you with all these features in a hosted online environment, so all you need to get started is a web browser. You can even make the stats available to read-only users in your client’s organization.
InfoCube Spark is currently a hosted-only service, which keeps things simple, but might be a problem for highly secured TM1 servers that can’t get internet access.
How does it work?
Spark Monitor uses a very simple technique to ensure the monitor is constantly displaying up-to-date information.
A monitoring program is installed on the TM1 server and scheduled to run periodically. This program uses the TM1Top and Windows APIs to gather data about the TM1 server and send it to the Spark web application.
The clever part is that this program pushes all the required server information up to the Spark web application (I assume via a web service), rather than having the web application try to pull data down. This means all you need is a regular out-bound internet connection on your TM1 server, while keeping your TM1 model data and structure secure with your regular firewalls and other protections.
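The push approach can be sketched roughly like this (a minimal sketch only: the endpoint URL, payload fields, and server-key handling are my assumptions for illustration, not the actual Spark API):

```python
import json
import shutil
import urllib.request

# Hypothetical endpoint and server key -- the real Spark API is not public,
# so these names are placeholders.
SPARK_ENDPOINT = "https://spark.example.com/api/metrics"
SERVER_KEY = "your-server-key"

def build_payload():
    """Gather a few basic server metrics into a JSON-serializable dict."""
    usage = shutil.disk_usage("/")  # on a Windows TM1 server this would be e.g. "C:\\"
    return {
        "server_key": SERVER_KEY,
        "disk_free_bytes": usage.free,
        "disk_total_bytes": usage.total,
        # The real Monitor also gathers CPU/memory via the Windows API
        # and thread/user info via TM1Top; omitted in this sketch.
    }

def push_metrics():
    """POST the metrics to the web application; outbound HTTP only,
    so no inbound firewall holes are needed on the TM1 server."""
    data = json.dumps(build_payload()).encode("utf-8")
    req = urllib.request.Request(
        SPARK_ENDPOINT,
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

The key design point is the direction of the connection: the server initiates every request, which is why ordinary outbound internet access is all it needs.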
Since Windows Task Scheduler is used to trigger the program to run, you can pretty much schedule it any way you want, to make sure your server is monitored at the frequency you desire.
Setting it up
The instructions for setting up the Spark Monitor are available on the website, but I’ll detail a few gotchas I found here so you don’t fall into the same traps.
Firstly, you need to make sure you register your account with the Spark website and then add a new server. The names you give here are for your own identification purposes only, and do not necessarily need to match the names on the TM1 server (although it would make sense to name them this way).
This will get you an automatically generated “server key” which is the identifier that links the server entry on the Spark web application with the Monitor program installed on the TM1 server.
The Monitor program does not come with an installer, so I simply copied it into the “Program Files (x86)” folder manually. You can pick any location you like, as it will eventually be run automatically by Task Scheduler and you won’t have to worry about it.
If you run the Monitor program once, it will generate a skeleton config.cfg for you with the basic settings you need. I found it left out one setting, tm1s_bin, which I had to add to the file manually. The instructions do a pretty good job of helping you get that set up correctly.
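For reference, the finished file looks something like this (the key names other than tm1s_bin are illustrative guesses on my part; check the official instructions for the exact settings):

```
# Links this Monitor to the server entry on the Spark web application
server_key=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

# Full path to the tm1s.cfg of the instance you want monitored
tm1s_cfg=C:\TM1Data\MyModel\tm1s.cfg

# The setting the generated file left out for me; points to the TM1 bin folder
tm1s_bin=C:\Program Files\Cognos\TM1\bin
```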
If you create your server instances with the Flow Server Manager, like I do, you’ll need to locate the tm1s.cfg file for use with Spark. To do so, just go to “C:\Users\(your username)\AppData\Local\Flow OLAP Solutions\Flow Server Manager\ServerManagerStore” and search for the correct tm1s.cfg in the sub-folders there.
That reminds me to add an “Open tm1s folder” option in the Server Manager to make this sort of thing easier!
Once you’ve got the config.cfg file set up correctly, you can run the program by double-clicking it and test the results. If you log in to Spark and see your data, you’ve been successful.
If not, the best way to troubleshoot the problem is to look at the Monitor.log file that gets created in the program folder. That will usually tell you what’s going wrong and give you an idea what is configured incorrectly.
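If you want to pull the recent errors out of that log quickly, a few lines of script will do (hypothetical: the Monitor.log file name comes from the program, but its line format is whatever the program writes, so adjust the match string to suit):

```python
from pathlib import Path

def last_errors(log_path, n=10, needle="ERROR"):
    """Return the last n lines of the log file that contain the needle string."""
    lines = Path(log_path).read_text(encoding="utf-8", errors="replace").splitlines()
    return [line for line in lines if needle in line][-n:]

# Example (path is illustrative):
# last_errors(r"C:\Program Files (x86)\Spark Monitor\Monitor.log")
```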
Once the program is running correctly and sending information to the Spark web application, all that’s left to do is schedule the program to run periodically. This is very easy if you follow the screenshots on the help page, but I just need to mention one gotcha which cost me an hour of frustration!
Due to this Microsoft bug (or is it a feature?) in Task Scheduler, you need to make sure you don’t include quotes around the “Start In” folder when adding it to Task Scheduler. If you do, you’ll get a very ungraceful failure, with an error code and a help link that points to a missing Microsoft web page!
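In other words, when you fill in the task’s action, it should look something like this (the paths are examples; the point is the quoting):

```
Program/script:       "C:\Program Files (x86)\Spark Monitor\Monitor.exe"
Start in (optional):  C:\Program Files (x86)\Spark Monitor
```

Quotes around the program path are fine, but the “Start in” field must be left unquoted, even though the path contains spaces.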
This is a very useful tool for TM1 administrators and IT departments, and one which will present well in a sales presentation, especially with a technical or IT audience (assuming you can get internet access during the demo).
The functionality is great, and expanding very quickly, as Ben Hill and his team are working on it actively at the moment.
I got in contact with Ben to discuss the product and give him some feedback. He was very responsive and enthusiastic about the product, and when I pointed out a minor security flaw I found in the system, he had it fixed within minutes.
Next on his to-do list is the ability to create “consultant” accounts, which would allow TM1 partners to create and manage server groupings for multiple customers. This would be a great addition, as the majority of Spark users will probably be TM1 partners or consultants with multiple clients.
At Flow, we applaud Ben Hill and InfoCube for this initiative. It’s great to see other companies giving back to the TM1 community for the greater good of the industry, and will ensure they get our full support.
That said, I do have a few items on my feature wish list that would improve the user experience.
First up, the Monitor program could be improved considerably with the addition of an installer and a configuration UI. This would avoid the need for manually copying files, editing configuration details, and messing around with Task Scheduler. Those gotchas I listed above could all have been avoided with an intuitive setup and configuration application.
The Monitor program appears to include an “automatic update” application, but I did not test it, as I could not see any instructions for it on the Spark website. It would certainly be nice to have the program update itself automatically when needed.
On the web application side, a few other features would be highly desirable.
A notification system that emails you when certain triggers are met, such as the server disappearing or memory creeping over a certain percentage, would give the program added depth. If users could subscribe to notifications and even create their own trigger thresholds, all the better.
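Such triggers could be as simple as a threshold table checked against each incoming batch of metrics (entirely hypothetical: Spark offers no such feature today, and the metric names and thresholds below are made up for illustration):

```python
# Hypothetical notification triggers for a monitoring dashboard.
# Metric names and threshold values are invented for this sketch.
TRIGGERS = {
    "memory_pct": 90.0,    # alert when memory use exceeds 90%
    "disk_free_gb": 5.0,   # alert when free disk space drops below 5 GB
}

def fired_alerts(metrics):
    """Compare the latest metrics against the thresholds and list any breaches."""
    alerts = []
    if metrics.get("memory_pct", 0) > TRIGGERS["memory_pct"]:
        alerts.append("memory over threshold")
    if metrics.get("disk_free_gb", float("inf")) < TRIGGERS["disk_free_gb"]:
        alerts.append("disk space low")
    if not metrics.get("server_alive", True):
        alerts.append("server disappeared")
    return alerts
```

Per-user subscriptions would then just be a mapping of users to their own copies of the trigger table.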
I would also like the ability to edit server details once they have been added. Right now, if you want to change something, you have to delete the server, then add it again, which means you get a new server key and have to dig into your Monitor config files again.
And last but not least, I’d like the ability to make the dashboards refresh on a specified schedule, without having to repeatedly click the browser’s refresh button. Even better, the web application could support dynamic (“ajaxified”) screen refresh, so the charts and other dashboard elements could update without reloading the entire page.
Given that the Spark web application already knows how often your Monitor program is configured to update, I would suspect that this functionality is already in the works.
Minor quibbles aside, the InfoCube Spark Monitor is well worth adding to your TM1 bag of tricks.
It’s a completely free service, so why not take advantage of the value it adds for you and your clients?
I’ll leave you with a few screenshots of the application.
And, as always, happy modelling — or in this case, happy monitoring!