
Activity Metrics in MREF/TRIRIGA

We needed to understand what our customers were actually using in the product. Not just "are they logged in?" but what disciplines, what features, what value they're getting from the platform.

That's where activity metrics came in.

What are Activity Metrics?

Maximo Real Estate and Facilities (or MREF), formerly TRIRIGA, is an integrated workplace management system with capabilities across multiple disciplines - space management, reservations, capital projects, maintenance, you name it. But here's the thing: not every customer uses everything. Some use it purely for space planning, others for facilities maintenance, some for everything.

We needed a way to measure usage within each discipline. The goal was to capture metrics that actually indicate system use and value realization for the primary personas in each area. Then sum these up into an aggregate monthly metric - the North Star Metric.

Identifying What to Measure

This was the longest part of the process.

We had to map out every discipline and figure out what metric actually matters for that discipline. For space management, it's space allocations created or updated. For reservations, it's the number of active reservations. For work tasks, it's tasks created and completed.
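The mapping can be sketched as a simple lookup. The keys and descriptions below are illustrative, not the real MREF object or module names:

```python
# A sketch of the discipline-to-metric map we landed on; names are
# illustrative, not the actual MREF data model.
DISCIPLINE_METRICS = {
    "space_management": "space allocations created or updated",
    "reservations": "number of active reservations",
    "work_tasks": "tasks created and completed",
}
```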

The challenge wasn't just picking a discipline. It was picking the right object to query within that discipline.

Take reservations, for example. There's the reservation manager, reservation definitions, reservation instances, events - a whole hierarchy of objects. We had to work with SMEs from each module to understand which object actually represents meaningful usage. You don't want to count every single definition or configuration change. You want to count the actual reservations that people are making.

This took weeks. Getting time with SMEs across all the different disciplines, understanding the data model, making sure we were measuring what actually matters. But it was necessary. Get this wrong, and the metrics are meaningless.

Implementation

Once we knew what to measure, implementation was pretty straightforward.

We already had infrastructure for running SQL queries in the application platform. Made sense to use that rather than building something in the front end. Two reasons: performance and simplicity. We didn't want to add any overhead to the user experience, and backend queries are just easier to maintain.

I wrote some of the SQL queries for the different disciplines. This was pretty fun, actually - getting to dig into the MREF data model across all these different modules. The queries had to be efficient since they'd be running regularly, so there was some optimization work involved.

The SQL itself wasn't complex, but there was a lot of it. Multiple disciplines, each with their own objects and relationships to query. And since this was a relatively low priority item, it got done in between other work.

We also decided to get the total count instead of the delta, just so we could see a total. Here's a simplified example of what we were doing:

SELECT
  COUNT(*) AS ACTIVITY_COUNT 
FROM
  BUILDING_TABLE T
WHERE
  T.MODIFIED_DATE > (SYSDATE - 1);


Multiply that by every discipline, every metric, with all the proper joins and filters, and you get the picture.
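Conceptually, the collection loop looked something like this. This is a Python sketch against SQLite, with made-up table names standing in for the real MREF schema and a parameterized cutoff in place of SYSDATE:

```python
import sqlite3

# Hypothetical per-discipline activity queries. Table and column names
# are illustrative, not the real MREF data model; each counts records
# touched since a cutoff, mirroring the SQL snippet above.
DISCIPLINE_QUERIES = {
    "space": "SELECT COUNT(*) FROM space_allocations WHERE modified_date > :cutoff",
    "reservations": "SELECT COUNT(*) FROM reservations WHERE modified_date > :cutoff",
    "work_tasks": "SELECT COUNT(*) FROM work_tasks WHERE modified_date > :cutoff",
}

def collect_activity(conn, cutoff):
    """Run every discipline's count query, then sum the results
    into the aggregate North Star metric."""
    per_discipline = {}
    for name, sql in DISCIPLINE_QUERIES.items():
        (count,) = conn.execute(sql, {"cutoff": cutoff}).fetchone()
        per_discipline[name] = count
    per_discipline["north_star"] = sum(per_discipline.values())
    return per_discipline
```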

Passing Data to MAS Core

After MAS 9.1, both Maximo and MREF run on the same base engine - MAS Core. Made sense to pipe these metrics there rather than building a separate system.

The data gets aggregated and sent to MAS Core, where it can be analyzed alongside usage data from Maximo's other products. This gives customers a unified view of how they're using their IBM asset management suite.
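The handoff can be pictured as shaping the per-discipline counts into a single monthly payload before it leaves the application. The field names here are assumptions for illustration, not the actual MAS Core ingestion schema:

```python
import json

def build_metrics_payload(product, period, per_discipline):
    """Shape per-discipline activity counts into one monthly payload.

    Field names are illustrative assumptions, not the real MAS Core schema.
    """
    return {
        "product": product,
        "period": period,
        "disciplines": per_discipline,
        "north_star": sum(per_discipline.values()),
    }

payload = build_metrics_payload(
    "MREF", "2025-01",
    {"space": 120, "reservations": 340, "work_tasks": 210},
)
print(json.dumps(payload, indent=2))
```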

It is live now. Customers can see their usage metrics broken down by discipline, track trends over time, and understand which parts of MREF are delivering value to their organization.

The whole process - from understanding requirements to going live - took a few months. Most of that was the planning phase, working with SMEs to get the metrics right. The actual development and implementation was maybe 20% of the total effort.

But that's how it should be. Get the requirements right, and the code writes itself.