Open KPI Store

The Open KPI Store is part of SAP Focused Run. It makes it possible to store any time series of numeric values, together with some identifiers, in one generic table and to display the result in a chart in System Analysis. The sender of these values can be anything from an APM tool (like Dynatrace or Introscope) to a script running on a host, a weather station, or a coffee machine. The sender just needs to send the data in a given JSON format (see Data Format).

Step-by-Step Example

A walkthrough for using the OpenKPI store in the context of a simple example is described in the Step-by-Step guide.

Concepts

Metric / KPI (Key Performance Indicator)

A metric represents one time-based series of numeric values. In most cases it will be based on a continuous stream of time points with a fixed frequency (for example one point per minute). One point in time can only hold one metric value (a number with up to 25 digits before and 6 digits after the decimal point). Uniqueness has to be guaranteed by the sender and is not checked.

A metric can have a unit (20 characters) and a preferred aggregation method (3 characters: MIN, MAX, SUM, AVG, CNT), which is used for any non-time-based aggregation.

The metric description can be up to 128 characters long and should be unique for the sender, since it is used for dashboard selection and alerting. Self-monitoring can be enabled for every metric individually. When self-monitoring is activated, the Open KPI framework triggers an alert if the last incoming data push requests did not provide any data for the specific metric.

A sender can have up to 255 different metrics.

Example: file size in MB with the dimension file name. The metric aggregation is SUM, so the metric value is aggregated as a sum over all maintained file names.
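
As a hypothetical illustration (Python, not part of the payload format): at one point in time the sender reports one file-size value per file name, and the SUM aggregation adds these values up whenever the result is not split by the dimension.

# Hypothetical illustration of the SUM aggregation over the dimension "file name":
# one file-size value (in MB) per dimension value at the same timestamp.
file_sizes = {"/usr/data.log": 120.0, "/var/archive.zip": 80.0}

aggregated = sum(file_sizes.values())  # SUM over all file names -> 200.0 MB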

Dimension

In addition to the time dependency, a metric can have up to 18 character-based dimensions. The metric value for one point in time and one dimension value tuple must be unique. This uniqueness has to be guaranteed by the sender and is not checked.

The dimension description can be up to 128 characters long.

Dimensions are unique to the sender, not to one particular metric! If a sender has multiple metrics, they all share the same dimensions. Dimensions are also independent from each other.

The first dimension value can be up to 512 characters long; every further dimension value can be up to 128 characters long.

Sender

A sender represents one generic data storage within the Open KPI Store. It can be interpreted as one monitored object that has exactly one sender type; compared to system monitoring, the sender would then correspond to one particular technical system like ABCADM~ABAP. The sender could also be interpreted as a group of similar metrics with similar dimensions. In that case the monitored objects could be modeled as the first dimension: for example, multiple hosts could send their data to the same sender, with different values for the first dimension.

A sender has a 32 character GUID (Globally Unique Identifier) that is chosen by the sender itself. It is highly recommended to use a randomized alphanumeric string for this, so that it is unlikely that multiple senders feed the same data storage by accident. The sender also has a 128 character description and a 40 character sender type. The sender type groups similar senders that feature common attributes. A sender can have up to 5 attributes (128 characters each). The labels of these attributes can be maintained per sender type (using the configuration menu from within the System Analysis UI).

Reasonable attributes might describe the location (data center, geographic location like country, region, city) or the technical details of a device (brand, manufacturer, model name).

When comparing to system monitoring, the sender type can be mapped to a technical system type like "ABAP", "JAVA", or "HANA" to categorize the monitoring objects. A sender can then be interpreted as an instance of that sender type.

It is recommended to model a sender type in such a way that the metrics and dimensions of the senders within this sender type are the same or at least comparable. If different senders deliver metrics with the same name but different semantics, the end user has to account for this.

Housekeeping

The Open KPI Store is covered by the RCA housekeeping job SAP_FRN_RCA_HOUSEKEEPER, which partitions the data and log database tables. It also deletes any data entries or logs that are older than 30 days. This can be customized using the SM30 view rca_hkconfig.

Sender meta data will not be cleaned up automatically.

Data Format

Data must be sent in POST request bodies to the REST endpoint https://<frunhost>:<httpport>/sap/bc/rest/rca_gs. Replace the placeholders <frunhost> and <httpport> with the actual values, e.g. copied from the FRUN launchpad URL. The request must authenticate with valid user credentials. The authorization SR_SYSANA with activity "01 - Create" must be assigned to this user. The POST body with content type application/json has the following format:

  • The POST body for the Open KPI Store consists of an array of JSON objects. Each JSON object corresponds to exactly one sender and has three parts: 
    • guid: Each sender must send a unique identifier. This can be a 32 character GUID or a 32 character hash built from fields that uniquely identify the sender, e.g. a hash of the host name concatenated with further values. This value is handled as the key for the sender and is not displayed in the UI.
    • meta: the meta data describing the sender and the metric data. After initial creation the meta data part is optional. Any change to the meta data can result in inconsistent data!
    • data: The numerical time series data points

The meta sub object contains the following properties. Properties marked with * are mandatory for sender creation:

  • description*: human readable description or name of the sender, e.g. a host name
  • type*: classification of the sender that can be used for logical grouping and filtering, e.g. "Host"
  • dataencoding*: Encoding for dimension values. One of STRING (encoded as specified in the request content type), UTF-8, or BASE64.
  • attributes: optional set of attribute values of a sender, e.g. IP address, geographic location properties. These attributes can be used for filtering senders in the scope selection. The attribute names can be maintained separately in the Open KPI Store settings of the System Analysis application.
  • metrics*: enumeration of all metrics that this sender provides. At least one metric must be provided, and up to 255 metrics are supported. For each metric the following properties are specified:
    • description: human readable metric name. Should be unique for all senders of the same type.
    • unit: The unit of measure to be displayed in the UI (max 20 chars).
    • aggregation: How to aggregate metrics cross sender and time. Possible values: AVG, SUM, MIN, MAX, CNT. Default is SUM if omitted.
    • selfmonactive: Enables self-monitoring for the metric if set to the value "X".
  • dimensions: list of dimensions that apply to the metrics. Up to 18 dimensions are supported. In the case of host-related data this could be e.g. file system name or network interface name.

The data property contains a nested array of the actual data points. Each data point is an array of the following values:

  • timestamp in the format YYYYMMDDHHMMSS
  • One to n metric values corresponding to the metrics specified in the meta.metrics section. The number of metric values must exactly match the number of metrics specified in meta.metrics. The metric values must be ordered in the same way as in the meta data, i.e. the metrics are identified by position.
  • Zero to n dimension values corresponding to the dimensions specified in the meta.dimensions section. The number of dimension values must exactly match the number of dimensions specified in meta.dimensions.

The following constraints apply:

  • Similar senders should use the same type. This allows a better selection in the UI later.
  • If the meta data is missing and the GUID is unknown, the request will return an error message!
  • Since the POST request body uses JSON format, JSON special characters (",:{}[]''=) are not permitted in descriptions and values. If the data can contain such characters, it has to be sent encoded; possible values for the data encoding are UTF-8, BASE64, and STRING (see the encoding sketch after this list).
  • Metric and dimension descriptions are position-based. The same applies to the attributes.
  • Data for multiple senders can be pushed in a single call: The outermost JSON array then contains one JSON object for each sender.
  • The data is list/array based. One data line is a list of the values for the time stamp, followed by the metric values, followed by the dimension values. The value for a metric can be empty, specified as ""; in that case it is not interpreted. An initial metric value should be "0". If, for a specific time stamp and combination of dimension values, a metric value cannot be given or the dimension values are not applicable to a metric, then the position of this metric value must be filled with "". The position of this metric must not be skipped completely. For example, if you specified 20 metrics in the meta section but in the current request only want to report the value for metric 17, the data point would look like 
    ["timestamp1", "", ..., "", "val17", "", ..., "", "dim1", "dim2", "dim3"].
  • If a data line cannot be interpreted (for example the metric value is not a number) it will be ignored.
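
If dimension values can contain such characters, one option is to Base64-encode them before sending and to declare BASE64 as the data encoding in the meta section. The following is a minimal sketch in Python; the assumption that the server expects standard Base64 over the UTF-8 bytes of each value should be verified for your setup.

import base64

def encode_dimension(value: str) -> str:
    # Assumption: standard Base64 over the UTF-8 bytes of the value when the
    # data encoding is set to BASE64.
    return base64.b64encode(value.encode("utf-8")).decode("ascii")

# Example: a file system path containing JSON special characters
safe_value = encode_dimension('/data/{archive}/"old"')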

POST Request Structure

[
{ "guid":"12345678901234567890123456789012",
    "meta":    
            { "description":"Sender description",
               "type":"Sender type",
               "dataEncoding":"STRING",
               "attributes":[{"value":"Attribute1 value"} {"value":"Attribute2 value"}],
               "metrics":[{"description":"Metric1","unit":"MB","aggregation":"SUM"},
                                    {"description":"Metric2","unit":"°C","aggregation":"AVG"}],
               "dimensions":[{"description":"Dimension1"}, 
                {"description":"Dimension2"}]
            },
  "data":[
       ["timestamp1","Metric1 value","Metric2 value","Dimensions1 value","Dimension2 value"],
       ["timestamp1","Metric1 value","Metric2 value","Dimensions1 value","Dimension2 value2"],
       ["timestamp1","Metric1 value","Metric2 value","Dimensions1 value2","Dimension2 value1"],
       ["timestamp1","Metric1 value","Metric2 value","Dimensions1 value2","Dimension2 value2"],
    ...
      ["timestamp2","Metric1 value","Metric2 value","Dimensions1 value","Dimension2 value"],
    ...}
]
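
For reference, the following is a minimal client sketch in Python (using the requests library) that builds such a payload and pushes it to the REST endpoint described above. The host, port, and credentials are placeholders; basic authentication and UTC timestamps are assumptions that may need to be adapted to your landscape.

# Minimal sketch, not an official client. Assumptions: basic authentication,
# UTC timestamps, and the placeholder host/port/credentials below.
from datetime import datetime, timezone
import requests

FRUN_URL = "https://<frunhost>:<httpport>/sap/bc/rest/rca_gs"  # replace the placeholders

timestamp = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S")  # YYYYMMDDHHMMSS

payload = [{
    "guid": "12345678901234567890123456789012",
    "meta": {
        "description": "Sender description",
        "type": "Sender type",
        "dataEncoding": "STRING",
        "attributes": [{"value": "Attribute1 value"}, {"value": "Attribute2 value"}],
        "metrics": [
            {"description": "Metric1", "unit": "MB", "aggregation": "SUM"},
            {"description": "Metric2", "unit": "°C", "aggregation": "AVG"},
        ],
        "dimensions": [{"description": "Dimension1"}, {"description": "Dimension2"}],
    },
    # One data line: timestamp, metric values (by position), dimension values (by position).
    "data": [[timestamp, "42", "21.5", "Dimension1 value", "Dimension2 value"]],
}]

# The user needs the authorization SR_SYSANA with activity "01 - Create".
response = requests.post(FRUN_URL, json=payload, auth=("<user>", "<password>"))
response.raise_for_status()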


Examples

Example 1: POST request for host-related data. Note that the pseudo-comments are just for illustration and would not be valid in the actual JSON payload.

Example 1

[{
 "guid": "57b03fd8345531de708430494a9e1629",
 "meta": {
      "description": "myhost1",
      "type": "Host",
      "dataEncoding": "STRING",
       "attributes": [{
             "value":"10.20.30.40" // IP address
        },{
             "value":"Linux" // Operating System
        },{
             "value":"dc-eu-1" // data center
       }],
       "metrics": [{
           "description": "CPU Usage","unit": "%"
        },{
           "description": "Disk space available","unit": "%"
        }],
       "dimensions": [{
             "description": "File System"
        }]
 },
 "data": [
  ["20190116071900", "55.75", "", ""], // metric value 1 = CPU Usage, metric value 2 empty, dimension empty since not applicable for CPU
  ["20190116071900", "", "95.0", "/usr"], // metric value 1 empty, metric value 2 for file system /usr
  ["20190116071900", "", "95.0", "/var"], // metric value 1 empty, metric value 2 for file system /var
  ["20190116072000", "80.34", "", ""] // next time period
]
}]



Example 2: POST request for data from weather stations. Here it was decided to also add the country code and station name as dimensions of the metrics. This is not mandatory; the designer of the data collector can decide which modeling fits best.

Example 2

[{

"guid": "265FBF6CABF91ED2F3B7DCFEC3AE42BC",
"meta": {
    "description": "Frankfurt",
    "type": "Weather",
    "dataEncoding": "STRING",
    "attributes": [{
        "value": "DE"
    }, {
        "value": "Frankfurt"
    }],
    "metrics": [{
        "description": "Temperature","unit": "°C","aggregation": "AVG"
    }, {
        "description": "Humidity","unit": "%","aggregation": "AVG"
    }, {
        "description": "WindSpeed","unit": "km/h","aggregation": "AVG"
    }, {
        "description": "Barometric Pressure","unit": "hPa","aggregation": "AVG"
    }],
    "dimensions": [{
        "description": "Country Code"
    }, {
        "description": "Station Name"
    }]
    },
    "data": [
        ["20190606133000","18","59","04","1013","DE","Frankfurt"]
    ]
}]

Validator

The Open KPI validator serves two different purposes:

  • Act as a prototyping tool to compose example OpenKPI payload without the need to work with JSON files. Instead, OpenKPI payload can be defined interactively via SAPUI5 controls. Existing metadata (attributes, etc.) is also used to assist with entering data.
  • Act as a validation and troubleshooting tool for existing JSON strings with OpenKPI payload. It helps to identify and correct mistakes by pointing out any deviations from the expected structure.

In both cases it is possible to directly send OpenKPI payload to FRUN using the current user account.

Compose OpenKPI Payload

Use the Create JSON Configuration tab to visually compose valid OpenKPI payload.

First click the + to create a new sender. You will see warnings whenever the information in the current object is incomplete. Expand the newly created sender via >. Choose a Type from the dropdown list. Fill in a sender name in the Description field. You can accept the GUID for the sender object or fill in any other unique identifier. For the Encoding you will most likely want to stay with UTF-8 or STRING.

Next, fill in attributes for this sender: use + in the Attributes line to create lines for attribute values. If you have maintained attribute names in the configuration, they will be displayed here and you can fill in the attribute values for the current concrete sender. A warning is issued if the number of attribute values for the sender does not match the number of names defined.

In the subsequent sections, list the metrics that will be reported for the sender and, optionally, the dimensions. Again, create new lines via +.

In the last section you can create actual data points. Based on the meta data of the previous sections the metrics and dimensions will be shown.

The bottom part of the screen shows the JSON payload corresponding to the data that was composed visually step by step. You can use the Upload button to send the payload directly to your Focused Run system, using the current user account.

Validate OpenKPI Payload

Use the tab Validate JSON Configuration to parse and inspect existing JSON payload.

You can paste JSON payload into the input area. Any errors are indicated by warnings at the top. Any valid OpenKPI senders are displayed as a list below the input area. You can use the beautify button to pretty-print the JSON text and the upload button to send the JSON directly to Focused Run.

Alerting

There are two flavors of alerts available for the OpenKPI store:

  • Self-monitoring alerts
  • Metric alerts

Self-monitoring Alerts

Self-monitoring alerts notify you when the OpenKPI data volume changes substantially (it drops or increases) or when one of the sender type limits is hit. They are activated automatically. In addition, you can activate self-monitoring alerts for cases where data points fail to arrive for an expected metric. For this purpose, set the flag “selfmonactive” for the metric in the JSON payload.
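
For example, a metric entry in the meta section with self-monitoring enabled could look like the following sketch (shown here as a Python dictionary; the surrounding payload is as described in the Data Format section):

# Sketch of a meta.metrics entry with self-monitoring enabled via "selfmonactive": "X".
metric_with_selfmon = {
    "description": "CPU Usage",
    "unit": "%",
    "aggregation": "AVG",
    "selfmonactive": "X",
}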

Metric Alerts

Alerting on metrics values is available starting with Focused Run 3.0 SP00.

How to Configure Metric Alerts

In the System Analysis application open the configuration panel via the header button. Open the alert configuration dialog via the “warning” icon in the OpenKPI Settings area.

Click the “Create New Alert” button to start defining a new alert.

The overview tab offers the following options:

  • Description: plain text description of the alert
  • Sender Type: Choose an existing sender type for which to define alerts
  • Active: Use this toggle to enable/disable alerts. Alerts will only arrive in the alert inbox if the alert is active here.
  • Documentation: Additional text that shows up in the documentation tab of the alert inbox
  • Namespace: LMDB namespace (optional)
  • Type (worst case / best case): choose when to trigger an alert: if the first sender hits a threshold (best case), or if all senders hit a threshold
  • Severity: 0 to 9
  • Notification Variant: You can either activate the checkbox to use the notification variant that is globally configured as default for Advanced Root Cause Analysis in the Alert Management configuration or explicitly select a variant for this alert.
  • Outbound Connector Variant: You can either activate the checkbox to use the outbound connector variant that is globally configured as default for Advanced Root Cause Analysis in the Alert Management configuration or explicitly select a variant for this alert.

Next, navigate to tab “Metrics” in the alert configuration dialog.

Click “Add New Metrics” to select metrics for the alert: you will get a dialog listing all metrics defined for the selected sender type. Choose the metrics for which you want alerting and confirm the metric selection dialog with “OK”. The metrics are then listed in the metrics table. For each metric, use the “>” in the table row to navigate to the detailed definition of the metric. You can define:

  • Description: shows up as metric name in the Alert Inbox
  • Unit: unit of the metric (currently does not show up in the Alert inbox)
  • Direction (Above or equal to threshold / below or equal to threshold): above or equal to threshold means that increasing values crossing a threshold will trigger alerts
  • Type (green/red or green/yellow/red, with or without hysteresis, or info only): Depending on the alert type you can define different thresholds

Next, navigate to tab “Scope” in the alert configuration dialog.

Click “Add new scope elements” to select sender objects for which this alert should be considered. Via the check box “select senders dynamically” you can decide:

  • Either explicitly pick the senders from the table in the sender selection (check box not active).
  • Or dynamically select senders based on patterns in the filter criteria (check box active, use * as wildcard anywhere). In this case explicit selection in the table is disabled. This option also covers senders that do not yet exist in the store.

Finally, click “Save” to persist the alert definition.

Alerts will arrive in the Focused Run Alert Inbox (Alert Management on the Focused Run Launchpad) with the Object Type “OpenKPI Sender”.

ABAP Reports

Sender Deletion

The ABAP report RCA_GS_SENDER_DELETION enables the user to delete senders from the Open KPI Store. This includes meta information, metric data and logs. It is irreversible.

  1. Use transaction SE38 to execute program RCA_GS_SENDER_DELETION
  2. Enter the unique sender GUID you want to delete. The F4 Help offers you an easy way to select one or multiple senders from the system.
  3. Hit F8 to start the deletion.

Please make sure that the collector of the sender is disabled as well. As long as the collector is sending data and meta information to the Open KPI inbound processing, the sender will reappear in the scope selection.


Dynatrace Collector

The ABAP report RCA_GS_DYNATRACE enables the user to pull Dynatrace data into the Open KPI Store without creating a dedicated collector. It is mainly meant to be executed as a background job.


After entering the Dynatrace URL, the environment UUID, the name, and the access token, the user can click the "Load Metric Definition" button. This triggers the retrieval of the metric catalog from the Dynatrace environment, which is also used for the Dynatrace metrics selection F4 Help.

In addition to the required fields, the user can also provide a tag defined in Dynatrace to narrow down the object selection on the Dynatrace side. Dynatrace tags can be used, e.g., to map hosts and processes to Cloud Foundry spaces.

After loading the metric catalog, the user can use the F4 Help to select each metric to be pulled from the given Dynatrace environment.


Besides the available metric ID and description, the F4 Help also shows the sender type this metric will be put into.

Attention: After selecting a subset of metrics here and executing the program as a background job, it is not possible to change the selection for this environment, tag, and sender type anymore. Any change to the metric selection will result in data inconsistency! We recommend deleting any sender of the sender type in question before changing the metric subset.

To maintain multiple environments and/or tags, create one background job per environment and tag combination.

Dynatrace collector results:

  • For every maintained metric set a new sender type will appear (for example "Dynatrace - Host" for above selection)
  • Based on the data that is coming from the Dynatrace environment a dynamic number of senders will be created in the scope selection (one for each host in our example)
  • For easier selection each sender will have the environment name and tag as an attribute value attached to it
  • Senders in the scope selection are unique for the environment UUID, tag and sender type combination to ensure data consistency.
  • Each sender within a Dynatrace sender type features the same metrics and dimensions.