System Analysis Aggregation

System Analysis Aggregation allows you to retain selected metric data from the System Monitoring (UDM) store beyond the standard retention time of the System Monitoring store. The metric data to retain is selected via variants. Based on these variants, a daily task in Expert Scheduling Management collects and aggregates data from the System Monitoring store and persists it in a dedicated store.

System Analysis Aggregation supports two different use cases out of the box:

  1. Persistence of aggregated data in dedicated database tables in SAP Focused Run. System Analysis Aggregation takes care of housekeeping, eliminating data beyond the configured retention times. SAP Focused Run applications like System Analysis and the System Monitoring metric monitor are aware of the aggregated store and transparently switch between the UDM store and the aggregated store, depending on the time ranges selected for display.
  2. Export of aggregated data into comma-separated value (CSV) files in the file system on the application server. In this case, System Analysis Aggregation does not take care of housekeeping for the generated files, and the data in these files is not available to SAP Focused Run applications for display. You can use this, for example, to export data of the previous work day for consumption by external tools.

Note: The terms “System Analysis Configuration” and “System Analysis Aggregation” are used interchangeably in this document; both refer to the same application.

Features

  • Configuration of Variants

Simple-to-use variant configuration, which allows users to maintain the systems, metrics, target granularity, and aggregation store for the aggregation runs.

  • Standard Aggregation

Provides aggregation functionality according to the configured variants; runs once daily.

Target Group

The target groups are service providers and customers who need to retain metric data beyond the standard retention time of the System Monitoring store, for example for long-term reporting and analysis.

Release Notes

General Availability with SAP Focused Run 1.0 FP01. The Release Notes are available in the SAP Help Portal of SAP Focused Run.

SAP Focused Run 1.0 FP01

  • ABAP report program for the configuration of systems and metrics
  • Automatic creation of task in Expert Scheduling Management

SAP Focused Run 1.0 FP02

  • A new configuration UI for the maintenance of variants
  • The Expert Scheduling Management task can be created from System Analysis Aggregation
  • Concept of variants introduced, which allows users to maintain the systems, metrics, target granularity, and aggregation store for the aggregation runs
  • Overlap time / Lookback time introduced to define the volume of overlapping data allowed between raw and aggregated data.

SAP Focused Run 2.0 FP01

  • It is possible to activate / deactivate variants one at a time
  • Jump from System Analysis Aggregation to Expert Scheduling Management cockpit
  • The last run timestamp and status of the housekeeping job and Expert Scheduling Management tasks are shown in the configuration UI

SAP Focused Run 2.0 FP02

  • Aggregated interval is aligned to a complete day.
  • Task schedule is fixed. The task can run only once a day, and the Schedule Immediately option is removed.

Setup

Prerequisites

The Expert Scheduling Management application is set up. Refer to the “Expert Scheduling Management Cockpit” application under the SAP Focused Run Expert Portal.

Setup

Refer to the following sections in the master guide of SAP Focused Run to set up System Analysis Aggregation:

  1. “Advanced System Management: Preparing Use”: Execute task list “SAP_FRUN_SETUP_USECASE” with variant “SAP&FRUN_ASM” to activate the SICF service for the System Analysis Aggregation application.
  2. “Background Jobs for ASM” to configure housekeeping for System Analysis Aggregation.
  3. “System Analysis Setup” for the maintenance of the task in Expert Scheduling Management.

Configuration

Note: The configuration steps below are shown as of SAP Focused Run 2.0 FP02.

You can configure the aggregation of System Monitoring data from the System Analysis Configuration application, available under Advanced Root Cause Analysis.

Maintain Task in Expert Scheduling Management

If the task has not yet been created in Expert Scheduling Management, a dialog to maintain the task pops up by default when you launch the System Analysis Configuration application. The log store and collection timestamp can also be maintained here.

  1. The time schedule period is 1440 minutes (24 hours).
  2. Once the task is created, further changes to this configuration do not take effect.

Variant Maintenance

Note: The screenshots are taken from SAP Focused Run 2.0 FP02.

You need to maintain the following attributes for a variant:

1  Name
2  Description
3  Overlap Period / Lookback time (in days)

  • This is the overlap of the MAI store with the aggregate store. For a MAI store retention time of 28 days and a lookback time of 2 days, the aggregation task gathers data that is 26 days old.
  • This is currently used for the first run of the task only. After the first run, only the following day of data is aggregated (for new and old variants), and the start timestamp of the aggregation is calculated based on the last successful run.
  • The resulting data gap needs to be filled with the manual report. See section Manual Report.
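The lookback calculation above can be sketched as follows. This is a minimal illustration of the date arithmetic only, not the actual task logic; the function name is made up:

```python
from datetime import date, timedelta

def first_run_start(today, udm_retention_days=28, lookback_days=2):
    # Illustration only: the first task run starts aggregating data that is
    # (retention - lookback) days old, so the aggregate store overlaps the
    # UDM store by `lookback_days` days.
    return today - timedelta(days=udm_retention_days - lookback_days)

print(first_run_start(date(2019, 9, 9)))  # data that is 26 days old
```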

4  Retention Period (in days): After this period, data is eliminated from the aggregate store

5  Aggregation Granularity

   a. 5 minutes
   b. 15 minutes
   c. Hourly
   d. Daily

6  Customer Network (if left empty, all customer networks are considered)

7  Data Store (at least one is mandatory): “Standard Aggregate Table Store” is the only option supported for consumption in the System Analysis application.

8  Technical System (at least one is mandatory)

   a. Data Center
   b. System Type
   c. IT Admin Role
   d. Lifecycle Status
   e. Extended System ID

9  Metric (at least one is mandatory)

   a. Metric Category
   b. Metric Name
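The aggregation granularity above determines the time-bucket size into which raw metric readings are condensed. As a rough sketch of the idea (illustrative Python, not the actual ABAP implementation; averaging is assumed as the aggregation function):

```python
from collections import defaultdict
from datetime import datetime

def bucket_start(ts, granularity_minutes):
    # Truncate a timestamp to the start of its granularity interval.
    minutes = ts.hour * 60 + ts.minute
    start = minutes - minutes % granularity_minutes
    return ts.replace(hour=start // 60, minute=start % 60,
                      second=0, microsecond=0)

def aggregate(readings, granularity_minutes=60):
    # Average raw (timestamp, value) readings per bucket, e.g. hourly
    # averages for "Hourly" granularity; 1440 minutes yields daily buckets.
    buckets = defaultdict(list)
    for ts, value in readings:
        buckets[bucket_start(ts, granularity_minutes)].append(value)
    return {b: sum(v) / len(v) for b, v in buckets.items()}
```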

You can perform the following actions on variants in the System Analysis Configuration application.

Create: You can create a variant by providing the relevant information. By default, the variant is active.

Edit: You can edit a variant by providing the relevant information. The name of the variant cannot be changed. The variant retains its active/inactive status.

Delete: You can delete the variant from the configuration.

Calculate duplicates: You can check for duplicates across variants, that is, whether Extended System IDs and Metrics are duplicated across variants.

Activate: You can change the state of the variant to active. Only active variants are considered for aggregation during the task run.

Deactivate: You can change the state of the variant to inactive. Inactive variants are not considered for aggregation during the task run.

Example: Variant for Aggregation to SAP Focused Run Aggregated Store

Example configuration: You want to aggregate all performance metrics of all customer networks for productive ABAP systems and retain data for one year. Granularity of aggregated data should be one hour. Overlap between System Monitoring raw (UDM) data and aggregated data should be minimal.

Suggested configuration:

  • Name: “Productive ABAP performance hourly”
  • Description: “1-year history for ABAP performance metrics”
  • Overlap Period: “2” – this minimizes the overlap between the UDM store and the aggregated store
  • Retention period: “365” – this will configure housekeeping to cover a period of 365 days with the aggregate store. Data beyond this period will be eliminated from the aggregated store
  • Aggregation Granularity: “hourly”
  • Customer Network: (can remain empty)
  • Data Store: “Standard Aggregate Table Store”
  • Technical System / System Type: “ABAP”
  • Technical System / IT Admin Role: “Productive”
  • Metric / Metric Category: “Performance”

Notes: The overlap period of two days (between the UDM store and the aggregate store) means that the oldest data of the UDM store is aggregated. For a default UDM store retention time of 28 days this means, e.g., that on September 9 the data of August 14 (26 days old) is aggregated. Since the aggregate store is configured to cover a period of 365 days, both stores in combination cover:

(UDM store retention) + (aggregated store retention) – (overlap) = 28 + 365 – 2 = 391 days

In System Analysis, you can therefore see 391 days of data in this example (data from both the System Monitoring store and the System Analysis Aggregation store).
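The combined coverage can be verified with simple arithmetic, using the values from this example:

```python
udm_retention = 28         # days of raw data kept in the System Monitoring store
aggregate_retention = 365  # retention period of the aggregate store
overlap = 2                # configured overlap / lookback period

total_days = udm_retention + aggregate_retention - overlap
print(total_days)  # 391
```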


Example: Variant for Export to CSV Files

Example configuration: You want to export all exception metrics for HANA systems to CSV files daily for the preceding business day.

Suggested configuration

  • Name: “HANA Exceptions daily”
  • Description: “HANA Exceptions for export”
  • Overlap Period: “28” – this ensures that the most recent data from yesterday is exported.
  • Retention period: “0” – Not relevant for CSV export
  • Aggregation Granularity: “daily”
  • Customer Network: (can remain empty)
  • Data Store: “Save in Excel”
  • Technical System / System Type: “HANA”
  • Metric / Metric Category: “Exception”

The destination folder for the generated CSV files can be configured via variable SYSANA_REPORT_DIR in transaction SFILE. The user executing the export (typically FRN_BTC_SRA) needs the role “SAP_FRN_AAD_SYA_ALL” to write to the file system.
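An external consumer of the exported files might look like the following sketch. The directory path and column layout here are assumptions for illustration; the actual CSV schema of the export is not documented in this section:

```python
import csv
import glob
import os

def load_exports(export_dir):
    # Read all CSV files from the export directory (the real path is
    # resolved from variable SYSANA_REPORT_DIR in transaction SFILE).
    rows = []
    for path in sorted(glob.glob(os.path.join(export_dir, "*.csv"))):
        with open(path, newline="") as f:
            rows.extend(csv.DictReader(f))
    return rows
```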

Recommendation

In the above example, the initial run covers 28 days of data aggregation, which is then exported to CSV. To avoid performance issues when the number of systems is large, it is recommended to start with a small metric set for the initial aggregation and to increase the number of metrics in the variant for subsequent task runs.

Validation

The status of the aggregation task needs to be monitored regularly in the “Expert Scheduling Management cockpit” application under “Infrastructure Administration”. Select the “System Analysis” task to see the status.


It is also possible to jump into the task directly from System Analysis Configuration application.

Manual Report

Run the report program manually to aggregate data in chunks when the overlap period and the number of systems are large. Otherwise, there is a risk that the first task run takes a long time or is cancelled due to performance constraints. Run the report once for each day of the overlap period, preferably in the background.

Example: If the overlap period is 28 days, it is recommended to run the report program in the background 28 times, once for each day of raw data in the System Monitoring store.
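Generating the per-day time ranges for these manual runs can be sketched like this. The end-of-day timestamp 235959 is an assumption; adjust the ranges to your needs:

```python
from datetime import date, timedelta

def daily_ranges(first_day, days):
    # One (start, end) pair in YYYYMMDDhhmmss format per day of the
    # overlap period, for use as input to SYSANA_AGGR_EXECUTION.
    for offset in range(days):
        d = first_day + timedelta(days=offset)
        yield d.strftime("%Y%m%d") + "000000", d.strftime("%Y%m%d") + "235959"

for start, end in daily_ranges(date(2019, 6, 3), 28):
    print(start, end)
```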

Pre-requisites

1. Expert Scheduling Management is already configured.
2. The System Analysis Configuration task is created, and at least one variant is created through “System Analysis Configuration”.

Pre-run

1. On the Home page, navigate to the “Expert Scheduling Management cockpit”.
2. Select the “System Analysis” task, click “Manage” and then “Deactivate”.
3. In the column “Active Status”, confirm that the status of the “System Analysis” task is “Task Deactivated”.

Run the report

1. Execute the program “SYSANA_AGGR_EXECUTION”.

2. SYSANA_AGGR_EXECUTION

   a. Input

      i. Variant Name (assumption: there are no duplicates maintained in System Analysis Configuration)
      ii. Start Timestamp (example: 20190630000000; YYYYMMDDhhmmss)
      iii. End Timestamp (example: 20190630000000; YYYYMMDDhhmmss)

   b. Output

      i. Records aggregated

3. Run the report in the background as many times as there are days in the overlap period. If there is no data for a specific day in the System Monitoring store, no aggregation is done.

4. Verification: You can check the table “SYSANA_AGGR_DATA” for aggregated data.

Post-run

1. On the Home page, navigate to the “Expert Scheduling Management cockpit”.
2. Select the “System Analysis” task, click “Manage” and then “Activate”.
3. In the column “Active Status”, confirm that the status of the “System Analysis” task is “Task Activated”.

How to find the oldest timestamp in the System Monitoring data store

Option 1: Check the report program “MAI_UDM_STORE_PARTITIONING” for the maintained partition settings and calculate the oldest timestamp in the System Monitoring data store.

Option 2: Use the class method “CL_SYA_MAI_UTILS->GET_UDM_STORE_MIN_RAW” in transaction SE24 to find the oldest timestamp in the System Monitoring data store.
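For a known retention time, the oldest timestamp can also be estimated with simple date arithmetic. This is a rough sketch only; actual partition boundaries may differ slightly from the computed value:

```python
from datetime import datetime, timedelta

def estimate_oldest_udm_timestamp(now, retention_days=28):
    # Rough estimate of the oldest raw timestamp (YYYYMMDDhhmmss),
    # assuming the default 28-day UDM store retention.
    return (now - timedelta(days=retention_days)).strftime("%Y%m%d%H%M%S")

print(estimate_oldest_udm_timestamp(datetime(2019, 9, 9, 12, 0, 0)))
```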

Personal Data

You can use the following programs to manage personal data.

SYSANA_PERS_DATA_DELETE – Report program to delete a user if the user exists in the System Analysis Aggregation application.

SYSANA_PERS_DATA_USAGE – Report program to check whether a user ID is maintained in the System Analysis Aggregation application.

Correction Notes

Release          Relevant Notes
FRUN 2.0 SP00    2874089