The purpose of Job & Automation Monitoring, available as of SAP Focused Run 3.0 FP02, is to provide transparency about the status of automation processes regarding Execution Status, Application Status, Start Delay and Run Time. The aim is to offer these four core operations aspects for all kinds of (scheduled) automation processes (called "jobs" in the following text for simplification) that run on different platforms, with a unified user experience using a common look-and-feel and handling pattern.
The basic concept is to collect individual job execution data into the central monitoring application, to correlate it with its job definition, and to evaluate every execution using historical data. The rating of the latest execution(s) is propagated to the job level and finally to the service level, so that you can easily understand the current state of many jobs executed in the different systems. In case of issues with job executions, context-based navigation is offered from the job to the individual execution and finally directly to the corresponding job execution in the executing system (or service). This provides quick access for a more detailed analysis, for example to stop a non-ending execution or to restart a failed execution.
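The propagation from execution to job to service level can be sketched as follows. This is a simplified illustration, not the actual SAP Focused Run implementation; the assumption that the worst rating wins at each level, and the rating names themselves, are illustrative:

```python
# Ratings ordered from best to worst; "grey" stands for "no data".
SEVERITY = {"grey": 0, "green": 1, "yellow": 2, "red": 3}

def propagate(ratings):
    """Return the worst rating from a list of child ratings (assumed rule)."""
    if not ratings:
        return "grey"
    return max(ratings, key=lambda r: SEVERITY[r])

# The latest execution per job determines the job rating ...
job_ratings = {
    "JOB_A": propagate(["green"]),
    "JOB_B": propagate(["red"]),
}
# ... and the job ratings are propagated to the service level.
service_rating = propagate(list(job_ratings.values()))
```

With one red job, the service level shows red, which is what lets you spot a problem across many systems at a glance.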
Job & Automation Monitoring shall help you understand the current state of the automation processes across all involved cloud services and systems. The aggregated historical data shall help to detect deteriorations and avoid downtimes. In addition, for SAP ABAP systems it shall help you efficiently monitor so-called Standard Jobs, which are those jobs that are scheduled by the SAP S/4HANA Technical Job Repository (see SAP Note 2190119) or, in lower releases, by the SM36 Standard Job framework.
Jobs and workflows are executed in the managed cloud services and on-premise systems. The individual executions (start and end times, and status) are captured by the local execution infrastructures. This "normalized" run data is sent to SAP Focused Run, where the individual execution data is assembled and correlated with definition data.
Job & Automation Monitoring in SAP Focused Run collects data for these job types:
The Status Overview provides a summary regarding job execution for the systems in scope. It shows the number of systems and the number of automations ("jobs"). The jobs are rated based on the rating of the latest execution:
The Application Overview provides a card-based overview, with more details:
In the Alert Ticker you see the most recent Job & Automation Monitoring alerts for the selected scope.
Please note that alerts that are currently rated green are not shown in the Alert Ticker, whereas Job & Automation Monitoring considers all alerts that are not yet confirmed. Consequently, the alert number in the Status Overview section is typically higher than in the Alert Ticker.
The jobs are listed and rated regarding Execution Status, Application Status, Start Delay and Run Time. The rating reflects the rating of the respective aspect for the latest execution of the job.
To drill-down to a failed execution of a job in the remote system:
Alerts are raised for every distinct job definition name. You can get alerted if a job execution has a red Execution Status, a red or yellow Application Status, or a Start Delay or Run Time that exceeds the threshold that you have set.
The alert Additional Key contains the Name, Type and Context/Client of the job. If the alert has been defined for a group of jobs, the alert Additional Information contains the name of the job group. In the METRICS table you see the alerted executions of the job and have a direct link to the managed system like from the monitoring application.
The Exceptions view offers detailed information on exceptions.
In the exceptions overview, the exception categories offered for ABAP systems are "ABAP Application Log" and "ABAP Aborted Jobs". You can drill down into the details, i.e. to the single exception.
To get exception detail information from an ABAP managed system without specifying the SLG1 object, and with a link to the corresponding job execution, the prerequisite is ST-PI 7.40 SP20 (or higher). To get the collected exceptions transferred to the SAP Focused Run system, you need to activate the monitoring of the category for the monitoring client in the Integration Monitoring configuration (see chapter Configuration for more details).
With the above points fulfilled, the collected exceptions get a correlator value, so that navigation from a job execution in Job & Automation Monitoring to the related exceptions is possible.
If there are exception details linked to a job execution, this is indicated with a slightly changed icon as displayed below; clicking the icon drills down to the exception messages and their details. In case of ABAP jobs, the exceptions related to the Execution Status are those of category ABAP Aborted Jobs.
In case of ABAP jobs, the exceptions related to the Application Status are those of category ABAP Application Log.
Analytical information is offered for the jobs regarding the number of executions, failed executions, and run time. The table is sorted by Total Run Time by default. You can change the sorting and apply filters.
On clicking the graph icon at the end of every line, you are navigated to the Trend Graphs view, where you can see the values over time.
The data presented is based on hourly aggregates. For a large time frame, the hourly aggregates are automatically summed up to daily values for better readability. Via a custom time frame you can restrict to a certain period in the past, and via the resolution option in the top right corner you can switch between Hourly and Daily.
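The summing of hourly aggregates into daily values can be sketched like this (a minimal illustration of the aggregation idea, not the actual SAP Focused Run logic; the data points are made up):

```python
from collections import defaultdict
from datetime import datetime

# Hourly aggregates as (timestamp, number of executions) pairs.
hourly = [
    (datetime(2024, 5, 1, 9), 4),
    (datetime(2024, 5, 1, 10), 6),
    (datetime(2024, 5, 2, 9), 3),
]

def to_daily(hourly_points):
    """Sum hourly aggregates into one value per calendar day."""
    daily = defaultdict(int)
    for ts, value in hourly_points:
        daily[ts.date()] += value
    return dict(daily)
```

Here the two hourly points of 1 May collapse into a single daily value of 10, which is what makes long time frames readable.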
Flag and save the use case setting.
On activating Job Monitoring in SSI for an ABAP system, job data collection is not activated automatically but needs to be switched on manually for every system in the Job & Automation Monitoring application via Configuration. If you want to automate this, please check the details in Configuration Guide 4.0 SP00 with Use Cases.
SAP ABAP Job and SAP BW Process Chain executions can be collected via the Simple Diagnostic Agent by switching on data collection for the technical system. In the picture below, data collection is activated for the top four systems; for the first two systems there are issues.
If you switch the default to "On" for job type SAP ABAP Job, job execution data is collected from all clients; in the detail view you can restrict the ABAP job collection to specific clients and collect data for further job types. Currently, SAP ABAP Job and SAP BW Process Chain are offered for Application Server ABAP.
SAP Application Jobs from S/4HANA CE and SAP Intelligent Robotic Process Automation Jobs can be pushed to SAP Focused Run from the respective cloud services via the so-called SAP Cloud ALM Push Proxy.
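Conceptually, the cloud services send normalized execution records to SAP Focused Run. The sketch below only illustrates the shape of such a push; the field names, the endpoint, and the authentication are assumptions, since the actual SAP Cloud ALM Push Proxy interface is defined by SAP:

```python
import json

def build_execution_record(job_name, job_type, status, start, end):
    """Build a hypothetical normalized execution record.

    All field names here are illustrative assumptions, not the
    documented Push Proxy payload format.
    """
    return {
        "name": job_name,
        "type": job_type,          # e.g. "SAP Application Job"
        "status": status,          # e.g. "FINISHED" or "FAILED"
        "startTime": start,
        "endTime": end,
    }

record = build_execution_record(
    "DATA_LOAD", "SAP Application Job", "FINISHED",
    "2024-05-01T09:00:00Z", "2024-05-01T09:05:00Z",
)
payload = json.dumps(record)
# The cloud service would then POST this payload to the Push Proxy
# endpoint with suitable authentication, e.g. (hypothetically):
#   requests.post(PUSH_PROXY_URL, data=payload, headers=auth_headers)
```

The point is that the monitored service pushes its run data; SAP Focused Run does not poll these cloud services via an agent.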
For a detailed description of use cases, please see the corresponding chapter of the Configuration Guide 4.0 SP00 with Use Cases.
To configure alerting in case of job failures, expand the configuration pane and then access the Technical System or the Job Group for which you want to configure the alerts.
Job & Automation Monitoring alerts are raised for every distinct job name and job type, as those are written into the "Additional Alert Keys" field. Consequently, if you do not maintain filter values in the alert configuration, all jobs of the system or job group are considered, and an individual alert is raised for every failing job.
If you adjust the alert configuration (for example by restricting the jobs using the filter options), we recommend adjusting the Alert Name to describe what is being monitored. Please avoid overlapping filter conditions for the same alert types, to avoid redundant alerts.
Job & Automation Monitoring Alerting is integrated with SAP Focused Run downtime management, i.e., an alert for a job is not raised if the executing system is currently in a planned downtime.
The following Alert Types are offered:
An alert is applied to all Jobs / Automations of the system for which the alert is activated unless you make use of the filter option to restrict the Jobs / Automations for example to those that an alert notification recipient is responsible for. For the filter values you can use the conditions "Is", "Is not" or "Contains".
Note: For the alert type Missing Execution of Mandatory Job, and for alerts that you create for a Job Group, the filter option is different, as in these cases a defined set of jobs is considered. Therefore, instead of the above-mentioned filter options, you can filter from that set of jobs. If you do not maintain filter values, the complete set of jobs is considered for the alert.
You can maintain the typical SAP Focused Run alert and notification settings:
The alert resolution text is copied over to the alert instance. A default text is offered that you should adjust to describe the actions you want the alert processor to take before turning to an expert.
The prerequisite to get alerted if a scheduled job no longer runs is to flag such a job as mandatory and to make sure that it has a recurrence value.
If you expect a job to run unchanged in a system (with the defined executables and the defined periodicity) after you have scheduled it, please flag it as mandatory in Configuration in the Job Definitions view, which lists all recurring jobs.
Once you have flagged a job as mandatory, it stays in the Job Definitions table even if it has been changed or deleted in the managed system. With alert type Missing Execution of Mandatory Job you can get alerted if the execution of the job is overdue.
For example, you need to make sure that 2 jobs are running unchanged after system setup. As part of system configuration activities, you have
Both jobs are scheduled, and you want to get alerted if they are not running as expected. For this, you flag job A and job B as mandatory in the Job Definitions table and create an alert of type "Missing Execution of Mandatory Job" with a filter for job A and job B.
An alert will be created if the execution of the jobs is overdue. This will be the case, if
Job & Automation Monitoring collects all ABAP job executions from the managed systems by default. Likewise, for BW Process Chains all chain executions are collected. This gives you a holistic overview of the current situation and of trends. If you are responsible for a subset of jobs, you can create Job Groups, launch the application for one or more Job Groups, and also configure alerting for the jobs of a group.
You can create job groups in Configuration, add jobs to the group and define alerts for the jobs.
Once job data is loaded from the managed system, you see that Execution Status, Application Status, Start Delay and Run Time are rated. The rating is set according to Rating Rules, for which you can adjust the thresholds for Start Delay and Run Time. Such a change applies only to the user session; it is not (yet) possible to save the setting.
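Threshold-based rating of Start Delay and Run Time can be sketched like this. The threshold values and the comparison rule are illustrative assumptions; in SAP Focused Run the actual thresholds are maintained in the Rating Rules:

```python
def rate(value_minutes, yellow, red):
    """Rate a Start Delay or Run Time value against yellow/red thresholds.

    Assumed rule: at or above the red threshold -> red, at or above the
    yellow threshold -> yellow, otherwise green.
    """
    if value_minutes >= red:
        return "red"
    if value_minutes >= yellow:
        return "yellow"
    return "green"

# Example: a start delay of 12 minutes against 10/30 minute thresholds.
delay_rating = rate(12, yellow=10, red=30)
```

Adjusting the thresholds in the session simply shifts where the green/yellow/red boundaries fall for the displayed executions.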
In case of the ABAP job scheduler, there is no job definition with a unique ID. When scheduling a job, the user defines what shall be executed and gives a job name; on save, this entity gets a run ID. The job name does not need to be unique, as together with the run ID it is unique for the ABAP system. At execution of the job, the next occurrence is determined, and the job data is copied to the next due date and gets a new run ID value.
To be able to group the executions of a job for Job & Automation Monitoring a stable ID is formed by the data collector as a hash value:
jobname + client + number of steps + job metadata[(step no + program + variant)]
If a job constantly changes its name (e.g. the job name contains a time stamp), the formed JobId hash value is not stable, which does not allow grouping executions under one JobId. To enable the grouping for such a case, the JobId hash value is formed without the job name, and on the monitoring UI such a situation is indicated as follows:
In such a case, please give the job a name that represents all related executions (by pressing the pencil icon).
Note: Several ABAP jobs that have changing names and that execute the same steps (i.e. client + number of steps + job metadata [(step no + program + variant)] are identical) cannot be distinguished by the monitoring application.
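The stable-ID scheme described above can be sketched as follows. This is an illustration of the hashing idea only; the actual hash algorithm, field separators, and encoding used by the data collector are not documented here and are assumptions:

```python
import hashlib

def job_id(job_name, client, steps, include_name=True):
    """Form a stable JobId as a hash over the job definition data.

    steps: list of (step_no, program, variant) tuples.
    When include_name is False, the job name is left out of the hash,
    as done for jobs with constantly changing names.
    """
    parts = [job_name] if include_name else []
    parts += [client, str(len(steps))]
    for step_no, program, variant in steps:
        parts += [str(step_no), program, variant]
    return hashlib.sha1("|".join(parts).encode()).hexdigest()

steps = [(1, "RSPARAM", "DEFAULT")]
# A job whose name carries a timestamp gets a different JobId every day ...
id_day1 = job_id("LOAD_20240501", "100", steps)
id_day2 = job_id("LOAD_20240502", "100", steps)
# ... unless the name is excluded from the hash:
stable_1 = job_id("LOAD_20240501", "100", steps, include_name=False)
stable_2 = job_id("LOAD_20240502", "100", steps, include_name=False)
```

This also makes the limitation in the note above visible: two different jobs with changing names but identical client and step metadata would hash to the same JobId.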
For SAP ABAP jobs, the rating of the Application Status is grey in many cases. This is normal if the job does not write any application log messages. To ensure that application log messages written during batch execution of a program are linked to the job execution, you need to do the following configuration:
Please note that with "Option" [=] the value * will be interpreted as a literal *. For patterns, make sure to use "Option" [x].
Any job log abort or ABAP Application Log warning or error raised by the job execution is collected on the managed system (from all clients) by the job monitoring data provider using function /SDF/E2EEM_FEED_LOGS_OF_JOB (prerequisite: ST-PI 7.40 SP20 or higher). To get the message details transferred to the SAP Focused Run system, you need to activate the corresponding agent. This can be done in the Integration Monitoring configuration by configuring category ABAP Aborted Job and category ABAP Application Log for collection in the client in which the job monitoring data collection runs (which is the default client).
Note that for category ABAP Application Log the system expects you to fill a value for the Object filter; still, all application log exceptions raised at job execution will be collected.
To determine the default client for an ABAP system, check in Simple System Integration (SSI). In the example below, the default client for system FT4ADM is 200.
For a detailed description of use cases, please see the corresponding chapter of the Configuration Guide 4.0 SP00 with Use Cases.
In SAP Focused Run Job & Automation Monitoring data reorganization is as follows:
Note: Please make sure to confirm alerts, as alerted job executions are not removed from the database.
You have switched on data collection for a technical system of type Application Server ABAP, but the status of data collection stays "in progress" for more than 15 minutes or switches to "failed".
Background: By switching on data collection for a managed system, you have implicitly created a filter for job type SAP ABAP Job and will collect job data from all clients. You might have explicitly created filters, for example to collect ABAP jobs only from a certain client or to collect BW process chains. The following artifacts need to work for the collection to happen:
If there are exceptions like "Empty CSRF token" in the threads, the reason could be that the properties file is outdated (Job Monitoring had to update the custom properties). To make the agent use the new properties, you need to manually reconfigure and restart the agent via the Agent Administration, selecting the technical system or host.
Issue: In the Additional Alert Key we see job name FRN_ARP_0000001242. However, this job does not exist. The actual job names that we see in SM37 are:
Answer: As explained in the Configuration section under Option to Set Name of Job with Changing Name: In case of the ABAP job scheduler, there is no job definition with a unique ID. To be able to group the executions of a job for Job & Automation Monitoring, a stable ID is formed by the data collector as a hash value that contains the job name. If a job constantly changes its name (e.g. the job name contains a time stamp), the formed JobId hash value is not stable, which does not allow grouping executions under one JobId. To enable the grouping for such a case, the JobId hash value is formed without the job name, and on the monitoring UI you have the option to give such a job a name that represents the group of jobs. It is this name that will be used at alert creation.
Still, forward navigation from the alert details via URL to the job in the managed ABAP system is supported.
In SM37 all jobs are displayed, whereas in the monitoring application jobs that execute report RSPROCESS are not displayed. The reason is that those jobs are elements of BW Process Chains. They are displayed as executables of the related BW Process Chain (job type SAP BW Process Chain).
With note 3102288 (version 31), parent PPF jobs are collected, but not the child jobs that execute reports RBANK_PROC_START or RBANK_PROC_END. Those child jobs are consequently invisible in the monitoring application. The reason for not collecting their execution data is that the parent job runs until all child jobs have finished, and if any of the child jobs fails, the parent job has a message in its job log. If the collector (as of ST-PI 7.4 SP20) finds such messages, it rates the parent job yellow.
There are ABAP jobs that write messages of type E (Error) or W (Warning) into the job log without actually aborting. To indicate such a situation, the job monitoring application rates such job executions with a yellow Execution Status rating.
If you can see jobs being collected, but not those that run more frequently than every hour, there is probably a one-hour difference between the system time and the time in the system time zone.
Please check this and make sure to set the system time zone (via transaction /nstzac) so that it matches the system time.
Detailed information is available in Configuration Guide 3.0 FP02 with Use Cases.
Detailed information is available in Configuration Guide 3.0 FP03 with Use Cases.
Detailed information is available in Configuration Guide 4.0 SP00 with Use Cases.