There are no prerequisite steps to be done in SAP Cloud ALM itself. You need the role / authorization for Job Monitoring in SAP Cloud ALM, and you need to configure the push of job execution data in your SAP cloud service(s) as described under "Setup" below. This initial one-time activity needs to be done in every monitored service.
In the example below the linking is restricted to jobs executed in client 200. Please note that with "Option" [=] the value * is interpreted as the literal character *. For patterns make sure to use "Option" [x].
In the Monitoring UI jobs are evaluated with a default rating rule that you can adjust if you have the role Job Monitoring Consumer. Please note that this setting is not persisted, i.e. if you relaunch the application the default is applied again. Below you see the default settings.
In case of the ABAP job scheduler there is no job definition with a unique ID. When scheduling a job, the user defines what shall be executed and gives a job name; on save this entity gets a run ID. The job name does not need to be unique; only the combination of job name and run ID is unique within the ABAP system. When the job is executed, the next occurrence is determined: the job data is copied to the next due date and gets a new run ID value.
To be able to group the executions of a job for Job & Automation Monitoring a stable ID is formed by the data collector as a hash value:
ID | Hash of
JobId | jobname + client + number of steps + job metadata[(step no + program + variant)]
If a job constantly changes its name (e.g. the job name contains a time stamp), the formed JobId hash value is not stable, which does not allow grouping executions under one JobId. To enable grouping in such a case, the JobId hash value is formed without the job name, and on the monitoring UI such a situation is indicated as follows:
You have the option to change the name given by the data collector (by pressing the pencil icon), so that the name of the job represents all related job executions.
Note 1: The actual (i.e. current) name of the job is displayed in the executions view and is also included in the notification mail. Using the filter option you can find the job in the list of jobs by its actual name.
Note 2: ABAP jobs with differently changing names that execute the same work (i.e. client + number of steps + job metadata[(step no + program + variant)] are identical) are grouped together.
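The ID formation described above can be sketched as follows. This is a minimal illustration only: the hash algorithm (SHA-256 here) and the field separator are assumptions, not the data collector's documented implementation.

```python
import hashlib

def job_id(job_name, client, steps, name_is_stable=True):
    """Sketch of the stable JobId: a hash over the job definition.

    steps: list of (step number, program, variant) tuples (the job metadata).
    If the job name changes on every run (e.g. it contains a time stamp),
    the name is left out so that all executions still hash to the same JobId.
    Hash algorithm and separator are illustrative assumptions.
    """
    parts = [job_name] if name_is_stable else []
    parts += [client, str(len(steps))]
    parts += ["%s:%s:%s" % (no, program, variant) for no, program, variant in steps]
    return hashlib.sha256("|".join(parts).encode("utf-8")).hexdigest()

# Two runs of a job whose name embeds a time stamp still group together
# once the name is excluded from the hash:
a = job_id("LOAD_20240101_0800", "200", [(1, "ZLOAD", "VAR1")], name_is_stable=False)
b = job_id("LOAD_20240102_0800", "200", [(1, "ZLOAD", "VAR1")], name_is_stable=False)
assert a == b
```

With stable names included, two jobs that differ only in name get different JobIds, which is exactly why the name must be dropped for jobs with changing names.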
To configure events for alerting for specific jobs, you need to have role Job Monitoring Administrator assigned. Expand the configuration pane and then access the service, for which you want to configure alerting.
Per service you can define events of category:
If you want events to be raised only for certain jobs, you can make use of the filter options and specify the type of the job ("Job / Automation Type"), the name of the job ("Job / Automation Name"), what is being executed ("Job / Automation Executable Name") or who is executing the executable ("Job / Automation Execution User"). The filter conditions are connected with AND across different parameters and with OR within the same parameter.
The filter options offered are "Is", "Is not" and "Contains". You can enter multiple values in one field. The entered value is interpreted as a string, if for example you want to get alerted:
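The AND/OR combination described above can be sketched as follows. The function name and data shapes are illustrative assumptions; only the semantics (OR within a parameter, AND across parameters, string matching for "Is" / "Is not" / "Contains") come from the text.

```python
def matches(filters, job):
    """Return True if a job passes the event filters.

    filters maps a parameter (e.g. "Job / Automation Name") to a list of
    (option, value) conditions, where option is "Is", "Is not" or "Contains".
    Conditions for the same parameter are combined with OR; different
    parameters are combined with AND. Values are compared as strings.
    """
    def holds(option, value, actual):
        if option == "Is":
            return actual == value
        if option == "Is not":
            return actual != value
        if option == "Contains":
            return value in actual
        raise ValueError("unknown option: %s" % option)

    return all(
        any(holds(opt, val, job.get(param, "")) for opt, val in conditions)
        for param, conditions in filters.items()
    )

# Example: alert only for ABAP jobs whose name contains "PAYROLL".
f = {
    "Job / Automation Type": [("Is", "SAP ABAP Job")],
    "Job / Automation Name": [("Contains", "PAYROLL")],
}
assert matches(f, {"Job / Automation Type": "SAP ABAP Job",
                   "Job / Automation Name": "PAYROLL_RUN_01"})
```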
To get alerted, make sure to switch on Create Alert and save. If you want to stop getting alerted, you can switch off the Event Action, or deactivate or delete the event in the list of events. Note that events are not raised for services that are in Maintenance or Disruption state (maintained in Business Service Management).
In order to send notifications for alerts to an email address,
To get notified on every job execution failure, set "Raise Event Action" to "At Every Occurrence".
In SAP Cloud ALM Job & Automation Monitoring data reorganization is as follows:
If the "Type" of a job or automation is missing, the reason is that run information exists without definition data.
In case of ABAP jobs, if the "Type" is missing for only a small part of the jobs (e.g. 1%), this could be a temporary issue that disappears after one day, when the data is resent. If the issue persists, the reason could be a bug in the data collector.
In SM37 all jobs are displayed, whereas in the monitoring application jobs that execute report RSPROCESS are not displayed. The reason is that those jobs are elements of BW Process Chains; they are displayed as executables of the related BW Process Chain (job type SAP BW Process Chain).
With note 3102288 (version 31) parent PPF jobs are collected, but not the child jobs that execute reports RBANK_PROC_START or RBANK_PROC_END. Those child jobs are consequently invisible in the monitoring application. The reason for not collecting their execution data is that the parent job runs until all child jobs have finished, and if any of the child jobs fails, the parent job has a message in its job log. If the collector (as of ST-PI 7.4 SP20) finds such messages, it rates the parent job yellow.
Configuration to pause and restart data collection is offered by Job & Automation Monitoring, but depends on implementation in the managed system or service.
Question: We have activated job monitoring using transaction /n/sdf/alm_setup and see ABAP jobs, but the Application Status appears with GREY rating for all jobs.
Answer: If all ABAP jobs appear with GREY Application Status rating, "Auto-linking Applog-Handles for jobs" is probably either not activated in the managed system, or activated but not properly configured (see the information on CRIT maintenance above).
Please note that most ABAP jobs do not write application log messages and will be rated with GREY Application Status rating. If a job writes into the Application Log ("SLG1"), then in case of
Question 1: Exception details are not included in the notification mail for failing ABAP jobs (and the Exceptions view does not contain Job Log and Application Log information).
Question 2: Exception details are sometimes not included in the notification mail.
Answer 1: Please make sure that the prerequisite for exceptions to be collected for ABAP jobs is fulfilled:
Answer 2: Please make sure in /n/sdf/alm_setup that Exception Monitoring is collected with the same frequency as Job Monitoring.
Assuming that
A downstream action (like an email) is triggered when the event rating changes to yellow or red, if "Raise Event Action" is set to "At new occurrence".
"At new occurrence" is the default to avoid notification flooding for frequently running jobs that start to fail. You can change the setting to "At every occurrence" to get notified whenever an execution fails.
Issue: In the Additional Alert Key we see job name FRN_ARP_0000001242. However, this job does not exist. The actual job names, that we see in SM37 are:
Answer: As explained in the Configuration section under Option to Set Name of Job with Changing Name: in case of the ABAP job scheduler there is no job definition with a unique ID. To be able to group the executions of a job for Job & Automation Monitoring, a stable ID is formed by the data collector as a hash value that contains the job name. If a job constantly changes its name (e.g. the job name contains a time stamp), the formed JobId hash value is not stable, which does not allow grouping executions under one JobId. To enable grouping in such a case, the JobId hash value is formed without the job name, and on the monitoring UI you have the option to give such a job a name that represents the group of jobs. It is this name that will be used at alert creation.
Forward navigation from the alert details via URL to the job in the managed ABAP system is still supported.
Issue: We monitor ABAP jobs, and in alerting or monitoring we click on the "Run Id" value to navigate to the job in the executing system. The URL that opens is not the correct one, as it points directly to the Application Server. We are using SAP Web Dispatcher, so in order to function correctly the navigation needs to use the URL of the SAP Web Dispatcher.
Solution: Edit in Landscape Management the system and maintain in field "Logon URL" the URL of the SAP Web Dispatcher.
Symptom: In the Job & Automation Monitoring application, ABAP jobs /UI5/UPD_ODATA_METADATA_CACHE and /UI5/APP_INDEX_CALCULATE are displayed with red or yellow application status. When navigating to Exceptions view I can see no exceptions.
Reason: The UI5 jobs quickly delete the application log errors that they write, as the developers assume that a longer persistence of the logs would lead to problems. As a consequence the exception details cannot be collected by Exception Monitoring (EXM).
Solution: To see the details of the exceptions, access the application log directly from SM37 by selecting the latest job executions and choosing "Goto --> Application Log".
Question: After upgrading our managed system to ST-PI SP23 / SP25, we do not get data any more. In our SAP Cloud ALM tenant we have switched off the data collection and then on again, but without success. We do not get job monitoring data.
Answer: With SP23 and SP25, a new version for configuration was implemented to prepare for the option to collect specific job types from an ABAP system. Unfortunately, this requires a switch off and switch on of the job monitoring use case in the managed system (using transaction /n/sdf/alm_setup) after implementing SP23 / SP25.
Question: After August 2023, we do not get job data any more into SAP Cloud ALM tenant via SAP_COM_0527. We have switched off the data collection and then on again, but without success. Collection status goes to red.
Answer: We had an issue with the data collector, which is fixed. Unfortunately, this requires switching the job monitoring use case off and on again in the managed cloud service (in the communication arrangement SAP_COM_0527, via which you push data to your SAP Cloud ALM tenant).
Question: We see memory dump TSV_TNEW_PAGE_ALLOC_FAILED on managed system because /SDF/CALM_TASK_EXECUTOR is trying to read info for a huge number of work items, e.g. LCL_WI_LOG=>LIF_WI_LOG~READ_FOR_LOGHIST with IT_LOGHIST Table IT_3269[2342109x1226]
Answer: With latest version of ST-PI Job & Automation Monitoring collects by default workflow execution data. In this context a query is made that produces the issue, if you have no archiving for workflows. Please follow SAP Note 3144853 "SWWLOGHIST table size is increasing" and reduce the size of table SWWLOGHIST.
As a quick solution you could switch off data collection on the managed system for Job Monitoring (if you are not actively using job monitoring).
Please note that we are working on offering, on the Job & Automation Monitoring UI, the option to select specific job types for monitoring (i.e. you could then switch off job type "SAP Business Workflow" until you have archiving in place, but still monitor "SAP ABAP Job").
Question: I have connected my SAP Intelligent Robotic Process Automation tenant / my SAP Build Automation tenant and see automation executions with start and end time and status displayed in Job & Automation Monitoring, which is nice. However, I wonder about the following:
Answer: What you observe are current shortcomings of the implemented monitoring. In detail:
Question: Which authorization is needed to be able to access the managed system / service via the navigation URL?
Answer: It depends on the job type. Typically access to the local monitoring UI is sufficient, but there are some exceptions:
Question: I have connected an SAP S/4HANA on-premise system to SAP Cloud ALM and see Application Jobs. I try to do the forward navigation from a failed job instance by clicking on the Run ID number. The URL https://<server>:<port>/ui#ApplicationJob-show?JobCatalogEntryName=&/v4_JobRunDetails/2FBBFAA0D4B4117396897A783D1774E7%20nrBRWzP produces http error "404 Not found". How can I enable the navigation?
Answer: In on-premise systems the Application Jobs tile is not part of any business catalog, so some manual configuration is needed as described here.
Question: For system maintenance, we had suspended all jobs using report BTCTRNS1 and after completion of the maintenance activities, we executed report BTCTRNS2 to release all jobs again and got many delay alerts. How can we avoid this?
Answer: All applications in SAP Cloud ALM for Operations listen to Business Service Management (BSM) and do not raise events for a service/system if it is in Maintenance or Disruption state, i.e. if you have maintained a "planned downtime" of such type for the system in BSM.
Please note that after execution of report BTCTRNS2 an ABAP system typically needs some time until the formerly suspended jobs are processed and batch processing is back to normal. So please add some additional time to the "planned downtime" to avoid irrelevant delay alerts.
Question: After upgrading our ST-PI, we miss in the monitoring view most of the jobs with name "/AIF/*". How come?
Answer: Unfortunately, there were some customer cases where "/AIF/" job executions, i.e. jobs that execute report "/AIF/PERS_RUN_PACK_EXECUTE", ran in such huge numbers (increasing the total number of job executions by a factor of 100(!)) that the data volume could not be handled by the data collector. As a consequence, job monitoring did not work at all.
As AIF development is not able to optimize the number of jobs created (one job is created per message), we were forced to exclude jobs that execute report "/AIF/PERS_RUN_PACK_EXECUTE" from being collected. Please use "Integration and Exception Monitoring" for AIF monitoring, e.g. ABAP Aborted Jobs filtering on job names starting with "/AIF/".
In case you encounter problems using or setting up Job & Automation Monitoring in SAP Cloud ALM, please report a case using the following component:
Please select your SAP Cloud ALM tenant when reporting the case.