Archive for the ‘Configuration’ Category:
There’s not much chance of it getting fixed now, as a new SDN, based on Jive 5, will be going live
before the end of the year. However, the community comes to the rescue, with Sascha Wenninger posting a bookmarklet that is meant to take https://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/26224 (with a title of SAP Community Network Blogs) and replace it with https://weblogs.sdn.sap.com/cs/blank/view/wlg/26224, with the correct title. Unfortunately, his version doesn’t always work. For example, it assumes that the URL starts with https, which requires you to log on to SDN before you can run it. So I modified it, and present for your edification the Unwrap SDN Blog bookmarklet.
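For what it’s worth, the URL rewrite the bookmarklet performs can be sketched as a sed one-liner — a rough equivalent of the idea, not Sascha’s actual code, and the pattern only covers URLs shaped like the example above:

```shell
# Rewrite a wrapped SDN blog URL into the direct weblogs URL.
# "https?" accepts both http and https (the original bookmarklet assumed https).
echo 'https://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/26224' |
sed -E 's#^https?://www\.sdn\.sap\.com/irj/scn/weblogs\?blog=/pub/wlg/([0-9]+)$#https://weblogs.sdn.sap.com/cs/blank/view/wlg/\1#'
```

The numeric blog id is captured and re-used; anything that doesn’t match the pattern passes through unchanged, which is the behaviour you want in a bookmarklet anyway.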
It started with a request to bring a 46C landscape up to date. The starting levels for the Basis, ABA and R3 Support Packages were all in the low 20s, while the target level for each of them was level 53.
This meant I needed to install about 90 support packs per instance. Comparing the sizes of the Support Packages against the space available in /usr/sap/trans suggested that I might be able to fit everything in without annoying the Storage Management team, if I was able to clean up all the old transports.
Which was where I hit the snag:
zuxdc22:dp1adm 19> tp check all pf=TP_DOMAIN_DP1.PFL
This is tp version 305.13.24 (release 46D) for ANY database
check>Log file is written to /usr/sap/trans/tmp/CHECK.LOG
check>Collected 22 filenames from [/usr/sap/trans/buffer/.]
check>Collected 5 Systemnames from [/usr/sap/trans/buffer/.]
check>Collected 00160 out of 00160 entries from buffer ZP1.
check>Collected 01233 out of 01233 entries from buffer TP1.
check>Collected 03037 out of 03189 entries from buffer PP1.
check>Collected 00094 out of 03254 entries from buffer QP1.
check>Collected 00023 out of 02671 entries from buffer DP1.
check>Collected 04547 entries from buffers
check>Collected 5082 filenames from [/usr/sap/trans/cofiles/.]
check>Found 3 invalid filenames on Cofile-directory
check>No Cofile found for TA STOPMARK
ERROR: A target system group (/U9C_ALR/) is used with a name longer than 3.
This is only possible with NBUFFORM=TRUE!
ERROR: EXIT(16) -> process ID is: 87782
tp returncode summary:
TOOLS: Highest return code of single steps was: 16
ERRORS: Highest tp internal error was: 0204
tp finished with return code: 204
parameter is missing
However, when I checked the domain profile TP_DOMAIN_DP1.PFL, the values for NBUFFORM (and a related parameter, CTC) were set correctly:
TRANSDIR = /usr/sap/trans
DP1/CTC = 1
DP1/DBHOST = zuxdc22
DP1/DBNAME = DP1
DP1/DBTYPE = db6
DP1/NBUFFORM = 1
But that’s OK – this problem (NBUFFORM and CTC are set correctly, but don’t take effect) will probably be fixed when I upgrade the kernel, which I’m going to have to do as part of the Support Pack upgrades anyway. But I couldn’t reliably apply the Support Packs until I cleaned out the transport directories, which required the kernel upgrade… and of course, what happens if the kernel upgrade doesn’t fix the problem ? I needed another solution.
Sometimes you need more than SAP knowledge to get things going. At this point, I knew there was at least one ‘invalid’ Target System Group in the transport directories, with at least one transport using it. So I decided to find out what that transport (and any others with the same Target System Group !!) was ….
zuxdc22:dp1adm 21> cd ../cofiles
zuxdc22:dp1adm 22> pwd
zuxdc22:dp1adm 23> grep U9C_ALR *.*
K111738.DP1:HERMANNMA K /U9C_ALR/ 3 0 0 0 0 0 0 0 0 1 46C . 0 0 0 0 0 000
Remembering that the contents of the /usr/sap/trans/cofiles directory are text files (the /usr/sap/trans/data files are binary), I was able to edit the cofile for the transport in error (I used vi because this was on an AIX system).
zuxdc22:dp1adm 24> vi K111738.DP1
zuxdc22:dp1adm 22> pwd
zuxdc22:dp1adm 23> head K111738.P9C
HERMANNMA K U9C 3 0 0 0 0 0 0 0 0 1 46C . 0 0 0 0 0 000
I corrected the transport in error, and reran tp check all to see if there was anything else in error, before running tp testold or tp clearold.
This is a fairly esoteric example of where pure SAP skills won’t help with an SAP related problem. It was actually worse than I’ve described above, as my second run of tp check all highlighted a Target System Group that had 45 transports belonging to it. I fixed these, thinking if there were any more errors, I would have to find a different way to approach the problem, but they were the last errors.
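Had I needed to fix many more cofiles, I could have scripted the edit instead of reaching for vi 45 times. The sketch below demonstrates the idea in a scratch directory — the group name, filename and fixed-width padding are taken from my example, so check your own cofile layout (and keep backups) before letting sed anywhere near /usr/sap/trans/cofiles:

```shell
# Demo of batch-fixing cofiles in a scratch directory. In real life this
# runs against /usr/sap/trans/cofiles -- back everything up first.
work=$(mktemp -d)
printf 'HERMANNMA K /U9C_ALR/ 3 0 0\n' > "$work/K111738.DP1"
cd "$work"

# Find every cofile containing the invalid target group and rewrite it.
for f in $(grep -l '/U9C_ALR/' K*.*); do
  cp "$f" "$f.bak"                                # keep a backup copy first
  sed 's#/U9C_ALR/#U9C      #' "$f.bak" > "$f"    # pad to 9 chars to keep the columns aligned
done
cat K111738.DP1
```

The padding matters: cofiles are fixed-width, so the replacement ‘U9C’ plus six spaces occupies the same nine characters as ‘/U9C_ALR/’. Whether that is exactly right for your release is something to verify against a known-good cofile before running it for real.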
Depending on the number of errors, I would also look at installing the latest copies of the tp programs and modules in a separate directory. Without having gone through it, I can’t think of any logical problems, but it would have been an interesting exercise… It may have been more time consuming, though, which also needs to be taken into consideration. For what it’s worth, the way to check the release level of the tp program is described in OSS Note 155350.
When have you had to go above and beyond SAP, to get the job done ? What non SAP skills do you get to use on a regular basis in your SAP work ?
A standard BASIS problem is the generic “what is it doing and why ?” question. This could be in the context of debugging a program or process, or trying to work out what configuration changes are required to make something work. It generally occurs when the development or functional team have moved on, leaving someone who knows what to do but not why – usually a user (under pressure from their boss) who just wants to get the system doing what they’ve been told it should be doing….
However, your BASIS team (or person) has to be a jack of all trades, with not just a smattering of SAP functional knowledge, but also a working knowledge of Networking, Desktop PCs, the Operating System(s) and Database(s) their SAP systems are running on, and so on.
I’ve found that the best way of dealing with this need to know something about everything is not by trying to know everything, but by knowing how to find out everything. An example of this comes from Jerome Mungapen’s SAPLOG, where he provides a useful reminder of some of the various ways of finding what tables and fields lie behind an SAP transaction:
Have you ever been frustrated trying to find which table and field a piece of data is stored in ? You can see it on the screen, and the old faithful F1 – F9 results in some useless structure information. Or have you ever started looking at a piece of functionality you are unfamiliar with, wanting to find the table structures behind it in SAP ? Well, this article shows my favorite five ways of digging under the hood to find out what’s going on.
Jerome lists five methods, but one of them assumes you have the time (and need) to get really in-depth knowledge of a given area of SAP. I’ve listed the four methods I use (plus Jerome’s extra one) in the order I use them when closely examining or debugging a transaction I’m unfamiliar with.
Use a Different Field
If the technical information pop up shows a structure and not a real field, just try another field on the same area of the screen. It is surprising how often this works !!
Use Where Used on the Data Element
From the technical information pop up, select the data element then press Navigate to get to the Data Dictionary. Once there, press the Where Used button.
SE30 Runtime Analysis and ST05 SQL Trace
Transactions SE30 Runtime Analysis and ST05 SQL Trace can be overkill for determining what fields and tables are being used, but can be used to see how (for example) configuration data controls how and / or when the fields and tables are updated. It’s also useful when dealing with Z or Y code, structures and tables.
SE80 Object Navigator
This is probably more useful for a functional person, and is not available on the older SAP releases anyway. However, if you know the program behind the transaction, you can use SE80 to find all the Data Dictionary objects (including tables and fields) associated with that program.
Environmental Analysis
For those requiring a wider understanding of how a given area of the SAP system works, Jerome’s explanation of Environmental Analysis says it all.
You can provide ABAP users with a modified version of the standard SAP main menu without affecting the original SAP area menu S000.
For example, say you have created a transaction code Z123 (My Own Report) and you want to insert it under Administration. The specific user will be able to access My Own Report by clicking Administration -> My Own Report.
- Use Transaction SE43 – Area Menu
- Click the copy button. Copy from S000 to ZMGE
- After copying, click Change (area menu ZMGE)
- Double click on Administration and add your transaction code to the Area Menu.
- Remember to Activate the new menu !!
- Go to Transaction SU01 – Maintain users
- Type in the user name and click the Defaults button
- Type in the new area menu (ZMGE) in the Start Menu field and Save
- The user will be able to see the additional transaction on their next logon.
Reporting Tree Integration
Prior to release 4.6A, only transactions could be put into Area Menus. From 4.6A onwards, you can also put any of the types of reports found in reporting trees into Area Menus. The system automatically assigns a transaction code to call the report from the menu. Please note that if you have already put the report in another Area Menu, no new transaction code is generated; you must use the unique transaction code already assigned.
The old Reporting trees could only be displayed, not maintained. To modify the contents of reporting trees, you had to convert them with a migration transaction (RTTREE_MIGRATION). You could then modify the contents with the Area Menu maintenance transaction.
Advantages of the new Area Menus
The new data structure has the following advantages:
* Delinking by reference technique
You can construct a menu from submenus which are maintained separately in different systems.
* Fewer restrictions
The new area menus have no nesting level limit, unlike CUA menus. The allowed length of menu texts has increased to 75 characters.
OSS Notes (these will require a valid OSS ID):
Note 632357 – Backing up Livecache data for SCM 4.0 or higher
Note 541644 – Backing up the data from the Livecache for APO 3.X
One of the issues when copying SAP systems that have external data, whether it’s for regression testing or any other purpose, is making sure that the external data is consistent with the SAP data.
APO / SCM systems are one such example, where most data is stored in the SAP database (supported by an Oracle, DB2, SQL Server etc database), and some is stored in a separate SAP Livecache database.
The SAP Livecache technology is an enhancement of the MaxDB database system that was developed to manage complex objects (e.g. in logistical solutions such as SAP SCM/APO). In these systems, large volumes of data must be permanently available and modifiable. One of the features is that in an optimally configured SAP Livecache database instance, all data which needs to be accessible is located in the main memory.
As of SAP SCM 4.0, the /SAPAPO/OM_LC_DOWNLOAD_UPLOAD program can be used to extract all transaction data (orders and stocks) from the APO applications (SNP, DP, PP/DS, CTM, ATP, TP/VS, and so on) in the Livecache and store it in the SAP database.
This ensures that, so long as no updates occur in either source database until the database copy is complete, the SAP and Livecache databases can be consistently copied to another system. Once the SAP database is reloaded in the target system, the /SAPAPO/OM_LC_DOWNLOAD_UPLOAD program is used to reload the Livecache data into the target Livecache database.
When you run the /SAPAPO/OM_LC_DOWNLOAD_UPLOAD program (via transaction SE38), you will see that the program is divided into four sections:
Section A: Preliminary tasks (prior to the download)
Section B: Download (storing the transaction data in the APO database)
Section C: Upload (copying the master data and transaction data from the APO database to the liveCache)
Section D: Postprocessing tasks (perform these sometime after the upload)
Each radio button takes you to the appropriate transaction to execute the required task. Perform them in order, from A.1 to B.7.
Once you have reached step B.7, perform your SAP database backup, and build your target system.
Once SAP is running on the target system, and before commencing the reload of the Livecache database from the SAP database, you need to ensure that the target SAP system is pointing to the target Livecache system. Use transaction LC10 to connect the SAP and Livecache databases correctly.
Note that there are multiple connections to modify, so make sure you do this for each connection.
Once this is completed, you can perform steps C.1 to C.13.
1) You need to have release SCM / APO 4.0 or higher to use this program. If you use APO 3.X, see OSS Note 541644.
2) If you intend to upgrade (for example, SCM 4.0 to SCM 5.0) at the same time, then you must not use the /SAPAPO/OM_LC_DOWNLOAD_UPLOAD program. Instead, follow the upgrade guide and use the appropriate upgrade program.
3) If you’re using the Rapid Planning Matrix application, only the status matrix is extracted because all other data can be regenerated using requirements planning (the alternative, of saving all of the RPM data, would take much longer).
It’s a common problem, and most Functional SAP people know how to deal with it, but just in case…. My customer wanted to modify table V77RCF_USR_SGRP (User Support Group in E-Recruitment) in a production system. SAP does provide this functionality for a subset of customisation tables, but occasionally (especially in newer releases) some get left out. You may also have a custom development that requires this functionality on an extra table.
As of Release 4.6 you can maintain this setting from directly within the IMG. Position the cursor on the corresponding IMG activity and select the menu options “Edit -> Display IMG activity”. On the following screen, select the tab page “Maint.objects”. There you can see a list of the assigned Customizing objects. By double-clicking on the corresponding line, you navigate to the Customizing object and can directly set the flag ‘Current settings’ there.
As an alternative, you can also call Transaction SOBJ to access the Customizing object directly and set the flag there.
The SAP code behind this assumes that the Client Role ( transaction SCC4 ) of the client you are working in is set to Production. For other Non Modifiable systems (where Client Role is Test, Demo, etc), you need to deactivate the transport connection for that particular object (if possible) as well.
As of Basis Release 4.6, position the cursor on the corresponding IMG activity and choose Edit -> Display IMG activity. On the following screen, select Maint. (Before Basis Release 4.6, position the cursor on the corresponding IMG activity, and choose Goto -> Document attributes -> Display.)
On the following screen, choose Objects in the area Technical attributes. In both cases the system displays a list of the assigned Customizing objects. The types “V” (View) and “S” (Table (with text table)) stand for view maintenance transactions, while type “C” stands for a view cluster transaction.
For type “V” and “S” objects, the transport connection for the view or table can be deactivated as follows:
For type “C” objects, you can deactivate the transport link by turning it off for all related views or tables. Follow the steps below:
Now the Customizing object is no longer part of the transport connection and so is excluded from the changeability check.
Perform these changes in your development / customisation system, and transport through to production.
The change is active in all clients of the system.
You can also change the Customizing object in a locked client (independent of the client role).
Once the above steps are done, it is no longer possible to manually transport entries of the view or table.
I had one of those ‘doh’ moments during a recent SAP performance tuning workshop. The instructor, Tim Bohlsen, pointed out a remarkably easy way to discover how large a table buffer a running ABAP WAS instance requires to reduce buffer swaps to zero.
This is important because the easiest way to reduce your database I/O in ANY application, SAP or not, is to reduce the need to go to disk. Keeping data in the Application buffer improves response time by reducing the time (both the CPU time and the I/O time) required by the DBMS to continually retrieve that data.
In the case of an ABAP engine, you use transaction ST02 to determine if there is any swapping going on in the first place. In the case shown below, both table buffers have some swapping – it is a relatively well tuned HR/PY system, so there isn’t much table buffer swapping despite the system being up for two months. Oh, and there isn’t much point in doing this on any system except the one you wish to tune, as it will be extremely difficult to replicate the load of the target system.
In this case, we will look at the Generic Key Buffer, since it is the worst of the two Table Buffers. Selecting the buffer in question, by double clicking on the line, results in a screen showing a little more detail. This has some useful navigation features. As shown below, we are looking at the current status of the buffer, but we have the option to look at the history of the buffer. This can give us an idea of when the swaps occurred, which we can then track back to certain workloads. More importantly, we can look at the current status of the individual objects in the buffer.
Now we have the statistics for individual tables (or parts thereof) that are currently loaded into this Buffer. This data is useful in and of itself, which I will touch on in a later post, but first, select the Next View button.
The value highlighted below is the total value for Size maximum [bytes]. This is the sum of the highwater mark for each table that has been loaded into the buffer so far. In other words, the amount of storage required to accept all data requests that should be buffered, without swapping, since the instance was started.
Now, you could put this value straight in to the appropriate profile parameter and restart your system, but there are a couple of caveats.
- If a table is marked to be buffered, but has not been read yet, it will not be included in the buffer or, therefore, the buffer size yet,
- You need to examine the detail of both the snapshot and the history to determine if the correct tables are buffered or if they are correctly buffered (the Invalidations total suggests that there is some work to do in this area), and, most importantly,
- This does not tell you if you have sufficient storage available to fulfill any increase in the buffer size without causing problems elsewhere
So:
- make sure your system has been through a pay run, or a month-end (or whatever the appropriate business cycle is), before you use this method to measure the requirement,
- use sappfpar to validate the storage requirements of your new profile parameters, and
- be aware that this is only the first step towards efficient use of all of the available resources.
This won’t fix all your performance problems. However, it is an important first step. Your database vendor may make the most efficient database engine there is, but calling any DBMS to get data will always be slower than getting that data from memory.
Actually, it’s a bit of a cheat. What happens is that you’re telling the J2EE WAS that if there is no page specified (such as …/index.html), then it should open the page …/irj/portal.
1. Go to j2ee visual administrator
2. For each Server, navigate to Cluster -> Server -> Services -> HTTP Provider
3. Enter /irj/portal in the Start Page Text Field
4. Click on Save Properties
5. Restart this service from visual administrator
Access http://yourserver.yourdomain.com and your portal login page should come up.
This means that your SAP J2EE Engine Start Page will still show up if you browse to http://yourserver.yourdomain.com/index.html explicitly.
I’m posting these links for myself and anyone else who may be required to lead or assist in an upgrade to ECC6. They point to blog entries on the SAP Developer Network, which is an SAP-sponsored and developed community site for all things SAP.
How To Tackle Upgrades to SAP ERP 6.0
This blog addresses frequently asked questions about the upgrade to SAP ERP 6.0, asked by customers at user group events, projects, and other occasions. Mar. 20, 2008
How To Tackle an Upgrade (2): Technical Upgrade
In this blog, Martin Riedel, Senior Vice President and head of the SAP Global Upgrade Office, addresses frequently asked questions from customers about upgrading to SAP ERP 6.0. The questions have been gathered at user group events, on projects, and during the course of other occasions. Part 2 focuses on the technical upgrade. Mar. 31, 2008
How To Tackle an Upgrade: Implementing Functional Enhancements
The second phase of the upgrade approach is to implement functional enhancements. This blog post gives an overview of this phase and explains how crucial project management is. Apr. 13, 2008
How To Tackle an Upgrade (4): Implementing Strategic Enhancements
Part 4 of this blog describes how an upgrade to SAP ERP 6.0 provides the perfect basis for enabling your IT landscape for enterprise service oriented architecture (SOA) and what’s in it for you. Apr. 28, 2008
How To Tackle an Upgrade (5): Upgrade Services for the Planning Phase
This blog post describes which services can assist you in the planning phase of your upgrade project to SAP ERP 6.0. May. 13, 2008
How to Tackle an Upgrade (6): Enhancement Packages for SAP ERP
Part 6 of this blog series focuses on some of the most frequently asked customer questions regarding enhancement packages, support packages, and upgrades. May. 20, 2008
SAP Upgrades (7): Customers’ Experiences and Pain Points – What about Yours?
Part 7 of this blog gives you an insight into customer feedback data about SAP upgrades: What are the main challenges and pain points when planning and performing an upgrade? Jun. 21, 2008
SAP Upgrades: When Should my Organization Convert to Unicode?
Part 8 of this blog answers one of the most frequently asked customer questions: When upgrading to SAP ERP 6.0, do we have to convert to Unicode? Jul. 5, 2008
An SAP event is a “flag” that is created by using transaction SM62 View and Maintain Background Events. The commonest use of Events is to trigger jobs. Events can be triggered from the Operating System or from within SAP – either within ABAP programs and Function modules or from Transaction SM64.
Create an Event in transaction SM62. Select the Maintain radio button next to User Event Names and execute. This will present you with the Edit User Events screen. You can add, change or delete user events from here.
To use this Event as a trigger, create a job via transaction SM36. You specify the Event that will trigger this job using the Start Condition button. On the Start Condition screen, select option AFTER EVENT. The After Event fields will open for input. Fill these in and Save.
You can see which jobs are waiting for events by looking at table BTCEVTJOB (via transaction SE16). This is the system’s way of keeping track of which jobs are in the queue waiting for an event to occur. The EVENTID column contains those Events that will submit a Job. You should see several System Events here.
Now, once the event is triggered this newly created job will execute. The event can be triggered via transaction SM64 or from the Operating System (see below).
To get the event triggered from the Operating System, log into the <sid>adm user id (at the Operating System level) and go to directory /usr/sap/<sid>/SYS/exe/run. Note that in the Unix / Linux implementations, there is an alias ‘cdexe’ that will get you there as well. Run the SAPEVT executable as follows :
sapevt <event_name> -t pf=<instance_profile_directory_and_name> nr=<sys_number>
sapevt roberts_test -t pf=/usr/sap/DEV/SYS/profile/DEV_DVEBMGS00_server001 nr=00
This will raise the event, and cause the job scheduled within SAP to execute. Once the job has executed the SAP event that was in the table BTCEVTJOB will disappear.
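If you want to raise the event on a schedule (from cron, say), a small wrapper script keeps the call in one place. This is a hypothetical sketch — the sapevt path, profile name and event name are the ones from my example above, so adjust all of them for your own system:

```shell
#!/bin/sh
# Hypothetical cron wrapper around sapevt. The defaults below are the
# example values from this post -- override them via the environment
# or edit them for your own <sid> and instance profile.
SAPEVT="${SAPEVT:-/usr/sap/DEV/SYS/exe/run/sapevt}"
PROFILE="${PROFILE:-/usr/sap/DEV/SYS/profile/DEV_DVEBMGS00_server001}"
EVENT="${EVENT:-roberts_test}"

if [ -x "$SAPEVT" ]; then
    # Raise the event; any job waiting on it in SM36 will then start.
    "$SAPEVT" "$EVENT" -t "pf=$PROFILE" nr=00
else
    echo "sapevt not found or not executable at $SAPEVT" >&2
fi
```

A crontab entry can then call the wrapper directly, which is easier to audit (and to change) than embedding the full sapevt command line in the crontab itself.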
Every time a Background Job is run, an entry is created in table TBTCO. This contains entries such as JOBNAME, EVENTID, EVENTPARM, JOBCLASS etc. This means that once you’ve found your job name in this table (using SE16) you can double click on its entry and see that it was executed via an event.