There’s not much chance of it getting fixed now, as the new SDN, based on Jive 5, will be going live before the end of the year. However, the community comes to the rescue, with Sascha Wenninger posting a bookmarklet that takes https://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/26224 (with a title of SAP Community Network Blogs) and replaces it with https://weblogs.sdn.sap.com/cs/blank/view/wlg/26224, with the correct title. Unfortunately, his version doesn’t always work. For example, it assumes that the URL starts with https, which requires you to log on to SDN before you can run it. So I modified it, and present for your edification the Unwrap SDN Blog bookmarklet.
SAP has certified the Amazon Web Services cloud as a suitable platform for running production instances of some products. The Amazon cloud is probably the best known of the Infrastructure as a Service (IaaS) cloud vendors. Before making any sizing decisions, or decisions regarding using AWS for SAP systems, please check the latest version of the Operating SAP Solutions on AWS White Paper (PDF). This details the special considerations for SAP systems on AWS, including some Operating System restrictions.
However, there are some other caveats and gotchas that you need to be aware of before putting any system (SAP or otherwise – even your Development, Testing or QA instances, let alone Production instances) in any cloud environment. It is sometimes tempting, even at a very high level, to think of cloud-based infrastructure as a form of what used to be called remote computing, where the datacenter is located some distance from the users, administrators and developers, just much cheaper to use and much quicker to provision. For most parts of an SAP implementation, this does hold true; users connect via NWBC, a browser or the
SAP GUI to a DNS name, and manipulate the information they find – they add to it, update it, share it, regardless of where it’s stored and the computer(s) used to perform the work.
However, this view glosses over a key concept of cloud computing: the idea of commodity virtualisation of everything. So, bearing this in mind, let’s explore some important lessons about Cloud Computing.
Lesson 0: Only the paranoid survive
Andrew Grove was chairman of Intel when he published a business book called ‘Only the Paranoid Survive’. It sounds like an awfully cold way to deal with business colleagues, but when it comes down to me and the computers, it has been a useful attitude.
Lesson 1: SLAs Are Meaningless
You can’t compare any kind of hosting service based on its advertised SLAs. Instead, base your comparisons on their response to you and your company’s issues. Regardless of what they say, ‘stuff’ will happen. Yes, Amazon has a service level agreement for EC2 of 99.95% uptime, averaged over the last year. You would imagine that this was set (by Amazon) based on historical information. However, as they say in the financial pages, “historical behaviour is not an indicator of future performance”. And when ‘stuff’ happens, where are you in the queue for personal attention, recompense, or even just a communication of some sort?
By the way, due mainly to the recent outage, EC2’s uptime over the last year is around 99.5%.
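To put those numbers in perspective, a quick back-of-the-envelope calculation (there are 8,760 hours in a year):

# allowed downtime per year at the promised 99.95% uptime
echo '8760 * 0.0005' | bc -l    # about 4.4 hours
# downtime implied by the actual 99.5% uptime
echo '8760 * 0.005' | bc -l     # about 43.8 hours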
Lesson 2: YOUR Architecture CAN save You from Cloud Failures, but …
Disaster Recovery processes have two major objectives: the Recovery Time Objective (RTO), which is the duration of time (an SLA, really) within which a business process must be restored after a disaster (or disruption), and the Recovery Point Objective (RPO), which describes the acceptable amount of data loss measured in time. By the way, the O stands for Objective, not Agreement or Mandate (see Lesson 1).
This means that if an instance becomes unavailable to the business, they want a working system within the RTO, with data loss of less than the RPO. For example, an RTO of four hours and an RPO of 15 minutes means the system must be back within four hours, having lost no more than the last 15 minutes of data. This requires the same thinking and planning that goes into Disaster Recovery planning for an in-house system. In turn, this means managing and planning for Disaster Recovery and Data Security, and allowing for the typical requirements of a Disaster Recovery Plan, except with a Cloud twist to them…
- You still need to choose the right infrastructure,
i.e. does your vendor have separate physical locations?
- You need to manage your view of the infrastructure,
i.e. how easy is it to transfer backups from one physical location to another?
- You still need to test the transfer of backup data,
- You still need to test the restore / restart of your system in the alternate location,
- Your vendor may provide alternate physical locations,
but do you have / need an alternate provider?
- and so on.
Lesson 3: There is a BIG difference between virtual machines and the hardware.
Things get a little more difficult at the micro level. Fault-tolerant environments are a centerpiece of the cloud hype, but generally, most developers don’t see, and therefore don’t think about, the difference between virtual and physical hardware. The issue with virtual machines (in-house virtualisation or clouds) is that the view from the operating system ends at the hypervisor. You cannot see what happens at the metal. Now, for computer systems to work as we have grown to expect, certain things are sacrosanct. This is because without them, there is no guarantee that what we write will be there when we go to read it (this applies just as much to memory as it does to disk).
An example is the sync() or fsync() system call, which instructs the Operating System to write all the data currently in the filesystem buffers out to disk. Now, in virtual machines, whether or not fsync() does what it should is a bit of a mystery. In fact, there have been suggestions (at least according to sources close to Reddit) that in particular circumstances and under high load, Amazon’s Elastic Block Store will happily accept calls to fsync(), saying that the data has been written to disk when it may not have been.
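You can’t see through the hypervisor, but you can at least look for the symptoms. A crude smoke test, assuming GNU dd: time a buffered write against one where dd forces an fsync() before exiting. If the fsync() version comes back suspiciously close to the buffered one on a busy volume, start asking questions.

# plain buffered write - the data may still be sitting in the page cache
time dd if=/dev/zero of=/tmp/fsync_test bs=1M count=256
# identical write, but dd issues fsync() on the output file before exiting
time dd if=/dev/zero of=/tmp/fsync_test bs=1M count=256 conv=fsync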
No amount of virtual architecture is going to save you from virtual hardware that lies.
Lesson 4: You don’t HAVE to put ANYTHING in the cloud.
The general rule is that if the machine / image dies, then you must be able to recover the data or restore the service. If you’re hosting a database server, then it will need to be restored or recovered. On the other hand, an application server is much simpler; you just need to rewrite some configuration files. Once you start looking at it like this, it may make sense for a more risk-averse site to put some server types into the cloud and leave others in the data centre. In short, Virtualisation and Cloud computing are not a universal panacea for hardware resource problems.
Of course, many people would say that “commodity” computing is a misnomer, because servers are not really something that should be commoditized, that a “pick one of four sizes” offering is insulting. To a certain extent this is true, but Cloud computing servers are so cheap that you can build around inefficiencies in some parts of the commodity offering by overcompensating in others.
For example, once people realise how cheap CPU and memory are on IaaS services, they tend to go at least one ‘size’ higher than they would for an in-house server, and they still see massive savings. Regardless of what the purist thinks, it is becoming much more business-efficient to throw hardware at performance problems than it is to spend time investigating the root cause, which leads into…
Lesson 5: You still need to tune and manage your systems.
In Cloud computing, costs are tied directly to resource usage. The virtues of cloud computing are a double-edged sword; because provisioning systems is so easy, you may see developers running a dozen tests at once, instead of one after another, to speed up implementation cycles. This means any inefficiencies in the base systems used for such testing will be magnified, which will directly impact costs.
Just as importantly, resource usage variations in your production systems will show up directly in the bill. However, the customer or business user paying the bill will want to know why these variations have occurred. Are they due to different processing rules, different volumes, program or system changes? You want to see a consistent relationship between the business workload and the resource usage (and therefore cost). This makes budgeting and planning much easier for the Business, and provides them with confidence in both the SAP support teams and the cloud platform itself.
Lesson 6: It is not enough to be secure….
…you need to be seen to be secure. Amazon already performs regular scans of the AWS entry points, and independent security firms perform regular external vulnerability threat assessments, but these are checks of the AWS infrastructure (such as their payment gateways, user security and so on). They don’t replace your own vulnerability scans and penetration tests. Because a penetration test may be mistaken for a network attack, Amazon asks to be advised of any tests you wish to perform, and these must be limited to your own instances.
Being seen to be secure also means using all the features (including the Amazon Virtual Private Cloud) that are referenced in the AWS Security White Paper. This document, which is updated regularly, describes Amazon’s physical and operational security principles and practices. It includes a description of the shared responsibility for security, a summary of their control environment, a review of secure design principles, and detailed information about the security and backup considerations related to each part of AWS, including the Virtual Private Cloud, EC2, and the Simple Storage Service.
The new AWS Risk and Compliance White Paper covers a number of important topics including (again) the shared responsibility model, additional information about the control environment and how to evaluate it, and detailed information about the AWS certifications. Importantly, it also includes a section on key compliance issues which addresses a number of topics that get asked about on a regular basis.
There are differences between managing real servers, virtual servers and Cloud-based servers. However, much of what is required for SAP landscapes and implementations is the same whichever platform you use. In fact, the Basis team may be the only people who notice the difference. One of the biggest differences is the perception of control and ownership, because you can’t “hug your server” any more. What are the biggest differences you see, and how do you see them impacting you if or when your organisation starts implementing SAP systems in the Amazon Cloud?
While ‘resting between engagements’, I took the opportunity to install and configure a Solution Manager system on a cloud host local to Australia. The main reason was for a demojam entry, but it’s always good to keep my skills up to date. The target system provided was a Windows 2008 R2 system.
Windows Server 2003 R2 and Windows Server 2003 have only functional differences; they use the same SAP kernel version, the same service packs, and the same hot fixes and security fixes. By contrast, going from Windows Server 2008 to Windows Server 2008 R2 requires an updated kernel (see SAP Note 1383873 – Windows Server 2008 R2 Support). Now, I could say I was using 2008 R2 for all the right reasons; for example, according to Frequently Asked Questions – SAP on Windows Server 2008 R2:
The main benefits of Windows 2008 R2 are:
- Improved scalability
Windows Server 2008 R2 supports up to 256 logical processors.
- Improved virtualization features
On Hyper-V in Windows Server 2008 R2, the number of cores supported by the hypervisor has been increased (up to 32). Another enhancement is Live Migration support through the implementation of Cluster Shared Volumes (CSV). Virtual machines can be migrated without service interruption between the cluster members.
- Power usage
Windows Server 2008 R2 reduces processor power consumption in server computers with multi-core processors using a feature known as Core Parking. Core Parking allows Windows Server 2008 R2 to consolidate processing onto the minimum number of required processor cores, and suspends inactive processor cores. The advantage of Core Parking over traditional servers is a 10-15% energy saving for the same workload.
For a complete list of features, see the FAQ linked above.
However, the real reason was that Windows 2008 R2 was already installed on the server I was using. This became a bit of a challenge, as the install kit I used wasn’t actually for 2008 R2! I found OSS Note 1383873 fairly quickly, but even after installing the appropriate kernel as suggested, sapstartsrv.exe (used by the SAPxxx_NN service) would not start correctly. I discovered via Google that I needed to install an extra Microsoft C runtime (vcredist) to run the new SAP kernel.
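For anyone else who hits this, the fix itself takes a minute or two – download the runtime package from Microsoft and run a quiet install (the x64 package name shown here is an example only; check the OSS Notes for the exact runtime your kernel release needs):

rem quiet install of the Microsoft Visual C++ runtime needed by the new kernel
vcredist_x64.exe /q
rem then restart the SAP service so sapstartsrv.exe picks up the new libraries
net stop SAP<SID>_<NN>
net start SAP<SID>_<NN>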
The reason for posting this as a blog (I’ll also add it to the wiki) is that while I’ve since found out that this is ‘general knowledge’, it wasn’t described in OSS Note 1383873 – Windows Server 2008 R2 Support, and in fact, the only reference I found to my symptom was one line in OSS Note 1494740 – SAP system migration from Windows 2003 to 2008 (R2) 64-bit, AFTER I had chased the error down through Google.
Are you implementing or using Windows 2008 R2? If not, why not? Corporate standards? Lack of product support? Lack of in-house knowledge?
An SAP Administrator needs to know about more than SAP; they need to know about the ecosystem that their systems run in. By that, I mean things like how to use features of the Operating System and DBMS that their systems run on to provide value to the system or business owner. For example, you probably know that drivers, services or software in Windows can crash without you even being aware of it happening. Sometimes this can affect your Solution Manager system or another non-SAP part of your Landscape, or you may just want to monitor something like an NSP Developer Edition system. Whichever type of system we are talking about, sometimes the first sign of trouble is when you (or even worse, someone in the business) needs the system right now. What would be useful would be a tool that notifies you when certain activities occur…
The Windows Event Viewer lets you launch a program, send an email (if the server has an email client installed) or provide some other alert that something has occurred. You do this by attaching a task to an Event in the Event Viewer. To do this you need to find your Event within Event Viewer. Note there are slight differences in the initial screens between Windows XP, Windows 7, Windows 2003 and Windows 2008.
Once you’ve found the Event you want to report on, look in the right hand panel. There you will see an option Attach task to this Event. Selecting this will pop up a window with all of our options.
For example, we can run a program, send an email or display a pop-up alert.
If you want to run a program, there are some very useful command-line and PowerShell utilities that can come in handy here. I won’t go into much detail as they are well documented on the Microsoft website, but examples include running the program CMD.EXE with either the /c switch to carry out a command string and then stop, or the /k switch to carry it out and continue afterwards (see here for more details on command line switches).
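As a trivial example, the attached task could append a time-stamped line to a log file each time the event fires (the log path is just an illustration):

rem append a time-stamped marker, then exit (/c)
cmd.exe /c echo %DATE% %TIME% event fired >> C:\temp\event_history.log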
You can also use the WEVTUTIL command to automatically poll the event viewer for data and perform actions like writing a log to the Administrator or <sid>adm desktop. This would make it easier to send selected data to second level support or SAP.
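I won’t reproduce the full WEVTUTIL documentation here, but as a sketch of the idea – pull the last 20 error-level entries from the System log as readable text and drop them on the desktop, ready to be attached to a support message (the level and count are arbitrary choices):

wevtutil qe System "/q:*[System[(Level=2)]]" /c:20 /rd:true /f:text > "%USERPROFILE%\Desktop\system_errors.txt"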
You can also use a PowerShell command to automatically generate a Windows System Health Report:
Get-RmsSystemHealthReport -Path <drive>:\Report [-StartTime <start_time>] [-EndTime <end_time>] -ReportType <report_type>
Any tasks you add can be viewed and edited in the Windows Task Scheduler. The important thing to remember is that being able to add actions to events can be a real time-saver when it comes to diagnosing problems in Windows.
What Operating System and DBMS tools and scripts have you found useful for monitoring systems and software? Are the Windows tools better than the Unix / Linux tools? What about agents for centralised monitors? Which do you prefer?
Over Christmas / New Year, I’ll be upgrading a customer from a very old (as in unsupported by both the vendor and SAP) release of their database to the latest release supported by 46C. As part of the exercise, we are bringing the Support Packs (Support Stacks came in after 4.6C) up to date. However, when I loaded the Support Packs into the target system’s /usr/sap/trans, I couldn’t decompress them for processing via transaction SPAM.
I transferred the latest SPAM (SAPKD00040) and the 50 Support Packs (yes, I know) required from http://service.sap.com/swdc to the UNIX server via my PC. When I started decompressing the Support Packs on the UNIX system, everything went OK for the BASIS (KB46Cxx.CAR) and ABAP (KA46Cxx.CAR) Support Packs, but when I went to decompress some of the R3 Support Packages, SAPCAR failed (with a less than useful message).
The tool used to decompress the CAR files is SAPCAR – SAP’s own version of the UNIX / Linux tool tar. I sat back and had a think about what SAPCAR actually does, and what could have gone wrong. My first thought was that I had corrupted the files somehow in the transfer process. I still had the CAR files on my PC, so I downloaded SAPCAR_5-10000854.EXE (4.6D 32-BIT Windows Server on IA32 32bit – a Windows-compatible version of SAPCAR) to test whether the CAR files on the PC were OK. I went to http://service.sap.com/swdc, selected ‘Search for Support Packages and Patches in the Archive’, and searched for SAPCAR, but you can also search directly for SAPCAR_5-10000854.EXE (remember that the part of the name following SAPCAR will differ between SAP releases and platforms).
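Testing an archive is then a one-liner per file; the options are the same for every SAPCAR platform (-tvf just lists the contents, which is a gentler first check than extracting):

rem list the archive contents, then extract them into the current directory
SAPCAR_5-10000854.EXE -tvf KH46C36.CAR
SAPCAR_5-10000854.EXE -xvf KH46C36.CAR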
When I attempted to decompress KH46C36.CAR on my PC using SAPCAR_5-10000854.EXE, it worked quite happily. More importantly, it also worked for all the CAR files that were causing me problems on the AIX server.
Now, remember that I was thinking that the original problem was caused by corruption during the file transfer, either from SAP to my PC, or from my PC to the server. The logical conclusion, if that was the case, would be to restart the transfer at whichever step had corrupted the file(s). However, because it appeared that the problem may have been with the UNIX SAPCAR, I wondered whether the decompressed files created on the Windows system would work with the AIX system. As it turned out, after I transferred the decompressed files from Windows to the EPS/in directory on the AIX system, I was able to import the Support Package using SPAM.
This makes sense, given that what we are working with is the source of the platform-independent ABAP code. The code that ends up in the transport may look different depending on the machine architecture (read up on little endian versus big endian), but the contents of the transport will be the same across platforms for the same release of SAP. On the other hand, if I wanted to upgrade AIX or DBMS specific parts of this particular installation, I would be upgrading the kernel (i.e. /sapmnt/XXX/exe for 4.6C) files, not loading my data into the system via SPAM.
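If you’ve never watched endianness in action, here’s a quick demonstration to run on both boxes – the same four bytes are reported as a different value by a little endian Intel PC and a big endian POWER machine like my AIX server:

# interpret the four bytes 'A','B','C','D' as a single 4-byte integer, in hex
printf ABCD | od -t x4
# little endian (x86) shows 44434241; big endian (AIX on POWER) shows 41424344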
More to the point, what does this get me?
I can get the OS / DBMS independent upgrades completed, so that the testers don’t get held up, and I get this done before I get distracted by tracking down the kernel error (i.e. why the AIX SAPCAR doesn’t work). The division between SAP application code and the Operating System / DBMS dependent code allows for some interesting ways of solving problems. Where have you used code or executables for one platform to help fix a problem on another platform?
Some thoughts on the ‘On Premise, On Demand, On Device’ mantra which was very evident at TechEd in Las Vegas this year.
* There was less emphasis on the iPad and iPad nano (aka iPhone), compared to the impression I had received about SAPPHIRE (despite the presence in the timetable of the session CD125 iPhone and iPad in the Enterprise). I do know that the number of Android devices on the market has driven their prices well below those of the equivalent Apple devices, with the implication being that choosing one device type over another may make the difference in the financial viability of a large scale mobile rollout.
* Another issue was device standardisation (see presentation CD123 The Device Challenge – Selecting the Right Mobile Devices for Your Enterprise). On the one hand, designing interfaces to be device agnostic means you end up with the lowest common denominator, but on the other hand, each device type does have unique capabilities. One interesting approach with some potential is a product called Caffeine (you’ll need Code Exchange access), written and released into the public domain by an SAP employee. It enables the execution of ABAP on new platforms, such as Java (JVM), Android (Dalvik VM) and iOS (Objective-C). The most obvious use case is where an ABAP programmer writes ABAP code (that runs on the device, not the server) and this code is used by device specific programs. The idea here is that the ABAP people know the business structure and logic, and this is written once, while the device specific coding is handled by device specific programmers.
On the minimalist end of the scale, my team got a bit of praise at the Innovation Weekend for having a simple HTML interface that used a server based PHP program with REST APIs to communicate with an application we developed in SAP’s River cloud. This meant we could have demonstrated the product with much older technology than Androids or iPhones – an important consideration when dealing with volunteers and non-profit organisations. A much more impressive example came from the 2010 Las Vegas Demo Jam winners Matt Harding and Al Templeton (BTW, I’m not a barbarian, I’m a Tasmanian was written about these guys), who used an HTML5 interface for data entry – requiring a modern browser, but still relatively device independent.
* As an aside, Rui Nogueira gave a presentation on Code Exchange. Some people (myself included) had some issues with what we saw as onerous licensing requirements. I was able to have what was effectively a one-on-one with Rui later in the week, and have a separate post percolating away on that, to be posted real soon.
* The current and soon to be released features of the Adaptive Computing tools (see ALM208 Adaptive Computing Virtualization and ALM214 Virtual Reality) now let you manage the entire stack, from the physical in-house AND cloud resources, right up to starting and stopping individual SAP instances. There’s an argument that vendor specific tools may do a better job of managing these resources, but the whole point is that the resources at your disposal may not be vendor specific. I certainly got the impression that the latest release (due out in GA early 2011) provides more than enough sophistication for a site where the majority of the workload is SAP based. And the ACC tools come with the NetWeaver license, at no extra cost except for configuration.
* Business ByDesign will come with an SDK (see CD107 Developing SAP Business ByDesign Applications Using Partner Development Infrastructure), supposedly available to partners only, for creating and modifying functionality. The version we got to use in the hands-on session was a bit clunky, but it was functional, and it was still a pre-release version. From my perspective, the elephant in the room is that sizing becomes even more of a black art; architects can estimate what queries will be made and how often, and the impact that this will have on system load (from hardware resources to virtual server to network load to presentation device), but this can all be blown out of the water by a developer or end user ‘having a bright idea’. It’s a reminder that the physical infrastructure needs to be supported by a new (for SAP, anyway) type of agile process, to allow for quick but accurate provision of the resources to back up demand surges, while making sure that they are in fact real demand and not caused by an error in the application.
* To me, the biggest takeaway from the conference was one phrase, heard especially from the SAP Mentors (I know a few and have worked with a couple of them, so I may have got to go and hear a few things I possibly shouldn’t have…):
“It’s not your Grand Dad’s / Grand Ma’s SAP any more”
Whether you’re part of a System Integrator or large partner, like I am, or an independent consultant, or somewhere in between, we all need to get up to speed on what tools and techniques are available to us and our customers. While conferences like SAP TechEd provide invaluable networking opportunities, you don’t have to go… for example, most of the SAP TechEd 2010 presentations are available off the SCN e-learning page (search for the SAP TechEd 2010 link).
But there’s more (no steak knives though) …
1) ondemand.com is an SAP site which allows you free access to perform BI analytics on small sets of data (you can pay for more storage if you wish);
2) Sustainability is supported by SAP’s Carbon Impact on Demand;
3) there is the live Collaborative Decision Making site;
4) and don’t forget the Development versions of the latest SAP software, from Crystal Reports to ABAP, that you can install on your laptop, at home or in the cloud.
It also helps to keep up to date with the latest news; for example, did you know what was happening to Web Dynpro Java? See The Future of SAP Java UIs – Breaking News and Customer Dialogue from SAP TechEd Las Vegas and Kiss of Death for Web Dynpro Java – The Follow-Up Questions.
Life is changing, SAP is changing, and while there is always too much information to absorb and lots of new things clamouring for our attention, there are easy ways to keep up to date with SAP the company, SAP the product(s) and SAP the industry.
A little bit of History….
If you’ve administered, or even worked on, any release of R3 or the other ABAP powered SAP systems, you’ll be familiar with the user-ids SAP* and DDIC. The SAP* user, in particular, is very powerful, but early releases of R3 had some flaws in how the SAP* password was stored or calculated. You created a SAP* userid with its own password (encrypted and stored, just like all the other passwords), or you used the default settings (including the default password) for SAP*. The problem was that if I didn’t know the SAP* password, but could access the database (via telnet, as most R3 systems ran on some UNIX variant back then), all I had to do was delete the SAP* user record (using SQL) and log on using the very well known defaults.
R3 is a business system, owned by the business, and we technical people have no right to go poking around where we are not wanted (OK, a bit tongue-in-cheek, but there’s more than a grain of truth in there). To help resolve this issue, somewhere around version 3.0, SAP introduced the profile parameter login/no_automatic_user_sapstar which, when set, meant you had to have an explicitly defined SAP* user record.
Of course, if you really have to log in as SAP*, and you know the password of another user in the same client, you can still modify the existing SAP* user record via SQL. Changing passwords via SQL isn’t as risky as you’d think, so long as operating system access to the database is restricted. When I have done this, it’s been on behalf of the System Administrators, because they or we (OK, I) forgot or lost the password, or got locked out, or someone changed the password and went home without telling anyone else.
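For illustration only, here’s the shape of what I mean, assuming an Oracle database, the classic SAPR3 schema owner and the old BCODE hash column (all of which vary by release and database, so treat the details as assumptions; KNOWNUSER and client 100 are placeholders): copy the password hash of a user whose password you do know over the SAP* record.

sqlplus sapr3 <<EOF
-- overwrite SAP*'s password hash with that of a user whose password is known
UPDATE usr02
   SET bcode = (SELECT bcode FROM usr02
                 WHERE mandt = '100' AND bname = 'KNOWNUSER')
 WHERE mandt = '100' AND bname = 'SAP*';
COMMIT;
EOF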
Back to the 21st Century…
Now, this was all pre ABAP v Java (sorry, that should probably be ABAP and Java). In dual-stack systems, the day-to-day Java equivalent of the SAP* user is the J2EE-ADMIN user, which is usually (but not always) defined in the ABAP engine. In a Java-only system, it is the Administrator user, which is defined in the UME link from http://server:port/index.html. The Java engine, whether on its own or part of a dual-stack system, also has a SAP* user, but it comes with some extra properties…
1. The system is configured, by default, to not allow access via SAP* at all,
2. When the system is configured to allow SAP* to log in, no other user can log in,
3. and, of course, configuration changes require a restart.
Now, if you lose or need to reset the Administrator or J2EE-ADMIN password, you can reset it via the SAP* user; but this requires the following steps:
- Enable the SAP* logon via the Config Tool,
- Restart the Server (to allow the previous step to take effect),
- Reset the affected passwords
- Disable the SAP* logon via the Config Tool, and
- Restart the Server
Sumit Madral has very recently published a good article on how to perform the reconfiguration for SAP* on Java systems, so I won’t go into any more detail. Suffice it to say that this requires two server restarts before you can start the work you were tasked with in the first place.
…and the whole point of the blog is …
I work for an SI, which means we have a lot of systems to keep track of users and passwords for. Many of us use simple algorithms to keep track of our passwords, such as PASSWORD = ‘a phrase’ + SID + incremental-value. However, if you’ve read this far, you may have guessed that I’ve been caught out by incorrect or locked passwords a few times, including for the Administrator and J2EE-ADMIN users.
When it happened once too often, I decided I needed a preventative measure. Now, on any Java systems I support, I create an Admin_Backup user, with limited authority, to be used solely for resetting / unlocking the Administrator and J2EE-ADMIN users. It is a backup mechanism; I know I’ll make mistakes, so I prepare for them.
It started with a request to bring a 46C landscape up to date. The starting levels for the Basis, ABA and R3 Support Packages were all in the low 20s, while the target level for each of them was level 53.
This meant I needed to install about 90 support packs per instance. Comparing the sizes of the Support Packages against the space available in /usr/sap/trans suggested that I might be able to fit everything in without annoying the Storage Management team, if I was able to clean up all the old transports.
Which was where I hit the snag:
zuxdc22:dp1adm 19> tp check all pf=TP_DOMAIN_DP1.PFL
This is tp version 305.13.24 (release 46D) for ANY database
check>Log file is written to /usr/sap/trans/tmp/CHECK.LOG
check>Collected 22 filenames from [/usr/sap/trans/buffer/.]
check>Collected 5 Systemnames from [/usr/sap/trans/buffer/.]
check>Collected 00160 out of 00160 entries from buffer ZP1.
check>Collected 01233 out of 01233 entries from buffer TP1.
check>Collected 03037 out of 03189 entries from buffer PP1.
check>Collected 00094 out of 03254 entries from buffer QP1.
check>Collected 00023 out of 02671 entries from buffer DP1.
check>Collected 04547 entries from buffers
check>Collected 5082 filenames from [/usr/sap/trans/cofiles/.]
check>Found 3 invalid filenames on Cofile-directory
check>No Cofile found for TA STOPMARK
ERROR: A target system group (/U9C_ALR/) is used with a name longer than 3.
This is only possible with NBUFFORM=TRUE!
ERROR: EXIT(16) -> process ID is: 87782
tp returncode summary:
TOOLS: Highest return code of single steps was: 16
ERRORS: Highest tp internal error was: 0204
tp finished with return code: 204
parameter is missing
However, when I checked the domain profile TP_DOMAIN_DP1.PFL, the values for NBUFFORM (and a related parameter, CTC) were set correctly:
TRANSDIR = /usr/sap/trans
DP1/CTC = 1
DP1/DBHOST = zuxdc22
DP1/DBNAME = DP1
DP1/DBTYPE = db6
DP1/NBUFFORM = 1
But that’s OK – this problem (NBUFFORM and CTC are set correctly, but don’t take effect) would probably be fixed when I upgraded the kernel, which I was going to have to do as part of the Support Pack upgrades. But I needed to upgrade the kernel when I upgraded the Support Packs, and I couldn’t reliably do that until I had cleaned out the transport directories, which required a kernel upgrade… and of course, what happens if the kernel upgrade doesn’t fix the problem? I needed another solution.
Sometimes you need more than SAP knowledge to get things going. At this point, I knew there was at least one ‘invalid’ Target System Group in the transport directories, with at least one transport using it. So I decided to find out what that transport (and any others with the same Target System Group!) was…
zuxdc22:dp1adm 21> cd ../cofiles
zuxdc22:dp1adm 22> pwd
/usr/sap/trans/cofiles
zuxdc22:dp1adm 23> grep U9C_ALR *.*
K111738.DP1:HERMANNMA K /U9C_ALR/ 3 0 0 0 0 0 0 0 0 1 46C . 0 0 0 0 0 000
Remembering that the contents of the /usr/sap/trans/cofiles directory are text files (the /usr/sap/trans/data files are binary), I was able to edit the cofile for the transport in error (I used vi because this was on an AIX system).
zuxdc22:dp1adm 24> vi K111738.DP1
zuxdc22:dp1adm 25> head K111738.DP1
HERMANNMA K U9C 3 0 0 0 0 0 0 0 0 1 46C . 0 0 0 0 0 000
I corrected the transport in error, and reran tp check all to see if there was anything else in error, before running tp testold or tp clearold.
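For reference, once tp check all runs clean, the cleanup sequence itself is short – something along these lines, with testold reporting what would be removed and clearold actually removing data files, cofiles and logs older than the limits in the profile:

tp check all pf=TP_DOMAIN_DP1.PFL
# dry run: report what would be deleted without touching anything
tp testold all pf=TP_DOMAIN_DP1.PFL
# the real thing: remove the old data files, cofiles and logs
tp clearold all pf=TP_DOMAIN_DP1.PFL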
This is a fairly esoteric example of where pure SAP skills won’t help with an SAP related problem. It was actually worse than I’ve described above, as my second run of tp check all highlighted a Target System Group that had 45 transports belonging to it. I fixed these, thinking that if there were any more errors I would have to find a different way to approach the problem, but they were the last of them.
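With 45 of them, vi gets old very quickly. Since cofiles are plain text, a small shell loop will do the bulk of the work – a sketch only, padding the replacement to the same width to keep the columns aligned (take a copy of the cofiles directory first; this is very much a ‘you broke it, you keep both pieces’ manoeuvre):

cd /usr/sap/trans/cofiles
# fix every cofile that still references the invalid target system group
for f in `grep -l '/U9C_ALR/' K*.*`; do
    sed 's|/U9C_ALR/|U9C      |' $f > $f.fix && mv $f.fix $f
done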
Depending on the number of errors, I would also have looked at installing the latest copies of the tp programs and modules in a separate directory. Without having gone through it, I can’t think of any logical problems, but it would have been an interesting exercise… It may have been more time consuming, though, which also needs to be taken into consideration. For what it’s worth, the way to check the release level of the tp program is described in OSS Note 155350.
When have you had to go above and beyond SAP to get the job done? What non-SAP skills do you get to use on a regular basis in your SAP work?
[Screenshot: two configuration entries where the Administrative User values are the same, but the Administrative Password fields are different. Since they use the same User Source (the ABAP engine), one of the values (or both!) must be incorrect.]
I recently came across an interesting article, SAP’s SME Solutions – A Guide to the Product Portfolio. It breaks down the four SAP products for SMEs by size, functionality, industry coverage, deployment options and cost of ownership.
The most important point the post makes is that there exists a range of SMEs, and that a one-size software solution does not fit all. This leads to some further points worth noting.
The smaller the SME, the less likely they are to adopt complex technology. While there is movement to Linux and open source ERPs (because of the TCO perceptions), when they do get into technology, they tend to select Microsoft platforms (e.g. .Net, SQL Server).
Because of TCO concerns, the smaller SMEs were the first to adopt software as a service (SaaS), and that model continues to gain traction within the SME market. The implication is that any SME strategy must include a SaaS strategy.
- SAP Business Suite – The “original” suite of applications for enterprise-class customers. Includes ERP, CRM, PLM, SCM and SRM. Built on the original (and evolving) ABAP/Java platform.
- SAP Business All-in-One – A partially “pre-configured” version of Business Suite, offering 80% configured solutions for larger SMEs in a wide range of industries.
- SAP Business One – A completely different product designed for smaller SMEs. Acquired in 2002 (through TopManage), the product is developed in Microsoft .Net technologies.
- SAP Business ByDesign – A complete software as a service (SaaS) system developed by SAP and introduced in 2007. For SAP, it’s an entirely new approach to software design and deployment.
Given that it’s a blog post, the article does a good job of detailing the four SAP products that resulted from the new SME strategy, albeit at a high-level view. While it won’t answer all your questions, it will give you a good starting point, especially about costs and appropriate products, for your conversation with SAP or your implementation partner.