SAP Process Orchestration is ready

I have been working on creating a course for PI developers so they can learn how to use Process Orchestration/BPMN. I was missing a good tutorial for getting started with BPMN that could help my customers move to the single stack.

So I decided to create a course on the topic of BPMN and PI. One of the things I learned the most from was interviewing some of the people who have been using BPMN for some time. In this blog I’ll share some of the information that I got from the interviews.

  • BPMN is a beautiful tool that we, as PI developers, must learn how to use. Yes, the word beautiful about an SAP product. Really nice. The reason is that it enables developers to draw processes much better, which makes them easier to understand. There is also Business Rules Management (BRM), which makes some actions easier.
  • BPM is easy to get started with. It is not so difficult to use if you have a background in ccBPM. The basic building blocks are much the same, and then it can do a bit more. Most experts agreed that it is a good idea to start small, with a simple process. Then you can enhance it to make sure that you cover the business requirements. If you start by designing the full process, you will have a hard time validating it.
  • Performance is much better. So there is no longer a requirement to avoid BPMN at all costs. With ccBPM the goal was to avoid using it because of its negative performance impact. The people I interviewed did not share this concern; they thought that BPMN was a much better performing tool and that PO is a good, solid platform.
  • BPMN can be eliminated in many patterns in a migration. In a lot of instances we can avoid using BPMN when migrating. A lot of ccBPMs are from old releases of XI, where we often had to create collect patterns and async/sync bridges. This means that you will not end up with the same number of BPMN processes as ccBPMs if you do a migration. In some scenarios you may also end up creating new processes, to support the business process better.
  • Data structures/message types are validated much more. In ccBPM you could put whatever message into the process. BPMN requires you to have the exact data structure, so you have to define the data as it is. This causes some issues if you want to bring IDoc data into the process. One workaround is to use CDATA sections for the data you don’t want to define.
  • Versioning can cause some challenges. The best option is to use NWDI to handle the projects. NWDI makes change management and version control much better. The challenge is that not all clients have NWDI. In that case there is the option to export the software components manually.
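As a sketch of the CDATA workaround mentioned above (the wrapper element names here are hypothetical, not a fixed SAP structure), the raw IDoc XML is carried as an opaque string, so the process never validates its inner structure:

```xml
<!-- Hypothetical wrapper message type: the IDoc payload is passed
     through as a CDATA section, so BPMN does not validate it -->
<PayloadWrapper>
  <Content><![CDATA[<MATMAS05><IDOC>...</IDOC></MATMAS05>]]></Content>
</PayloadWrapper>
```

The process only needs to define the wrapper; the IDoc data travels through untouched and can be unwrapped again in a mapping on the way out.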

You can get access to all the information from the interviews at

*) I don’t know if any of these issues have changed with the newer service packs, but this is the result of my interviews.

Book of the week: SAP NetWeaver PI Development, 2nd edition

This week’s book is a rather technical book about how to develop in SAP PI.

The book is SAP NetWeaver PI Development: Practical Guide by Valentin Nicolescu et al.


I wanted to talk some more about the book, so I have created a video that you can see here.

Using SFTP on Windows in PI

Last December I wrote about how to use SFTP/SSH from a Unix server without using the Seeburger adapter. SAP PI/XI cannot access SSH/SFTP sites directly, but this script can help. There have been many requests for a version that also works on Windows. I do not have access to a Windows server with PI or XI, so it is a little difficult for me to test.

I have now written the following script, which works on my Vista laptop. I have not tested it on Windows 2003 or Windows 2008, where most PI systems run.

I have used PuTTY for creating the SSH connection, specifically pscp (an SCP client, i.e. command-line secure file copy), to try something different than SFTP. SCP makes it easier to get the files that exist in a directory. Pscp should be downloaded and saved in the same directory as the script.

The script looks like the following.

@echo off
REM %1 target file (PI %F)
REM %2 remote location of the files, e.g. user@server:path/filter*
REM %3 SSH password

REM Reset the target file
type NUL > %1

REM Download directory next to the target file
SET TARGETDIR=%~d1%~p1download

IF NOT EXIST %TARGETDIR% mkdir %TARGETDIR%
del /Q %TARGETDIR%\*

REM Copy the remote files to the download directory
pscp.exe -pw %3 %2 %TARGETDIR%

REM Concatenate the downloaded files into the target file
type %TARGETDIR%\* > %1

The script takes the following parameters.

  1. The %F parameter, which is the name of the file the adapter is currently reading.
  2. The location of the files on the server side, in the form “user@server:path/filter*”, e.g. “root@server:dir/SKB*”. This logs on with the user root on the server, then looks in the directory dir, relative to the login directory, and selects all files starting with SKB.
  3. The user’s password.

The command in the communication channel should look something like the following.

C:\scripts\sshftp.bat %F user@server:path/filter* password

I have only tested with password authentication, but pscp might also work with SSH keys.

SAP PI XML mappings using Groovy

Creating XML mappings in Java has always been difficult for me; it is possible, but I would prefer other tools. I was looking at scripting languages like Ruby/JRuby and Groovy for creating some web apps. Those languages seem quite hot right now. On the SCN Wiki a group has implemented Grails (Groovy on Rails) on the NetWeaver system, as Composition on Grails. With this tool it is possible to create applications with a Web Dynpro look and feel. Grails is a framework for creating web apps with less coding.

Groovy is a scripting language designed on the basis of Java. Groovy scripts are compiled into Java classes, and Java and Groovy can be mixed. This makes implementation easier: just start writing Java, and when you feel like using some of the smarter features of Groovy, you can.

While I was looking at Grails, I thought that it would be possible to use Groovy in PI. One place could be in Java mappings. I’ll describe the steps that I have taken to implement this.

  1. Download and install the Groovy library.
  2. Get the Groovy plugin for Eclipse; this makes development much easier.
  3. Create a new Eclipse project.
  4. Add aii_map_api.jar to the project, to be able to implement the StreamTransformation interface.
  5. Create a new Groovy file in the source folder, with the name GroovyMapIdoc.groovy, so Eclipse knows that it is a Groovy file.
  6. Create the mapping in your file. I have attached my example code below.
  7. Compile the Groovy files using the context menu on the GroovyMapIdoc.groovy file.
  8. Zip the content of the bin-groovy folder in the project and upload it as an imported archive in the Integration Builder. Alternatively, use an Ant build to create the zip file.
  9. Upload the two files groovy-1.6.1.jar and asm-2.2.3.jar as imported archives. They can be found in <GROOVY_HOME>\lib.
  10. Activate and use the mapping.
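For the Ant alternative in step 8, a minimal build file could look like the following sketch (the project name, zip file name and the bin-groovy folder are assumptions based on my project layout):

```xml
<project name="groovy-mapping" default="zip">
  <!-- Zip the compiled Groovy classes for upload as an imported archive -->
  <target name="zip">
    <zip destfile="GroovyMapIdoc.zip" basedir="bin-groovy"/>
  </target>
</project>
```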

I would expect people trying this to have a good knowledge of XI or PI Java mappings, because that is a requirement for developing mappings.

One example I have always considered was my first challenging mapping experience: posting financial documents with more than 1000 lines to the FIDCCP02 IDoc. The FIDCCP02 only accepts 999 lines. The posting can be split into multiple IDocs with 998 lines each, plus a balancing line posted on each document. This way all documents will balance.
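The split-and-balance idea can be sketched in plain Java like this (a simplified, hypothetical illustration; the class and method names are my own, not part of the PI API):

```java
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;

// Split a list of amounts into documents of at most maxLines lines,
// reserving the last line of each document for a balancing entry
// so that every document sums to zero.
public class IdocSplitSketch {

    static List<List<BigDecimal>> split(List<BigDecimal> lines, int maxLines) {
        int perDoc = maxLines - 1; // one slot is reserved for the balance line
        List<List<BigDecimal>> docs = new ArrayList<>();
        for (int i = 0; i < lines.size(); i += perDoc) {
            List<BigDecimal> doc = new ArrayList<>(
                    lines.subList(i, Math.min(lines.size(), i + perDoc)));
            // the balancing line is the negated sum of the document's lines
            BigDecimal sum = doc.stream().reduce(BigDecimal.ZERO, BigDecimal::add);
            doc.add(sum.negate());
            docs.add(doc);
        }
        return docs;
    }
}
```

With a maximum of 3 lines per document, five amounts end up in three documents of up to two data lines plus one balancing line each, and every document sums to zero.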

The document is transformed from the left document to the right. For this example I have used a maximum size of 3 to make testing easier.

The code that I have used for the mapping is the following.

package com.figaf.mapping

import java.util.Map

import groovy.xml.MarkupBuilder
import com.sap.aii.mapping.api.StreamTransformation

class GroovyMapIdoc implements StreamTransformation {

    Map param

    void setParameter(Map param) {
        this.param = param
    }

    // Number of lines per idoc (the last line is the balancing line)
    def step = 3

    // Implementation of the execute method
    void execute(InputStream input, OutputStream out) {

        // Parse the input using the XMLSlurper
        def FICCP01 = new XmlSlurper().parse(input)
        // get the different lines using GPath
        def Lines = FICCP01.IDOC.LINE
        // create a writer for the output stream
        def writer = new OutputStreamWriter(out)
        def xml = new MarkupBuilder(writer)
        // each idoc holds step - 1 data lines plus one balancing line,
        // which gives the number of idocs to be created
        def numIdocs = (int) Math.ceil(Lines.size() / (step - 1))
        // create the root element and fill data into it
        // (segment names beyond NR, LINE and Amount follow my example document)
        xml.FICCP01 {
            // loop for each idoc
            for (i in 0..numIdocs - 1) {
                // find the limit for the current idoc
                def max = Math.min(Lines.size(), (i + 1) * (step - 1))
                IDOC {
                    // sum used to create the balance
                    def sum = 0.0
                    def lineno = 1
                    // create the number segment, using GPath
                    NR(FICCP01.IDOC.NR)
                    // for each line in the range, create a new LINE node
                    // in the output with the same content
                    for (j in (i * (step - 1))..<max) {
                        def oldline = Lines[j]
                        LINE {
                            POS(lineno++)
                            Amount(oldline.Amount.text())
                        }
                        // update the sum
                        sum += oldline.Amount.toBigDecimal()
                    }
                    // create a balancing line, which balances the document
                    LINE {
                        POS(lineno)
                        Amount((-sum).toString())
                    }
                }
            }
        }
        // write the xml to the output
        writer.flush()
    }
}

Behind the scenes the Groovy file is turned into Java classes. Because Java does not support closures natively, different subclasses are created. Try to have a look at them using a decompiler like JAD.


Groovy could be a way to improve how Java mappings are created. The XML generation is easier to handle than it would have been in plain Java, and it is more powerful than XSLT. It takes some effort to get used to the closure concept of Groovy and the different notation, but it seems to work really well.

I don’t think the performance of the mapping is a problem. There is an overhead to loading the Groovy libraries, and the code is probably not as optimized as if it was written directly in Java. I have not made any measurements of this, though.

The future of integration consulting

For the last couple of years, the job of integrating legacy systems with the SAP ERP system has stayed the same. Each system is unique, so integration has to start from scratch. There can be some limited reuse between integrations, but it does not have a large impact.

Many companies are going to use SaaS (at least that is what many vendors are betting on). SaaS strategies can be seen as a way to get best-of-breed applications. Some SaaS applications are probably going to replace old legacy applications or reduce the number of new legacy applications being created. Integration between SAP and SaaS applications is therefore going to play a large role in the integration work done with PI.

My experience with integrating a new system is that it takes an average of 10 days per system (the development can be completed within 2 days, but support during testing and bug tracking is also required). If the complexity gets larger or requires the use of new services in SAP, the number of days will skyrocket. Large parts of the integration work concern error handling for when unexpected data is received. The price for a 10-day integration is at least €10,000, with high uncertainty. At such starting prices, many business cases for adopting a SaaS application will fail because of the initial investment.

One way to lower this price is if the SaaS vendors also use the same Enterprise Services that SAP is exposing. Then integration would just be connecting the two interfaces and testing the business functionality. From a customer perspective this would be the ideal situation, and it would lower the cost of integration.

If there are no Enterprise Services to cover the SaaS integration, one of two things should happen: either an Enterprise Service is created or a PI integration is created. Both options can be produced by a consulting partner and then shared under some license. Open source could be an option if it was supported by the SaaS provider.

Creating reusable integration parts can be difficult, especially in small markets like Denmark. The Danish government was very keen on electronic invoices (OIOXML), using the UBL standard. That led to a race among consultancies to create templates for the integrations. I do not know of any company that got a large enough share of the integrations to justify a large upfront investment in a shrink-wrapped solution.

Reuse is possible in some domains. I have been involved in a project where reuse of BPM functions led to easy integration with the 2nd and 3rd applications. In this project the Enterprise Service was implemented as a ccBPM in PI. The later integrations needed some adjustments to support new functions and integration protocols, but the overall framework proved successful.

We asked the vendors of the third-party applications if they had any integration to SAP. They all had some integrations, but for different versions of SAP and different application components. Challenges like this are going to continue because of the flexibility of SAP. In some cases, different modules can handle the same functions, and I doubt one Enterprise Service should handle the functionality of two different modules.

With SaaS applications the market is larger, since it is global and allows for packaged solutions. The packaged solutions can be both PI mappings and BPMs, or Enterprise Services, and can be sold via EcoHub or from the SaaS vendor.

One business model for this could be to give a lower rate on the first implementation and then sell the integration to other customers. It will take a few iterations before a shrink-wrapped integration can be released.

Having an integration for SAP must be really interesting for SaaS providers, because it will make it easier for customers to start using their services. If a customer has to spend a large number of days on integrating the application, adoption is less likely to happen. Without an integration to the ERP system, some of the benefits of the application are removed.

For consultants it will also be interesting if you only have to produce something once and are then able to sell it multiple times. The issue is then how the global market can be supported, but remote consulting via VPN can probably solve most of the issues.

I’m looking forward to seeing if this is going to be the business model for future PI consultants.

PI documenter services have been upgraded to include user-defined functions

The first release of the open source version of the documenter library did not have support for the newest PHPExcel. It was therefore not deployed as a service on the Figaf homepage. I have now upgraded the service to support the newest release. The functionality is still free and works the same way.

I finally got time to work with the code and found the bug which did not allow for the newest PHPExcel library. The problem was that I had too much reuse of functions. When I created a difference document, I created two documentation documents; the mapping information was then extracted from the two documents and compared to create the difference document. When I created a new Excel serializer to write the difference documents, a problem occurred.

I also got time for a feature which has been requested: getting user-defined functions into the documentation. An even more needed function was to include the UDFs in the difference function, so it is possible to see which user-defined functions have changed.

For the documentation I have decided not to include comments. I believe comments should instead be included in the source code. Is this a plausible way to look at things, or should I reconsider the design?

I have not had access to a PI 7.1 system. If you are running 7.1, would you test my service and see if it works? If it is not working, would you send me the XIM file, and I will create a new version which allows for PI 7.1 user-defined mappings.

If you have any other requests for functionality, then comment on this blog or in the Google defect log.

Release of PI documenter tools as open source

I released the two services for PI to get documentation of mappings and to compare two mappings to see what has changed. The releases of those services can be found in the two

After the release, people have asked if it was possible to run the tools on the user’s own computer instead of the server version. I have been thinking about releasing the scripts as commercial products to accommodate this request. It is possible to create compiled PHP code which can be executed on local PCs. I decided against it, because it would require the product to be more mature than it is, and I did not believe the market was large enough; marketing would also be difficult. By releasing the product as open source it might be possible to get others to contribute and create a service which could help many more.

The product was originally designed to run on a server in a controlled environment, but for this version I had to refactor some of the code to support usage from a client perspective. A problem is that the script extracts a lot of files from the XIM files. These files need to be cleaned up, and I used the quick solution of using the Windows delete command to do this clean-up. There are some other places which have not been cleaned up yet; they will be handled in the next release.

One of the challenges with releasing software as open source is that it exposes one’s coding capabilities. I would say that there is a long way to go before I’m able to make a living from coding PHP. The code seems to work, but the refactoring is a little more time consuming.

You can find the code and installation guides at

If you want to help with improving the code, please join the group and help improve the product.

Free development time

My current PI consulting contract is about to finish. In the beginning it was scary that I would not have anything to work on and would not bring any money home. Since I knew that the job would not last forever, Figaf has saved money, so there should be a paycheck for some months now. After getting used to the idea, I’m starting to feel better about it. This is what I have been planning since I started as a contractor. So now I need to enjoy the time and see how much I can develop and what I can learn.

I have a ton of ideas that I need to try out.

Thoughts on versioning of PI components

This blog describes some of the thoughts that I have on using versions of software components for development and support tracks.


Versioning as a way of managing releases has been used for a long time to maintain programs. With the help of a revision control program it is possible to get an overview of which releases are built and which changes have been made. This gives a better understanding of what is promoted to production.

On my current PI project we have different phases. We have just gone live with the first part and are currently developing the second part. After each phase a release is tested and moved to production. While the second phase is being developed, the first phase must be supported, to ensure corrections can be moved into production before the next release.

PI makes it possible to use different versions of software components side by side. Like other versioning tools, the challenge is to use it correctly. I’ll describe how we have used it and what we have learned.

How to use versions

The main part of configuring different versions is fairly easy. It just requires the user to create a new software component with a new version identifier. The first release will have version 1.0, the second release version 2.0, and otherwise they will be placed in the same products. Remember to add dependencies to version 2.0 of the interface components, assuming the interface components are also upgraded. The version 2.0 components should also be installed on the same systems as the version 1.0 components; otherwise it can cause problems.

Create the namespaces which will be used in version 2.0. Then make a release transfer, where the objects from version 1.0 are copied to version 2.0. The release transfer can be found in the Tools menu, and it works much like the export function. Content from version 1.0 is only copied if a similarly named namespace exists in the version 2.0 component. You have now copied the content to version 2.0 and can make changes in each component separately.

With the copy, all objects are copied to version 2.0. Objects will continue to use the dependent objects from version 1.0. For instance, a message mapping will still use the messages from version 1.0, unless the message has been imported via a dependent object; in that case the mapping will use the message from version 2.0 of the dependent software component.

With this upgrade maneuver the objects will still be the same, with very little change to them. It is a problem in the beginning, but after getting used to it, it seems like a good idea. When changes are required, simply alter the scenario to use a new interface mapping and maybe new actions, then add the functionality, and it can be used.

We tried to copy objects which had imported messages from the imported components. We therefore had the message types from version 2.0 in our 2.0 mappings. I do not think that this was a smart move, since I prefer having to select when to upgrade a message type. Therefore this trick only works for abstract mappings, which have to be imported via the imported components.

Objects and scenarios

I have earlier written about using scenarios as a documentation and dialog tool for communicating the process with the business. With the help of scenarios, it is easier to maintain which version will be used.

When scenarios are copied to version 2.0, they will still point to the version 1.0 mappings and actions. When changes are made to a version 2.0 object, the scenario must be changed to reflect that the mapping has changed. It is thereby possible to do configuration on a system while maintaining everything from version 1.0, except the mapping which has to be changed.


The use of two versions can cause problems. When a support issue arrives, it must be corrected in version 1.0, but this change is not maintained in version 2.0. It is therefore necessary to somehow maintain both versions; if this is not done, the problem will appear again when version 2.0 is deployed. This can be difficult, because it requires the users to implement the changes in version 2.0, and it gets more complicated if the involved object has been altered for version 2.0. If objects have not been altered in version 2.0, a release transfer is possible again. If the object has been changed in version 2.0, the release transfer will show that conflicts exist.

To avoid having to develop things twice, and to make sure that we make the same changes to version 2.0, we made some changes to the process.

  • First, we decided that some processes will not be changed in phase 2. Those objects have been changed so we use version 2.0 in production. This removes the need to maintain those objects in both versions.
  • Secondly, we decided on some systems which will be sent into production and maintained in version 1.0. These systems share BPMs with other systems, which will be changed in version 2.0 to support new features.

Namespaces and versioning

The use of namespaces could make sense in some areas. The version number could then be part of the namespace, which would make it clear that an object contains content from a different version. If this approach is used, it will not be possible to use release transfer, because the namespaces differ. I think this probably makes the most sense when communicating with third parties, where the WSDLs need to be shared and the versions agreed on.



Since we need to support the currently running production system while developing, we have to have two lines of ERP systems. I believe that this is a common setup for ERP projects. For the PI development it is just a matter of configuring the correct scenario version for the correct ERP and third-party systems. It thereby seems possible to perform support alongside the development of phase 2.


The use of versions is a pain and requires developers to check what they are doing. I have avoided the use of versions for 4 years, but have finally agreed to use them. The main argument was that the project had deliverables in two stages. I believe that it is correct to use versions, but it still requires caution, since it is easy to break the setup.