Why I created an SAP PI/PO course

I have seen many developers who were trying to start using the tool, but many did not know where to start. Others couldn’t afford PI training that could guide them through SAP PI/PO. In other cases, the developer simply missed the course – if your timing is bad, you might have to wait for months to enroll in another one.

If you are a new employee, it is quite difficult to wait for the start of a new course. Until then, your role at the company is on par with that of a paperweight. If you are a new developer just waiting around for a new course, you are unable to use your skills to their fullest extent, and you are basically unable to complete the tasks you were hired to do.

This course offers new developers a good foundation. They will be able to understand which components are in the PI landscape, they will progress in their ability to develop scenarios, and they will be able to understand the projects created by others at their company, so they can leverage that accumulated knowledge. Furthermore, understanding the work done by others will also lead them to new enhancement ideas.

As I’ve been working as an SAP consultant for approximately 11 years now, I have seen many scenarios. A lot of them were created after I taught people how to use the tool. Whenever I left a project, I had to be sure that there was someone in the organization who could manage the scenarios and handle whatever came up.

My consulting experience has provided me with a lot of insight and inspiration for this course. I created the course in order to help people learn and improve their skills quickly. That is my main goal.

If you want to join my SAP PI training, you can do so at the SAP PI/PO training site. There you will also find free tutorials that guide you through getting started and creating an end-to-end scenario.

SAP Process Orchestration is ready

I have been working on creating a course for PI developers so they can learn how to use Process Orchestration/BPMN. I was missing a good tutorial for getting started with BPMN that I could use to help my customers move to the single stack.

So I decided to create a course on the topic of BPMN and PI. One of the things I learned most from was interviewing some of the people who have been using BPMN for some time. In this blog I’ll share some of the information I gathered from those interviews.

  • BPMN is a beautiful tool that we as PI developers must learn how to use. Yes, the word “beautiful” was used about an SAP product. Really nice. The reason is that it enables developers to draw processes much more clearly, so they are easier to understand. There is also Business Rules Management (BRM), which makes some actions easier.
  • BPMN is easy to get started with. It is not so difficult to use if you have a background in ccBPM: the basic building blocks are much the same, and BPMN can do a bit more. Most experts agreed that it is a good idea to start small, with a simple process, and then enhance it to make sure it covers the business requirements. If you start by designing the full process, you will have a hard time validating it.
  • Performance is much better. There is no longer a requirement to avoid BPMN at all costs. With ccBPM the goal was to avoid using it because of its poor performance. The people I interviewed did not share this concern; they found BPMN a much better performing tool and PO a good, solid platform.
  • BPMN can be eliminated from many patterns during migration. In a lot of instances we can avoid using BPMN when migrating. Much ccBPM dates from old XI releases, where we often had to create collect patterns and async/sync bridges. This means you will not end up with the same number of BPMN processes as ccBPMs after a migration. In some scenarios you may also end up creating new processes, to support the business process better.
  • Data structures/message types are validated much more strictly. In ccBPM you could put any message into the process. BPMN requires the exact data structure, so you have to define the data as it is. This causes some issues if you want to bring IDoc data into the process. One workaround is to use CDATA sections for the data you don’t want to define.
  • Versioning can cause some challenges. The best approach is to use NWDI to handle the projects; NWDI makes change management and version control much better. The challenge is that not all clients have NWDI, so the alternative is to export the software components.
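The CDATA workaround mentioned above can be sketched in a few lines of Java. This is a minimal, illustrative example (class and method names are my own, not an SAP API): the IDoc XML you don’t want to model as a BPMN data structure is wrapped in a CDATA section, so the process only needs a single string field.

```java
// Minimal sketch of the CDATA workaround: wrap IDoc XML that you do not
// want to type out as a BPMN data structure inside a CDATA section.
// Names here are illustrative, not part of any SAP API.
public class CdataWrapper {

    // Wrap an arbitrary XML payload in a CDATA section. The "]]>" sequence
    // is not allowed inside CDATA, so it is split across two sections.
    public static String wrap(String payload) {
        String safe = payload.replace("]]>", "]]]]><![CDATA[>");
        return "<![CDATA[" + safe + "]]>";
    }

    public static void main(String[] args) {
        String idoc = "<IDOC><NR>42</NR></IDOC>";
        System.out.println("<Payload>" + wrap(idoc) + "</Payload>");
    }
}
```

The receiving step can then pass the string through untouched, or unwrap it in a mapping when the real structure is needed.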

You can get access to all the information from the interviews at http://picourse.com/po

*) I don’t know if any of these issues have changed in the newer service packs; these are the results of my interviews.

#SAPPHIRE YouTube challenge: Meet the right people for you

I’m leaving home in a few hours for SAPPHIRE, and I’m really looking forward to meeting all the really cool people there. But you mostly meet people at random; some will help you reach your objectives, but you have to be lucky.

So I thought it could be interesting to see who was at the event that could make a difference. The ideal people for me to meet would be people working with SAP Process Integration (PI). It is difficult to find them, and you have to meet a lot of people first – which is nice, though.

I decided to make a video saying who I wanted to meet. It would probably be nicer if other people did the same. So if you are up for the challenge: create a YouTube video about who you want to meet. The format can be whatever you can manage; if it is made with your mobile phone, that is also great.

Just post a video reply to my video and tag it with #sapphire and #sapphiremeet.

Here is my video.

SAP PI XML mappings using Groovy

Creating XML mappings in Java has always been difficult for me; it is possible, but I would prefer other tools. I was looking at scripting languages like Ruby/JRuby and Groovy for creating some web apps; those languages are quite hot right now. On the SCN Wiki a group has implemented Grails (Groovy on Rails) on the NetWeaver system, as Composition on Grails. With this tool it is possible to create applications with a Web Dynpro look and feel. Grails is a framework for creating web apps with less coding.

Groovy is a scripting language built on top of Java. Groovy scripts are compiled into Java classes, and Java and Groovy code can be mixed. This makes implementation easier: just start writing Java, and when you want to use some of Groovy’s smarter features, you can.

While I was looking at Grails, I thought it would be possible to use Groovy in PI. One place could be in Java mappings. I’ll describe the steps I took to implement this.

  1. Download and install the Groovy library.
  2. Get the Groovy plugin for Eclipse; it makes development much easier.
  3. Create a new Eclipse project.
  4. Add aii_map_api.jar to the project, to be able to implement the StreamTransformation interface.
  5. Create a new Groovy file in the source folder with the name GroovyMapIdoc.groovy, so Eclipse knows it is a Groovy file.
  6. Create the mapping for your file. I have attached my example code below.
  7. Compile the Groovy files using the context menu on the GroovyMapIdoc.groovy file.
  8. Zip the content of the bin-groovy folder in the project and upload it as an imported archive in the Integration Builder. Alternatively, use an Ant build to create the zip files.
  9. Upload the two files groovy-1.6.1.jar and asm-2.2.3.jar as imported archives. They can be found in <GROOVY_HOME>\lib.
  10. Activate and use the mapping.

I would expect people trying this to have good knowledge of XI or PI Java mappings, since that is a prerequisite for developing mappings.

One example I have always considered is my first challenging mapping experience: posting financial documents with more than 1,000 lines to the FIDCCP02 IDoc. FIDCCP02 only accepts 999 lines. The posting can be split into multiple IDocs of up to 998 lines each, with a balancing line posted on each document. This way all documents will balance.
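The splitting arithmetic on its own can be sketched in plain Java, separate from the XML handling. This is an illustrative helper (class and method names are my own): amounts are split into chunks of at most `step` lines, and each chunk gets an extra balancing line so that every document sums to zero.

```java
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;

// Sketch of the splitting logic only (no XML): split a list of amounts
// into chunks of at most `step` lines, appending a balancing line to
// each chunk so every resulting document sums to zero.
public class IdocSplitter {

    public static List<List<BigDecimal>> split(List<BigDecimal> amounts, int step) {
        List<List<BigDecimal>> idocs = new ArrayList<>();
        // ceiling division: number of idocs needed
        int numIdocs = (amounts.size() + step - 1) / step;
        for (int i = 0; i < numIdocs; i++) {
            int from = i * step;
            int to = Math.min(amounts.size(), from + step);
            List<BigDecimal> lines = new ArrayList<>(amounts.subList(from, to));
            BigDecimal sum = BigDecimal.ZERO;
            for (BigDecimal a : lines) {
                sum = sum.add(a);
            }
            // the balancing line offsets the chunk total
            lines.add(sum.negate());
            idocs.add(lines);
        }
        return idocs;
    }
}
```

With `step = 3` and amounts 10, 20, 30, 5 this produces two documents: [10, 20, 30, -60] and [5, -5], each balancing to zero – the same pattern the Groovy mapping below builds as IDoc segments.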

The document is transformed from the left structure to the right. For this example I have used a maximum size of 3 to make testing easier.

The code that I have used for the mapping is.

package com.figaf.mapping

import com.sap.aii.mapping.api.StreamTransformation;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.Map;

import groovy.xml.MarkupBuilder

class GroovyMapIdoc implements StreamTransformation{

   Map param;
   
    void setParameter(Map param) {
        this.param = param;
    }
    // Number of lines per idoc
    def step=3
   
    /**
     * Implementation of the execution method
     */
    void execute(InputStream input, OutputStream out) {

        // Parse the input using the XMLSlurper
        def FICCP01 = new XmlSlurper().parse(input)
        // get the different lines using the GPath
        def Lines = FICCP01.IDOC.LINE
        // create a writer example
        def writer = new OutputStreamWriter(out)
       
        def xml = new MarkupBuilder(writer)
        // create the root element and fill data into it.
        xml.FICCP01(){
            // get the number of idocs to be created (intdiv avoids
            // Groovy's decimal division on integers)
            def numIdocs = (Lines.size() + step - 1).intdiv(step)
            // loop for each idoc
            for (i in 0..numIdocs - 1) {
                // index of the last line in the current idoc
                def max = Math.min(Lines.size() - 1, i * step + step - 1)
                // running sum, used to create the balancing line
                def sum = 0.0
                def lineno = 1
                IDOC() {
                    // create the number segment, using GPath
                    NR(FICCP01.IDOC.NR)
                    // for each line in the range do the following
                    Lines[(i * step)..max].each { oldline ->
                        // create a new LINE node in the output element
                        LINE() {
                            NO(lineno++)
                            Text(oldline.Text.text())
                            Amount(oldline.Amount.toBigDecimal())
                        }
                        // update the running sum
                        sum += oldline.Amount.toBigDecimal()
                    }
                    // create a balancing line that offsets the sum
                    LINE() {
                        NO(lineno)
                        Text('Balance')
                        Amount(-sum)
                    }
                }
            }
        }

        // write the xml to output
         writer.flush()
         writer.close()
       
    }
   
}

Behind the scenes the Groovy file is turned into Java classes. Because Java does not support closures natively, different subclasses are created. Try having a look at them using a decompiler like JAD.

Conclusion

Groovy could be a way to improve how Java mappings are created. The XML generation is easier to handle than it would have been in plain Java, and it is more powerful than XSLT. It takes some effort to get used to Groovy’s closures and the other notation, but it seems to work really well.

I don’t think the performance of the mapping is a problem. There is an overhead to load the Groovy libraries, and the code is probably not as optimized as if it were written directly in Java, but I have not made any measurements.

Use proxy to inspect http/soap requests

I have developed some PI web service scenarios that I needed to test. When I test web services I use .NET WebService Studio, which I’m unable to find online anymore. It is a small free application that can use a WSDL to create an interface for testing the service.

When I tested this interface, I got the following error: System.Net.WebException: The underlying connection was closed: The connection was closed unexpectedly. It took a while to trace the error, because the processing looked fine on the PI system.

To trace the problem I decided to place a proxy between the .NET application and the PI SOAP adapter. I had earlier used WebScarab as a proxy to see how the HTTP request looked and to change some parameters. The new version of WebScarab is a Java Web Start application, so it is much easier to get started with.

When the application is started, the user is asked to select a database to store the requests in. I normally use a blank password, since the requests are just private.

The application looks like the following.

It is possible to change the port the proxy listens on in the menu Plugin -> Proxy -> Proxy Listeners. The default is 8008.

In the .NET application the proxy has to be changed to http://localhost:8008, and it should then be possible to see the incoming requests in the log.
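The original test client is a .NET tool, but the same idea can be sketched in Java: route the request through the intercepting proxy on localhost:8008 so the proxy can log (and optionally modify) it. The class and the target URL below are illustrative only.

```java
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.net.URL;

// Illustrative sketch: send an HTTP request through the local
// intercepting proxy (WebScarab listening on localhost:8008).
public class ProxiedCall {

    // build the proxy pointing at the local proxy listener
    static Proxy buildProxy() {
        return new Proxy(Proxy.Type.HTTP,
                new InetSocketAddress("localhost", 8008));
    }

    // call the target URL through the proxy and return the HTTP status.
    // The target would be e.g. the PI SOAP adapter endpoint (placeholder).
    public static int callThroughProxy(String target) throws Exception {
        HttpURLConnection con =
                (HttpURLConnection) new URL(target).openConnection(buildProxy());
        con.setRequestMethod("GET");
        return con.getResponseCode();
    }
}
```

Any HTTP client that honors an explicit proxy setting can be pointed at the listener the same way; the proxy then shows the raw request and response on the wire.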

When I placed this proxy between the two applications, I got the response that I expected and the .NET application also received the correct data.

If you need to change some requests, that is also possible, via Plugin -> Intercept Requests and POST.

Then for each new request you get a popup where it is possible to change the HTTP request before it is sent to the server.

Modeling tools in PI 7.1

With PI 7.1 it is possible to create ARIS models within the Enterprise Service Builder. This will make it easier for the integration team to model processes and interfaces. It is possible to go from high-level business process models all the way down to an interface implementation and configuration of integration scenarios. The use of models adds an extra layer of documentation of the business processes in conjunction with the Enterprise Services (ES). The models can be used to get an overview of how a scenario is created.

This paper describes which models can be used in PI 7.1 and what they mean for a PI developer.

The Sales Order model is based on the ES for sales orders, but the model contains more information than can be found on the ES Workplace. What the models in the ESR add is how the services can be connected in a specific scenario.

At first look there are a lot of different models. It takes a little while to get used to them and figure out which models can be used for which purpose. There are 12 different model types, and all of them can be used to describe a business process in different ways and at different levels.

At the highest level different integration scenarios are grouped together. An example of this is the SAP Scenario Catalog, which is a grouping of different scenarios. This model makes it easier to understand how different scenarios belong together and to find the scenarios that have something in common.

An example of the scenario catalog is the following.

An Integration Scenario is a high level overview of what Deployment Units are used. In each Deployment Unit there is one or more Process Components which can contain a number of process steps. The connection between the Deployment Units can be linked to information about the integration scenarios.

The interaction between the Process Components can be described in a ProComp Interaction Model.

A ProComp Interaction model shows how different Process Components relate to each other, for instance the message flow between Process Components. An example of this is shown below.

The ProComp Interaction model can contain information about what Service Interfaces are used and the mappings they contain. This information can be used to configure an Integration Scenario by adding information about business systems and where the different process components are installed – and by selecting adapters. Then it works just the same way as an Integration Scenario in PI 7.0.

The ProComp model can also be used to describe how the flow is within a Process Component. This type of model seems to be more useful if the aim is to document how Enterprise Services are connected within a Process Component. An example of what the ProComp model could be used for: To describe what is going on in a BPM (Integration Process in PI) which can then later be created based on the model.

It is still possible to make use of integration scenarios from XI 3.0/PI 7.0. These scenarios do not describe the business in the same detail as some of the other model types. They do, however, provide information about which connections are used and how the messages are mapped. The integration scenarios are easier for a PI developer to understand, since they show directly which connections are used and because they were already used in earlier versions of XI/PI.

It takes a little while to get used to working with the models in PI 7.1 and to create models which can be used and understood by developers and Business Process eXperts (SAP has a BPX community on SDN).

The use of models does seem to create some extra overhead compared to a top-down approach, which starts with an Integration Scenario where the objects are created to fit into the process. To create such a scenario, one would normally make a drawing to describe what is going on and to support development of the scenario. This drawing is often a process diagram, for instance in Visio, PowerPoint or on paper. With the help of the built-in modeling tools it is now possible to store such models within the ES Builder, thus serving the purpose of documenting the process and the context to which the interface belongs.

I recommend investing time in establishing naming conventions for modeling and guidelines for when and how modeling should be used.

A question which has to be answered is whether models should be created for all integration scenarios, or only when Enterprise Services are involved. I probably need to use modeling in real projects and then evaluate whether it makes sense.

Publishing services in PI 7.1

With PI 7.1 it is possible to publish services to the Service Registry (SR) directly from the Enterprise Service (ES) Builder and the Integration Directory. The publishing functionality allows developers and Business Process Experts (BPXs) to publish service interfaces to the SR or a UDDI registry. The UDDI registry will then contain a global list of all the services which are implemented, or at some point will be implemented.

To show the different ways to define services, it is necessary to see how they can be published. Services can be provided in the following ways.

  • Brokered service implemented in your own system. A service provided by your company is exposed as a web service in your PI system (a brokered service), and the endpoint is made available via the SR.
  • Brokered service to be implemented in a partner system. A new interface must be developed. It will be implemented as a web service provided by a partner system, and you want to offer this service in your own SR. You define the interface in your ES Builder and create a WSDL, which the partner will use to develop and implement the service. When the service is deployed, the endpoint can be published in your SR.
  • Web service provided by a 3rd party. Someone has developed a web service. The WSDL and endpoint can be published in your SR, thus making the service available to users (developers) of your SR.

How can the ES Builder and SR be used to support the three different options? This is described in the sections below.

1 Brokered Service implemented in own system

In PI 7.0 the only way to expose brokered services was to generate a WSDL, entering a lot of URL and service information in the process. The WSDL file could then be saved as a file and mailed to the developers who wanted to use the service. If the file was to be exposed via a UDDI registry, the WSDL had to be placed on an HTTP server and then published.

This process has improved a lot with PI 7.1. Publishing of web services to the SR can now be performed with a few mouse clicks.

The Service Interface is defined the normal way as an outbound interface.

To configure the outbound service, a Sender Agreement and a SOAP communication channel have to be created. The Sender Agreement should then be configured to use the communication channel.

To publish the service select Publish in SR from the menu.

It is possible to change the URLs to fit with external naming conventions.

When I tried to publish the service I got an error, as if something was missing to complete the publication. The service was published in the SR, but without an endpoint. Either there is some configuration we are missing, or it is a bug that will hopefully be corrected in the next service pack.

The service is published in the SR with the following information.

2 Brokered service to be implemented in partner system

You and a partner agree on a new interface where you need to call a service implemented in the partner’s system. The interface is first designed in your PI system. A proxy can be implemented with the help of SPROXY (ABAP), or you can generate a Java proxy interface. This works on SAP systems, but not as seamlessly with non-SAP products. To share the interfaces in PI 7.0, the PI developer had to export the WSDL files and mail them to the partner.

This is a lot easier in PI 7.1. For an inbound interface there is a publish button on the WSDL tab, which allows direct publishing to the Service Registry.

And what is really nice is that the WSDL is also published in a way that allows developers to get access to it, directly or via the UDDI registry.

When the developers have completed the service, they can publish it in the SR with an endpoint. What seems to be missing is a way to configure PI communication channels to retrieve the endpoint information from the SR. This would be a nice feature, making it possible to change the endpoint without having to change the communication channel.

3 Web service provided by 3rd party

A WSDL of a 3rd-party web service can be published in your SR from the Publish page. Your developers can then browse the delivered WSDLs in the SR and implement calls to the services.

Publishing can happen quite quickly by entering the URL of the WSDL and then selecting Publish.

If one of the services has to be consumed by a PI scenario, a link from the SR to the ES Builder is missing. It is not possible to import a WSDL directly from the SR or via its URL. The WSDL and XSD must be saved as files and then imported using the mass import tool.
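Since the WSDL cannot be imported straight from the SR, a small helper can first save it to a local file for the mass import tool. This is an illustrative sketch (the class name and URL are my own placeholders), using only standard Java I/O:

```java
import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Illustrative helper: download a WSDL from a URL to a local file,
// so it can be fed to the ESR mass import tool afterwards.
public class WsdlDownloader {

    // copy the WSDL at wsdlUrl to target; returns the number of bytes written
    public static long download(String wsdlUrl, Path target) throws Exception {
        try (InputStream in = new URL(wsdlUrl).openStream()) {
            return Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
        }
    }

    public static void main(String[] args) throws Exception {
        // placeholder URL; in practice the WSDL link shown in the SR
        download("http://sr-host/wsdl/SalesOrder.wsdl",
                Path.of("SalesOrder.wsdl"));
    }
}
```

Referenced XSDs have to be fetched the same way, since the mass import works on local files.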

The process for importing multiple WSDLs into external definitions in ESR is as follows. First select where the external definitions should be stored.

Then select the files to import.

Then confirm the type.

After verifying the types and links, the schemas are imported as external definitions. After importing the WSDL, the links between the different components are still valid. I do not know if this also works if there are HTTP links to the WSDLs.

Conclusion

With the new version, PI 7.1, the publishing functionality has been improved a lot, making it easier for developers to share their work. The functionality makes it easy to publish services, and therefore it is more likely to be used. The only feature that seems to be missing is a way to import WSDLs directly from an HTTP host or from the SR.