Marketing vs. Real life – Five minute photobook

 

Sure, I can click through a webshop and purchase a photo book in just 5 minutes, but I also need to select pictures and maybe even add some text to them. That is not a 5-minute task, that's for sure. Depending on how much effort you want to put into it, it can take several hours; at the moment I have spent 5 hours on my photo calendar.

So why am I raising this topic? Well, I must come clean and admit that I’m kinda doing the same thing – and it bugs me.

At Figaf we have our SAP PI/PO testing tool, IRT, and I can set up a test in our own system in around 5 minutes, maybe even less. But this is not a real-life scenario; when you apply our test tool to your own landscape, it will take you a lot longer than 5 minutes.

It can be challenging not to go down the road of making ads that state you can do complex testing in just 5 minutes. I have tried to be more realistic and made a backpack that says "test your full SAP PI/PO system in days, not months".

I do believe this is a valid claim, and at Figaf we want to be on your side and make sure the testing is set up correctly. But I'm also aware that in some cases it will take longer; it could be some weeks and not days.

So how do you see this marketing vs. real life?

Is it okay to say, "set up a test in 5 minutes" because I can actually show you it can be done, or is that a no-go in your opinion?

Migration of SAP PI to PO/PRO

SAP migrations are never easy. This week I wrote a long post about how to do a good SAP PI to PO migration.
The purpose is to go from a dual stack to a single stack.

The reason you want to migrate is that it will make your SAP landscape much simpler. The Java-only instance is easier to maintain because there is just one server to run.

Performance is also better on the Java-only stack: everything can happen in the same transaction and be processed fast. In the dual stack, a message had to move between the ABAP and Java stacks up to 4 times, so there is room for improvement.

And then the big point: you will keep getting maintenance for the system. SAP has announced that no more development will happen for the dual-stack systems.

Read more about how to migrate from an SAP dual-stack XI/PI to a PO single-stack system here.

SAP Process Orchestration is ready

I have been working on creating a course for PI developers so they could learn how to use Process Orchestration/BPMN. I was missing a good tutorial to get started with BPMN so I could help my customers move to the single stack.

So I decided to create a course on the topic of BPMN and PI. One of the things I learned most from was interviewing some of the people who have been using BPMN for some time. In this blog I'll share some of the information that I got from the interviews.

  • BPMN is a beautiful tool that we as PI developers must understand how to use. Yes, the word "beautiful" about an SAP product. Really nice. The reason is that it enables developers to model processes much better, and the diagrams are easier to understand. There is also Business Rules Management (BRM), which makes some actions easier.
  • BPM is easy to get started with. It is not so difficult to use if you have a background in ccBPM; the basic building blocks are much the same, and then it can do a bit more. Most experts agreed that it was a good idea to start small with a simple process, then enhance it to make sure you cover the business. If you start by designing the full process, you will have a hard time validating it.
  • Performance is much better. So there is no longer a requirement to avoid BPMN at all costs. With ccBPM the goal was to avoid using it because of the negative performance impact it had. The people I interviewed did not share this concern; they thought BPMN was a much better performing tool and that PO was a good, solid platform.
  • BPMN can be eliminated in many patterns in the migration. In a lot of instances we can avoid using BPMN when migrating. A lot of ccBPMs come from old releases of XI, where we often had to create collect patterns and async/sync bridges. This means you will not end up with the same number of BPMN processes as ccBPMs after a migration. In some scenarios you may also end up creating new processes to better support the business process.
  • Data structures/message types are validated much more. In ccBPM you could put whatever message into the process. BPMN requires you to have the exact data structure, so you will have to define the data as it is. This causes some issues if you want to bring IDoc data into the process. One workaround is to use CDATA sections for the data you don't want to define.
  • Versioning can cause some challenges. The best approach is to use NWDI to handle the projects; NWDI makes change management and version control much better. The challenge is that not all clients have NWDI, so there is the option to export the software components.
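The CDATA workaround mentioned in the data-structure point can be sketched in a few lines. This is a minimal illustration, not PI or BPM API code; the class and method names are made up. The one real pitfall it shows is that a CDATA section cannot itself contain the sequence ]]>, so that sequence has to be split across two sections.

```java
public class CdataWrapper {

    // Wrap an arbitrary XML payload in a CDATA section so a process
    // only sees one opaque string field instead of a typed structure.
    // "]]>" would terminate the section early, so it is split into
    // "]]" (closing one section) and ">" (starting the next).
    static String wrap(String payload) {
        return "<![CDATA[" + payload.replace("]]>", "]]]]><![CDATA[>") + "]]>";
    }

    public static void main(String[] args) {
        // Prints the IDoc fragment wrapped in a CDATA section
        System.out.println(wrap("<IDOC><NR>1</NR></IDOC>"));
    }
}
```

When the message leaves the process again, a mapping can strip the CDATA markers and treat the content as XML.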

You can get access to all the information from the interviews at http://picourse.com/po

*) I don't know if any of these issues have changed with the newer service packs, but this is the result of my interviews.

Book review: The SAP Consultant Handbook

I have been reading "The SAP Consultant Handbook" by Jon Reed and Michael Doane on my flight home from San Diego. It is a nice, short book about what it is like to be part of the SAP ecosystem.

It is always nice to be reconfirmed in some of the ideas you have about the business you are in. The book is old, from 1999, and was updated in 2004, and a lot has happened in the SAP field since then.

The book goes into detail on all the different aspects of how you can approach the SAP market, either as a consultant or as an end user. The premise has not changed over the years, so it is the same type of decisions you will have to make today. In that perspective I think it is a killer book for everybody who wants to move into the SAP field.

There is a nice way of balancing what to do in some situations: should you take a higher-paying job working with older systems, or the job that makes you travel?

One issue I have been seeing is that people may not have a long-term vision for what they want to do in the business. They just have a job and like it or don't. The long-term vision is important when you are selecting a job and when you have to decide whether to move.

One interesting quote is:

The Catch-22 of SAP: there are not enough trained consultants because those who have training are consulting and none of them are teaching.

It makes it really hard for new people to enter the space, and it makes it harder for companies to find qualified people.

One of the reasons is that you either have a job as a trainer or as an implementer. This means you do not get the information from people who really do the implementations. It would be nice if some of the top implementers/specialists could deliver content about what they know.

 

 

#SAPPHIRE youtube challenge: Meet the right people for you

I'm leaving home in a few hours for SAPPHIRE, and really looking forward to meeting all the really cool people there. But you mostly meet people at random; some will help you reach your objectives, but you have to be lucky.

So I thought it could be interesting to see who was at the event that could make a difference. The ideal people for me to meet would be people working with SAP Process Integration (PI). It is difficult to find them, and you have to meet a lot of people, which is nice though.

I decided to make a video saying who I wanted to meet. It would probably be nicer if other people did the same. So if you are up for the challenge: create a YouTube video about who you want to meet. The format can be whatever you can make; if it is shot with your mobile phone, that is also great.

Just post a video reply to my video and tag it with #sapphire and #sapphiremeet.

Here is my video.


 

SAP Enterprise Service and Google Wave

I would expect that you know what Google Wave is; otherwise you would not be viewing this post. If not, please look at Starting on Google Wave to get more information on the future of communication.

For a while I have been working on how to create Google Wave applications and how they can interact with enterprise applications. It is an interesting area with a lot of integration work possible and productivity enhancements.

From what I have learned so far, the features of Wave can connect very well with enterprise applications and create an environment where users can work.

I created a demo of how Google Wave can be used to manage a simple workflow application. The demo features a loan application, with possible collaboration between the customer and the bank. The demo is made without any backend integration, but it is possible to add it.

A more detailed description of how this works can be found at the blog.

So it is possible to create simple "workflows". Hardcore ABAP workflow programmers will laugh at this type of workflow, since it is missing a lot of the functionality they are used to. With time and the implementation of a workflow framework, creating workflows will become easier. Wave workflows will have the advantage of being a much better tool for collaboration and for holding semi-structured data.

With a lot of Enterprise Services on the SAP systems, it is fairly easy to find the services you need to implement ERP functionality in third-party applications. You browse the ESR to find the web service you want to use, create a proxy for the service, and call it from your application, in this instance Google Wave. Then, when the bank employee has approved the loan, the SAP system is informed about the change.

I did have some problems calling the ES from Google App Engine, where it is currently only possible to run robots. On App Engine there are limitations on which Java classes you can call, and I did not manage to get Axis working with the ES. Therefore I have just used plain HTTP calls to the ES, where I create the SOAP envelope manually; parsing is also done with plain XML parsers. From an architectural point of view I'm not really proud of the solution, but for a demo it is OK.
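A minimal sketch of that plain-HTTP approach, assuming a made-up service namespace, operation, and field (the real Enterprise Service names would come from the ESR), looks like this:

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class PlainSoapCall {

    // Build the SOAP 1.1 envelope by hand instead of using Axis.
    // The demo namespace, operation and field are illustrative only.
    static String buildEnvelope(String loanId) {
        return "<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\""
             + " xmlns:demo=\"http://figaf.com/demo\">"
             + "<soapenv:Body>"
             + "<demo:ApproveLoanRequest>"
             + "<demo:LoanId>" + loanId + "</demo:LoanId>"
             + "</demo:ApproveLoanRequest>"
             + "</soapenv:Body>"
             + "</soapenv:Envelope>";
    }

    // POST the envelope to the service endpoint with plain HTTP
    static int post(String endpoint, String envelope) throws Exception {
        HttpURLConnection con = (HttpURLConnection) new URL(endpoint).openConnection();
        con.setDoOutput(true);
        con.setRequestMethod("POST");
        con.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
        con.setRequestProperty("SOAPAction", "");
        try (OutputStream os = con.getOutputStream()) {
            os.write(envelope.getBytes(StandardCharsets.UTF_8));
        }
        // The response body would then be read and parsed with an XML parser
        return con.getResponseCode();
    }

    public static void main(String[] args) {
        System.out.println(buildEnvelope("4711"));
    }
}
```

The same pattern works on App Engine because HttpURLConnection is on the whitelist of allowed classes, which a full SOAP stack like Axis is not.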

An example of how it is possible to use Google Wave with SAP is shown in the following video. In this video the user enters a command, and the robot responds with a list of orders for that customer.

The integration still has some shortcomings, which need to be addressed before Wave can be used in organizations.

The list of orders is displayed as plain text, so it is not possible to interact with it. It would be possible to add buttons or checkboxes to perform some simple processing of the orders.

There is a need for clearer communication with the robot. Using a command like the one in this example is not usable in practice. Controlling the robot could instead be done with a form, as in the workflow demo.

When do you get rid of old text data? Using SAP CE and BPM it is possible to design dashboards which show just the data a user needs to see. This is not possible in Wave. It is possible to delete blips (the boxes with text), but it might be difficult to know when the data is no longer useful. The user can always scroll back with the replay function.

I will be sharing some more of my examples with you as I get further in my research with Google Wave and SAP. You can follow the progress at http://masteringwave.com.

SAP and Tibco?

Apparently it is time for a lot of mergers and acquisitions, with VMware/SpringSource and Facebook/FriendFeed. Brian Sommer has found the Reuters story that SAP is considering buying Tibco. Apparently they have tried to get hitched before, but without a final conclusion.

If such a fusion happened, it could resemble the merger between Oracle and BEA in January 2008. That was also two companies with the same product suite for the middle layer. Oracle then had to select which products future development should happen on; for the integration part the BEA product was selected, so all consultants had to learn the new product.

SAP did a similar integration with Business Objects, which from the outside has resulted in a lot of new or rebranded products. For someone who has not worked with BI, the integration of the two product lines happened with limited problems. From what I have heard, the frontend from Business Objects and the backend from SAP BI were developed into one product, so a new best-of-breed application was created.

But Tibco has more than just one area of products; they have products complementary to the whole NetWeaver suite. For the integration/SOA part, I have heard that Tibco has better products in some areas, and their portal probably does not match how SAP has built its portal. Come to think of it, the J2EE stack was also purchased from a company whose name I have forgotten, but that was at a time when SAP did not have a Java stack.

Dennis Howlett says that SAP has to watch Software AG, and that Tibco's Silver cloud platform could benefit SAP.

If an acquisition happens, it would be interesting to see what it means for customers, SAP, and partners.

Do you have any Tibco products which you would like to keep instead of the equivalent SAP products?

 

 

Using SFTP for windows in PI

Last December I wrote about how to use SFTP over SSH from a Unix server without using a Seeburger adapter. SAP PI/XI cannot access SSH SFTP sites directly, but this script can help. There have been many requests for a version which also works on Windows. I do not have access to a Windows server with PI or XI, so it is a little difficult for me to test.

I have now written the following script, which works on my Vista laptop. I have not tested it on Windows Server 2003 or 2008, where most PI systems run.

I have used PuTTY for creating the SSH connection, specifically pscp (an SCP client, i.e. command-line secure file copy), to try something different from SFTP. SCP makes it easy to fetch the files that exist in a directory. pscp should be downloaded and saved in the same directory as the script.

The script looks like the following.

@echo off
REM PARAMETERS
REM %1 target file (PI passes %F)
REM %2 query for where the files are located, e.g. root@figaf.com:dir/*
REM %3 SSH password

REM Reset the target file so the adapter never reads stale data
type NUL > %1

REM Create a download directory next to the target file
SET TARGETDIR=%~d1%~p1download
echo %TARGETDIR%

IF EXIST %TARGETDIR% GOTO SKIPCREATEDIR
   mkdir %TARGETDIR%
:SKIPCREATEDIR
del /Q %TARGETDIR%\*

REM Copy the remote files into the download directory
pscp.exe -pw %3 %2 %TARGETDIR%

REM Concatenate all downloaded files into the file PI reads
type %TARGETDIR%\* > %1

The script takes the following parameters:

  1. %F, the name of the file the adapter is currently reading.
  2. The location of the files on the server side, in the form "user@server:path/[filter*]", e.g. root@sftp.figaf.com:dir/SKB*. This logs on with the user root on the host sftp.figaf.com, looks in the directory dir (relative to the login directory), and selects all files starting with SKB.
  3. The user's password.

The command in the communication channel should look something like this:

C:\scripts\sshftp.bat %F root@sftp.figaf.com:dir/SKB* &lt;password&gt;

I have only tested with a password, but pscp might also work with SSH keys.

Search on SAP SDN/SCN with Ubiquity

Last year I wrote about the SCN search plugin for Ubiquity, which made it easy to search on SCN. Ubiquity is a plugin for Firefox. The problem with writing such utilities is that they need to be updated when the unpublished APIs change.

The SCN search has changed to a quite fancy Web 2.0 search. It is now possible to filter the results based on type, subtype, and other parameters. The old search just had a few options, all accessible via a GET parameter. The new search uses, as far as I can see, Google Web Toolkit as its JavaScript framework. It looks quite difficult to interact with this framework, so the new search script just takes you to the search page.
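For comparison, a GET-based search like the old one is trivial to script: you only need to URL-encode the query into a single parameter. The URL and parameter name below are made up for illustration, not the actual SCN endpoint:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class ScnSearchUrl {

    // Build a GET search URL by encoding the query into one parameter.
    // The host and parameter name are hypothetical examples.
    static String buildUrl(String query) {
        return "https://sdn.sap.com/search?query="
             + URLEncoder.encode(query, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(buildUrl("PI mapping"));
    }
}
```

A GWT-based search keeps its state in JavaScript instead of the URL, which is why this simple trick no longer works.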

Michael Koegel informed me that a new version of Ubiquity has been released; version 0.5 can be installed here. This new version changed the way scripts are written, so the script had to be updated.

The new script can be installed from GitHub.

When the script is installed and the access keys (CTRL+SPACE by default) are pressed, the following popup is shown. First scn-search is typed, then the query phrase. When Enter is pressed, the search is performed.

SAP PI XML Mappings using groovy

Creating XML mappings in Java has always been difficult for me; it is possible, but I would prefer other tools. I was looking at scripting languages like Ruby/JRuby and Groovy for creating some web apps; those languages seem quite hot right now. On the SCN Wiki a group has implemented Grails (Groovy on Rails) on the NetWeaver system, as Composition on Grails. With this tool it is possible to create applications with a Web Dynpro look and feel. Grails is a framework for creating web apps with less coding.

Groovy is a scripting language built on top of Java. Groovy scripts are compiled into Java classes, and Java and Groovy can be mixed. This makes implementation easier: just start writing Java, and when you feel like using some of the smarter features of Groovy, you can use them.

While I was looking at Grails, I thought it would be possible to use Groovy in PI. One place could be in Java mappings. I'll describe the steps that I have taken to implement this.

  1. Download and install the Groovy library.
  2. Get the Groovy plugin for Eclipse; this makes developing much easier.
  3. Create a new Eclipse project.
  4. Add aii_map_api.jar to the project, to be able to implement the StreamTransformation interface.
  5. Create a new Groovy file in the source folder with the name GroovyMapIdoc.groovy; then Eclipse knows it is a Groovy file.
  6. Create the mapping for your file. I have attached my example code below.
  7. Compile the Groovy files using the context menu on the GroovyMapIdoc.groovy file.
  8. Zip the content of the bin-groovy folder in the project and upload it as an imported archive in the Integration Builder. Alternatively, use an Ant build to create the zip files.
  9. Upload the two files groovy-1.6.1.jar and asm-2.2.3.jar as imported archives. They can be found in <GROOVY_HOME>\lib.
  10. Activate and use the mapping.

I would expect people trying this to have a good knowledge of XI or PI Java mappings, because that is a prerequisite for developing mappings.

One example I have always considered was my first challenging mapping experience: posting a financial document with more than 1000 lines to the FIDCCP02 IDoc. FIDCCP02 only accepts 999 lines. The posting can be split into multiple IDocs with 998 lines each, plus a balancing line on each document. This way all documents will balance.

The document is transformed from the left structure to the right. For this example I have used a max size of 3 to make testing easier.

The code that I have used for the mapping is:

package com.figaf.mapping

import com.sap.aii.mapping.api.StreamTransformation

import groovy.xml.MarkupBuilder

class GroovyMapIdoc implements StreamTransformation {

    Map param

    void setParameter(Map param) {
        this.param = param
    }

    // Number of lines per idoc
    def step = 3

    /**
     * Implementation of the execute method
     */
    void execute(InputStream input, OutputStream out) {

        // Parse the input using the XmlSlurper
        def FICCP01 = new XmlSlurper().parse(input)
        // Get the lines using a GPath expression
        def Lines = FICCP01.IDOC.LINE
        // Create a writer for the output stream
        def writer = new OutputStreamWriter(out)

        def xml = new MarkupBuilder(writer)
        // Create the root element and fill data into it
        xml.FICCP01() {
            // Number of idocs to be created (ceiling of size/step;
            // intdiv is needed because / would give a decimal result)
            def numIdocs = Lines.size().intdiv(step) + (Lines.size() % step > 0 ? 1 : 0)
            // Loop for each idoc
            for (i in 0..numIdocs - 1) {
                // Index of the last line belonging to the current idoc
                def max = Math.min(Lines.size() - 1, i * step + step - 1)
                // Sum used to create the balancing line
                def sum = 0.0
                def lineno = 1
                IDOC() {
                    // Create the number segment, using GPath
                    NR(FICCP01.IDOC.NR)
                    // For each line in the range, do the following
                    Lines[i * step..max].each { oldline ->
                        // Create a new LINE node in the output
                        // with the following content
                        LINE() {
                            NO(lineno++)
                            Text(oldline.Text.text())
                            Amount(oldline.Amount.toBigDecimal())
                        }
                        // Update the sum
                        sum += oldline.Amount.toBigDecimal()
                    }
                    // Create a balancing line, which balances the document
                    LINE() {
                        NO(lineno)
                        Text('Balance')
                        Amount(-sum)
                    }
                }
            }
        }

        // Write the XML to the output
        writer.flush()
        writer.close()
    }

}

Behind the scenes the Groovy file is turned into Java classes. Because Java does not natively support closures, different subclasses are created. Try having a look at them using a decompiler like JAD.
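To get a feel for what the decompiler will show, here is a rough Java approximation of what a closure compiles to: a class with a single call method that captures variables from the enclosing scope. This is a simplified sketch, not the exact classes the Groovy compiler emits (those extend groovy.lang.Closure).

```java
import java.util.Arrays;
import java.util.List;

public class ClosureDemo {

    // The "closure type": one method, analogous to Closure's call()
    interface LineHandler {
        void call(int amount);
    }

    // Roughly equivalent to the Groovy: Lines.each { sum += it }
    static int sumLines(List<Integer> lines) {
        // A captured local must be effectively final, so mutable
        // state is smuggled through a one-element array
        final int[] sum = {0};
        LineHandler handler = new LineHandler() {
            @Override
            public void call(int amount) {
                sum[0] += amount;
            }
        };
        for (int line : lines) {
            handler.call(line);
        }
        return sum[0];
    }

    public static void main(String[] args) {
        System.out.println(sumLines(Arrays.asList(1, 2, 3)));
    }
}
```

The anonymous class shows up in the decompiler as a separately named inner class, which is why one Groovy file can produce several .class files.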

Conclusion

Groovy could be a way to improve how Java mappings are created. The XML generation is easier to handle than it would have been in plain Java, and it is more powerful than XSLT. It takes some effort to get used to the closures concept and the notation, but it seems to work really well.

I don't think performance is a problem for this mapping. There is an overhead to load the Groovy libraries, and the code is probably not as optimized as if it were written directly in Java, but I have not made any measurements.