
Friday, April 17, 2015

ServiceMix - first prod integration deployed

I have been pushing to replace WebMethods within our organization for a few years now, and this is finally the first step: getting ServiceMix into production.  Unfortunately, this initial integration doesn't replace any WebMethods integration, but it did get ServiceMix into production.

Some minor details on the integration.

Tech used:
  • Linux
  • Java 8 
  • ServiceMix 5.4.x
  • Camel
    • SQL Component
  • PeopleSoft psjoa.jar and the associated jar of generated component interface definitions
The biggest challenge was getting the PeopleSoft pieces usable in a real OSGi context.  It isn't perfect, but it is functional.


Here is a simple diagram of the OSGi-related dependencies.
Since PeopleSoft integrations using psjoa.jar require the jar that exactly matches the version of PeopleTools in use, I used the PeopleTools version as the OSGi version of the exported packages.  I used the Maven bndtools plugin to convert the jar into an OSGi bundle.  The only painful part was that I needed to utilize dynamic-import to pick up some internal references which were causing problems otherwise.  The interesting aspect is that what was picked up appears to be things like JMS items and similar.  I am guessing that psjoa.jar does some things with Class.forName() for some optional functionality it provides, and that still causes OSGi issues for reasons I haven't pinned down.
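The net effect on the manifest ends up roughly like this (a sketch only - the PeopleTools version and exported package here are illustrative, not copied from our actual bundle):

    Bundle-SymbolicName: psjoa
    Bundle-Version: 8.53.0
    Export-Package: psft.pt8.joa;version="8.53.0"
    DynamicImport-Package: *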
 
 The PeopleSoft component interface definitions were a much larger headache.  The code is generated and you have no control over the code/package naming - all CI files are produced into the package PeopleSoft.Generated.CompIntfc.

 The CI files are not necessarily tied to a particular PeopleTools version - a file can be used with later PeopleTools versions as long as nothing is structurally different between the CI definitions and the PeopleSoft server-side definitions you use.  The main issue is that if you have multiple PeopleSoft ERP systems (i.e. HR, Financials, etc.) then the set of CI definitions differs but the Java package must be the same.  This made it harder to support multiple PeopleSoft systems (with different PeopleTools versions) concurrently in one ServiceMix instance.  I am not trying to run multiple integrations like this at the moment, but I will need to.  I think I worked around the problem, but it required the use of Require-Bundle - so Finance CI definitions go into a jar like finance-ci-<version>.jar and HR definitions go into a jar like hr-ci-<version>.jar.  They both contain the same Java packages, so in dependent integrations I import the package and must also be sure to do a Require-Bundle on the jar for the system I am interested in.  I think I have it working but need to do some further validation.  Without doing this, I would likely have to invent some version scheme which distinguishes the various ERP systems of interest - that seemed awkward as well, so for now I am doing it this way.
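In manifest terms, an integration consuming the HR definitions ends up with headers along these lines (bundle names and versions are illustrative):

    Import-Package: PeopleSoft.Generated.CompIntfc
    Require-Bundle: hr-ci;bundle-version="8.53.0"

Require-Bundle is generally discouraged in OSGi circles in favor of Import-Package, but since both CI jars export the identical package, it is the simplest way I found to pin a consumer to the right system's definitions.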

Back to the overall integration.  Nothing fancy: database triggers write records into a table which acts as input to the ServiceMix integration.  ServiceMix (Camel SQL) has a route which regularly runs a SQL select against the table and pulls the data into the route.  The route takes the data and does some work with the PeopleSoft instance, and if no error/exception results, the route marks the originating data as processed.  If something goes wrong, the route marks the failed row as failed.
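Here is a minimal sketch of that pattern in the Camel Java DSL (our real route is defined in Blueprint XML, and the table/column names and the peopleSoftService bean below are made up for illustration):

    // Sketch only: production uses Blueprint XML. Table/column names and the
    // "peopleSoftService" bean are illustrative. Assumes the "sql" component
    // has been configured with a DataSource.
    import org.apache.camel.builder.RouteBuilder;

    public class PsQueueRoute extends RouteBuilder {
        @Override
        public void configure() {
            from("sql:select * from INT_QUEUE where STATUS = 'NEW'"
                + "?consumer.onConsume=update INT_QUEUE set STATUS = 'DONE' where ID = :#ID"
                + "&consumer.onConsumeFailed=update INT_QUEUE set STATUS = 'FAILED' where ID = :#ID")
                .routeId("ps-int-queue")
                // Each selected row arrives as a Map; an exception thrown here
                // leaves the row to be marked failed via onConsumeFailed.
                .to("bean:peopleSoftService?method=process");
        }
    }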

One thing that isn't working for me right now: I intended to have the route delete the source rows on success, but that is not working at the moment - not sure why.  I am wondering if it is a bug in Camel.  I did check the Camel unit tests, and there is a test matching basically what I am doing, but the test is done via Java code versus using Blueprint - it seems like it shouldn't matter, but I lack a better answer.  Or it could be differences in database/drivers (maybe far-fetched - I'm not doing anything wild here).  I did note in the documentation that there appear to be some limitations on the number of parameters in the queries specified to the SQL/DML.  I just don't have time to debug it for now.  I will probably write a quick batch job to run every so often to clean up the completed data and generate a report on failures, counts, timings, etc.

There are still a number of setup changes which will likely occur in the environment as we implement more complex integrations, but at least we have started the process - I am very thankful for that.

Thanks for checking this out; hope it was interesting and maybe even helpful in some way.

God bless!

Scott


Wednesday, January 7, 2015

Java web app integration with PeopleSoft - modularity evolution

A good chunk of my work over the last few years has involved producing Java web applications which work with PeopleSoft.  This post documents the basic module/jar breakdown that I tend to start with, and some of the changes made to support future needs.

The basic war/jars with sample descriptions break down like this:


The above works OK in general and has facilitated code sharing between multiple applications.  Some changes have occurred over time, with real benefits.  Originally, there were a few APIs based on static methods - the most prominent being SQL utilities.  After getting the basic idea working, I converted them to interfaces and implementations.  The resulting benefits included improved testability and support for multiple concurrent database types (mainly Oracle and HSQL).  This should have been done with non-static methods initially, but at least it was a straightforward conversion in the end.
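Roughly, the shape of that conversion looks like this (names are illustrative, not our actual API):

    // Illustrative only - not our actual API. Before: a static SqlUtil class;
    // after: an interface with per-database implementations that can also be
    // stubbed in tests.
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    public interface SqlRunner {
        String queryForString(String sql, Object... params) throws SQLException;
    }

    class OracleSqlRunner implements SqlRunner {
        private final DataSource dataSource;

        OracleSqlRunner(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        @Override
        public String queryForString(String sql, Object... params) throws SQLException {
            try (Connection con = dataSource.getConnection();
                 PreparedStatement ps = con.prepareStatement(sql)) {
                for (int i = 0; i < params.length; i++) {
                    ps.setObject(i + 1, params[i]);   // bind positional parameters
                }
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString(1) : null;
                }
            }
        }
    }

An HSQL implementation (or a stub for unit tests) implements the same interface, which is where the testability and multi-database benefits come from.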

There are still a few small static APIs which make sense with the existing application designs and implementations.  These cover string and date handling and such, which work fine as static method implementations.  There are larger problems to solve than worrying about small static utilities.

The above describes a pretty good and time-tested design.  There are drivers of change now, though, that are pushing me toward newer and more flexible designs.  The drivers include:
  • budget reductions
  • lack of adequate staff
  • excessively short development time frames
  • integrations increasing in quantity and complexity
  • demands for higher application/feature availability
With that said, the general solutions I am pursuing to meet the needs now involve more web services, OSGi and various other messaging/integration technologies.

Now that I am working with OSGi, I am considering it for more general usage at the application level.  My initial plans involved evolutionary changes to existing systems, but early prototyping found some issues with that.  I am finding that a number of previous decisions would need to be revisited for existing code bases.  That isn't necessarily bad - some of the changes are likely more in line with current general best practices.  In any case where a major impedance mismatch occurs with OSGi, it may be time to reevaluate the implementation or feature.

The issue with the largest scope so far seems to be bundling API interfaces with API implementations in the same jar file.  The OSGi way seems counter to this.  This affects static APIs, but it also affects other non-static APIs if you didn't plan ahead.  The "best" packaging appears to be separate jar files for API interfaces and implementations.

Currently, the most annoying OSGi issue is related to the way I handled caching.  I wanted to use a "standard" caching API, but JSR 107 was not final when I started my cache support.  I ended up using a snapshot of JSR 107, which worked wonderfully in several non-OSGi applications.
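For reference, here is a minimal sketch against the final javax.cache API (the snapshot we actually coded against differs in some details):

    // Sketch against the final JSR 107 API; the snapshot we used differs in
    // some details. Requires a caching provider jar on the classpath.
    import javax.cache.Cache;
    import javax.cache.CacheManager;
    import javax.cache.Caching;
    import javax.cache.configuration.MutableConfiguration;

    public class CacheSketch {
        public static void main(String[] args) {
            CacheManager manager = Caching.getCachingProvider().getCacheManager();
            MutableConfiguration<String, String> config =
                new MutableConfiguration<String, String>()
                    .setTypes(String.class, String.class);
            Cache<String, String> names = manager.createCache("emplid-names", config);
            names.put("0001234", "Jane Doe");   // key/value are placeholders
            System.out.println(names.get("0001234"));
        }
    }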

The use of the snapshot, along with some OSGi issues related to Infinispan dependencies, resulted in some "irreconcilable differences".  I have yet to find a way to get it working without repackaging something with dummy version info and/or reworking some code.  For now, I am putting this support on hold until I have time to do it right.  There is a newer Infinispan out now which may help (or not) - I will figure it out later.  This could be a good time to reevaluate the caching support; most of it was built because, in the past, the ERP system could barely function normally and external apps/integrations had it near collapse at high-utilization times.  Hardware has significantly improved, and this hasn't been the case with the ERP for some time.

Anyway, the expected plan is to start splitting out most existing functionality in OSGi-compatible ways: interfaces and implementations in separate jars, at a more fine-grained functionality level.  This should be pretty straightforward for large portions of the existing code bases; I suspect the 80/20 rule will apply.  I think the initial priority will be some of the utility code, which will be useful for some new web services and other integrations.

God bless and Happy coding!
Scott



Monday, November 17, 2014

Apache ServiceMix - initial thoughts

So you have a really expensive (purchase price) integration product with ridiculously high yearly maintenance fees, in a public-service type organization which isn't out to make money.

What can you do to bring costs down and provide better value to the public/consumers?

In this case, I am evaluating replacing the commercial product with Apache ServiceMix.

My first impression is that there is a lot of value bundled with ServiceMix and the wide assortment of technologies it works with.  The initial major downside appears to be a lack of equivalent tooling for handling the details of our various DB-trigger-based integrations.  The existing commercial tool is more technical-end-user / operations oriented, while ServiceMix is developer oriented.  For this organization, that is a somewhat painful tradeoff, but hopefully methods to reduce the pain will be found over time.

Other items to note: there is a general expectation that Maven is used - the ServiceMix examples involve its use.  I decided to include Eclipse in the mix since a number of people will be involved, and I am hoping a GUI may help in the transition.  I may regret the Eclipse/Maven integration; it has provided a few pain points in getting started.  If there is a budget for more formal training, it would likely be very worthwhile.

Primary aspects of ServiceMix functionality I want to evaluate:
  • OSGi and its impact on development and operations processes
  • CXF web service implementations
  • Camel routes to tie some of the functionality together
  • ActiveMQ for some internal messaging/high availability/reliability needs
  • Activiti for performing some more complex workflows
I'm not an expert (yet!) in any of these, but I have been slowly working myself through the details.  I've been spending a little time over the last month or so getting a better grasp of the technology and if/how it may fit our needs.  Below is just my brief overview - I am planning more in-depth posts as time permits.

OSGi is a lot of things, but it all starts with modularity.  In the case of ServiceMix, Karaf is the underlying core OSGi container.  I am focusing on Karaf 2.4.0 (in ServiceMix 5.3.0) for now; it is a recent release which acts as a bridge across the major differences between 2.3.x and 3.0.x.  It supports the OSGi R5 specification and has fairly recent dependencies.  I am not convinced that we really need some of the features in R5+ right now, but hopefully this version will ease any later upgrade we need to perform.  There are some supporting technologies which I am looking into as well.  The Apache Cave and Cellar projects are partly what drew me into taking a closer look.  Cave is an OSGi bundle repository; I am still figuring out how it fits in compared to a plain Maven repository or a repository manager such as Artifactory.  Cellar is for clustering Karaf, which may be a good method of gaining scalability and availability in a production environment.

I have been prototyping a few things but decided to take a chance on a few books to see if they could provide a little extra insight, or get me there more quickly and clearly.  The books in question are:
  • Enterprise OSGi in Action - Manning Publishing
  • Learning Karaf Cellar -  PACKT Publishing
  • Learning Apache Karaf - PACKT Publishing
  • Apache Karaf Cookbook - PACKT Publishing
Sort of funny but I am reading them in the above order.

I had picked up Enterprise OSGi in Action quite a while back and had skimmed it but had no time to really put it to use.  I think it is a decent book, and it was my main initial exposure to much of the terminology.  I do find that some things didn't become clear until I read a few other books/resources.

I found Learning Karaf Cellar to be a reasonable book.  It clarified and confirmed a few things I was wondering about.  It didn't answer all my questions but got me a little further than I was.  One area of clarification was the need for and use of the cellar-eventadmin feature - in my prior testing I had installed it, but the book seems to neglect it.  Most of what I was really looking for (at the moment) was in the last 20 pages or so.

Learning Apache Karaf covered a few items I did not know.  There is overlap with the Cellar book, but that isn't too terrible.  Since I have been playing with Karaf for a bit over a month now, some of the basic command info wasn't needed, but it would be useful for someone starting from nothing.

I'm just poking around the Cookbook for now.  It tries to point out differences between Karaf 2.x and 3.0, but may be more 3.0 focused.  Some of the items are interesting and I may try to evaluate or leverage them.  I will have to read more in depth before commenting further.

Between my research and prototyping, I am thinking that my desire to use the Distributed OSGi (DOSGi) enterprise functionality may be premature (with regard to remote services, mainly).  I think just clustering Karaf with Cellar and putting it behind a load balancer will likely provide most of what I am after for an initial deployment.  I also recognize that the default Cellar setup isn't fully capable of meeting my initial expectations - mainly by not persisting some settings/config to disk.  The Cellar book discusses that aspect of its use of Hazelcast a bit, which is nice, but I will likely have to find some Hazelcast-specific documentation to get the details I need.  I am still trying to work out if/how Declarative Services should fit into my plans (versus Spring or Blueprint).

On to CXF: I have a service and client in prod using it, but they are outside of any OSGi environment.  I have prototyped a couple of other services in OSGi with CXF and am very pleased with some aspects.  One big plus involves PeopleSoft: I must be able to utilize multiple versions of the PeopleSoft-provided API jars for accessing application servers.  OSGi allows me to do that, letting me create services targeting multiple PeopleSoft instances with different PeopleTools versions and deploy those services in the same process.

For Camel, I am still working out how best to utilize it.  I think that creating a number of fine-grained services in place of the few beast-like monolithic services is the better way to go.  With those in place, I can use Camel to tie them together to match existing data flows.  Combined with OSGi, there is a distinct dynamism we don't currently have, which hopefully speeds up turn-around on changes.

I am still considering ActiveMQ for our environment.  It is a tough call whether the cost of potential lost transactions is higher than the implementation and operations cost of using ActiveMQ.  I am really torn on this; it would probably benefit our users on likely rare occasions but would incur a cost to manage/maintain.  Still an item of active consideration.

There are a few places where real workflow would just make a lot of sense, and using appropriate tools there will likely be a big benefit versus just codifying it.  That is the reason I would like to look into Activiti.  It will not do much without changes to some of our applications, though, and those are likely big changes.  This is a longer-term target.

Adopting something like this will affect our development processes/environment, testing environments, production setups and deployment/maintenance methods.  I'm planning to document some of the processes and procedures I go through during my research and planning.

God Bless!
Scott

Monday, November 19, 2012

Java and PeopleSoft

Let me start by stating that I don't consider myself an expert in PeopleSoft.  I have been working on applications which integrate with PeopleSoft (mainly Campus Solutions / HR) for a little over 6 years, though.  My background is mostly Java/C++, not PeopleSoft.  In my time working with PeopleSoft, I have learned a number of things that have made it a little easier, in my opinion.

Some background first.  My first experience with PeopleSoft was when I was hired to convert a mixed-implementation-language application from PeopleSoft 8.0 to 8.9.  This provided many challenges, to say the least - some related directly to PS, but not all.  After the conversion was complete, it was already apparent that the application should be fully redesigned.  After I laid out the various choices and reasoning, management agreed that a Java/Linux redesign had the go-ahead.

The Java based redesign has been in production a little more than 3 years now.  Since then we have moved to PeopleSoft Campus Solutions/HR 9.0 with PeopleTools 8.51. 

The redone application infrastructure has been fairly stable during that time.  The basics are Linux, Java 1.6+, Oracle 9-11, an open-source application server and a hardware load balancer.

Data resides in a combination of PS-delivered tables and custom tables in the same database, created using AppDesigner.  This was done because of a requirement that some of the data be accessible to a group of functional users (using a combination of PS tools and other SQL query tools).

The data access method we chose was direct SQL reads from the database against delivered tables for "translate" values and lookup tables, along with our custom tables.  We use direct inserts/updates against our custom tables.  To update delivered PS tables, we use a combination of delivered ComponentInterfaces (where available and appropriate) and custom ComponentInterfaces that I created against the delivered components.

For anyone wondering, a ComponentInterface (CI) is sort of like a DB view; it limits the data that is visible and provides a place to perform some types of activities.  My use was simply to provide access to specific data items.  When choosing or creating a CI, I highly recommend that you ONLY include the bare minimum of data you need to interact with.  If you create a CI which includes all the underlying records, you can expect the CI to become invalid after applying PeopleSoft bundles (i.e. updates, upgrades) at some point.  If PS changes a field referenced by your CI (even one you don't use), the CI will at times stop functioning.  You will usually find odd errors logged (if you are lucky) indicating that the CI could not be accessed.  If you load the CI in AppDesigner, many times it will automatically fix some issues (and in some cases it won't tell you what it "fixed").  Other times, you may need to go through the records and fields in the CI looking for an indicator (like a field that no longer exists) and manually remove/fix it.
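For flavor, basic CI access from Java looks roughly like this (from memory of the template AppDesigner generates; the CI name, key setter and generated IMyCi interface are made up):

    // Rough sketch from memory of the AppDesigner-generated template. The CI
    // name, key setter and IMyCi interface are made up for illustration.
    import PeopleSoft.Generated.CompIntfc.*;
    import psft.pt8.joa.API;
    import psft.pt8.joa.ISession;

    public class CiSketch {
        public static void main(String[] args) throws Exception {
            ISession session = API.createSession();
            // 1 = connect through the application server; host/port/credentials
            // are placeholders
            if (!session.connect(1, "//appserver:9000", "PSUSER", "PSPWD", null)) {
                throw new IllegalStateException("connect failed - check PSMessages");
            }
            IMyCi ci = (IMyCi) session.getCompIntfc("MY_CI");
            ci.setEmplid("0001234");          // set search key(s)
            if (ci.get()) {                   // load the component data
                // read or update properties here, then ci.save()
            }
            session.disconnect();
        }
    }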

Here is a list of other things to note/remember:
* You use AppDesigner to generate the Java classes that represent the CIs and records.
* The generated code requires the PS psjoa.jar which comes with your PS installation.
* For anyone creating new CIs, don't forget to set the CI security in PS.
* When accessing a CI to retrieve data, remember that the underlying system tends to load data unrelated to what you are accessing as well.  You should reference the manuals or take a PeopleTools I/II class to better understand the internal workings.
* When saving data via a CI, the save can fail due to PS validations unrelated to what you are trying to save (this is related to my previous note).  You will likely have no indication of the problem in this case (from a Java perspective).  The only way to find the issue quickly is to perform the matching operation in the PS application itself (if possible).  The reason behind the problem is that PS changes validation logic over time, and in some cases PS doesn't convert things appropriately.  The longer you have PS and the more upgrades you go through, the more likely you are to run into odd problems with data.
* For us, generating the entire set of Java CIs and supporting objects results in over 7000 Java classes and interfaces.
* AppDesigner sometimes generates code which isn't valid and you will need to tweak it (sometimes methods get duplicated for some reason).  I'm not sure if this is truly a PS problem or if the PeopleTools data has a bit of corruption somewhere.
* The fact that each and every type of PS collection results in a new collection class is inconvenient at best.  The best way I found to create reusable code that works with the PS collections is a utility class (or classes) using Java reflection (and generics to ease things) - see the sketch after this list.  I may update this later with more detail.
* Don't forget that PS doesn't use NULL for character fields - it uses a single space.  An "isEmpty" type utility method which checks for NULL or a single space is handy (also in the sketch below).
* There are some places where PS has odd choices for key structure.  They seem like remnants of decisions from long ago.  An example is "service indicators" - (from memory) they have a date/time type field which doesn't have enough resolution to prevent duplicate keys, so I ended up forcing a delay between multiple service indicator assignments so duplicate keys (which cause a save failure) were not generated.
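Here is a rough sketch of both helpers mentioned above (it assumes the generated collection classes follow the usual getCount()/item(index) pattern; adjust the method names if your generated code differs):

    // Rough sketch; assumes generated CI collections expose getCount() and
    // item(long index), which matches what AppDesigner generated for us.
    import java.lang.reflect.Method;
    import java.util.ArrayList;
    import java.util.List;

    public final class PsUtil {

        private PsUtil() {}

        // Walk any generated CI collection via reflection and return its items
        // as a typed List, so one helper serves every collection class.
        @SuppressWarnings("unchecked")
        public static <T> List<T> items(Object ciCollection) throws Exception {
            Method getCount = ciCollection.getClass().getMethod("getCount");
            Method item = ciCollection.getClass().getMethod("item", long.class);
            long count = ((Number) getCount.invoke(ciCollection)).longValue();
            List<T> result = new ArrayList<T>();
            for (long i = 0; i < count; i++) {
                result.add((T) item.invoke(ciCollection, i));
            }
            return result;
        }

        // PS stores a single space instead of NULL in character fields.
        public static boolean isEmpty(String value) {
            return value == null || value.trim().isEmpty();
        }
    }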

I'm sure there is much more I am forgetting.  I hope to update this over time to retain some of the odder items.