Software Development, family, religious, hobby, fun and humorous items.
Tuesday, November 20, 2012
Hibernate versus Apache Commons DBUtils
When is it time to move from a simple framework to something like Hibernate, even if you only use a subset of its functionality?
In many of the applications I have to deal with, a full ORM tool is overkill. At the same time, using straight JDBC is repetitive and error prone. Some years ago, I started using Apache DBUtils, which was an improvement over JDBC but still lacked some things. I decided to create a wrapper around DBUtils which removed some of my remaining complaints. Over the last few years, this wrapper has slowly grown in scope and complexity to the point where it is better described as a framework. It has significantly reduced errors overall and has eliminated certain types of errors completely.
My problem now is deciding, when an application is borderline between needing an ORM or not, which direction to take. I have been going through the Hibernate documentation and realized that much of what DBUtils does can likely be done with Hibernate as well (thinking in terms of native SQL and POJO objects). This is what I was reviewing when I started thinking about it.
On further thought, it might be worthwhile to replace DBUtils with Hibernate in my current framework. This may allow an easier transition to the full Hibernate functionality if the need arises.
Additional features provided by my framework:
* Handles some additional simple data conversions
* Provides Java enum based references to target databases
* Provides SQL ResourceBundle & MessageFormat support, with queries referenced via Java enum
* Generates nested maps from a returned POJO list
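The enum-keyed query idea is easy to sketch. The class below is a hypothetical illustration of the pattern, not the framework's actual API: the enum name, the query keys, and the in-code ListResourceBundle (standing in for a properties file) are all my own inventions.

```java
import java.text.MessageFormat;
import java.util.ListResourceBundle;
import java.util.ResourceBundle;

class QueryCatalog {
    // Queries are referenced by enum constant; each constant's name is a bundle key.
    enum Query { FIND_USER_BY_ID, LIST_ACTIVE_USERS }

    // In-code bundle standing in for an external queries properties file.
    private static final ResourceBundle BUNDLE = new ListResourceBundle() {
        @Override
        protected Object[][] getContents() {
            return new Object[][] {
                { "FIND_USER_BY_ID", "SELECT * FROM {0}.users WHERE id = ?" },
                // Note: MessageFormat requires doubling literal single quotes.
                { "LIST_ACTIVE_USERS", "SELECT * FROM {0}.users WHERE active = ''Y''" },
            };
        }
    };

    // Resolve an enum-referenced query, filling MessageFormat placeholders
    // (e.g. a schema name) before the SQL is handed to DBUtils/JDBC.
    static String sql(Query query, Object... args) {
        return MessageFormat.format(BUNDLE.getString(query.name()), args);
    }
}
```

A call like `QueryCatalog.sql(QueryCatalog.Query.FIND_USER_BY_ID, "hr")` then yields the ready-to-prepare SQL string, and a typo in a query name becomes a compile-time error instead of a runtime one.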
What is missing:
* Caching of query results using a less labor intensive method
This is a good start - more updates to follow.
Monday, November 19, 2012
Java and PeopleSoft
Let me start by stating that I don't consider myself an expert in PeopleSoft. I have, though, been working on applications which integrate with PeopleSoft (mainly Campus Solutions / HR) for a little over 6 years. My background is mostly Java/C++ and not PeopleSoft. During my time working with PeopleSoft I have learned a number of things that, in my opinion, have made working with it a little easier.
Some background first. My first experience with PeopleSoft was when I was hired to convert a mixed implementation language application from PeopleSoft 8.0 to 8.9. This provided many challenges to say the least - some related directly to PS but not all. After getting the conversion completed, it was already apparent that the application should be fully redesigned. After laying out the various choices and reasoning, management gave the Java/Linux redesign the go-ahead.
The Java based redesign has been in production a little more than 3 years now. Since then we have moved to PeopleSoft Campus Solutions/HR 9.0 with PeopleTools 8.51.
The redone application infrastructure has been fairly stable during that time. The basics are Linux, Java 1.6+, Oracle 9-11, an open-source application server and a hardware load balancer.
Data resides in a combination of PS delivered tables and custom tables (created using AppDesigner) in the same database. This was done because of a requirement that some of the data needed to be accessible to a group of functional users (using a combination of PS tools and other SQL query tools).
The data access method we chose was direct SQL reads from the database against delivered tables for "translate" values and lookup tables along with our custom tables. We use direct inserts/updates against our custom tables. To update delivered PS tables, we used a combination of delivered ComponentInterfaces (if available and appropriate) and custom ComponentInterfaces that I created against the delivered components.
For anyone wondering, a ComponentInterface (CI) is sort of like a DB view; it limits the data that is visible and provides a place to perform some types of activities. My use was simply to provide access to specific data items. When choosing or creating a CI, I highly recommend that you ONLY include the bare minimum of data you need to interact with. If you create a CI which includes all the underlying records, you can expect the CI to become invalid after applying PeopleSoft bundles (i.e. updates, upgrades) at some point in time. If PS changes a field referenced by your CI (even one you don't use) - the CI will at times stop functioning. You will usually find odd errors logged (if you are lucky) indicating that the CI could not be accessed. If you load the CI in AppDesigner, many times it will automatically fix some issues (and in some cases it won't tell you what it "fixed"). Other times, you may need to go through the records and fields in the CI looking for an indicator (like a field that no longer exists) and manually remove or fix it.
Here is a list of other things to note/remember:
* You use AppDesigner to generate the Java classes that represent the CI's and records.
* The generated code requires the PS psjoa.jar which comes with your PS installation.
* For anyone creating new CI's, don't forget to set the CI security in PS.
* When accessing a CI to retrieve data, remember that the underlying system tends to load data unrelated to what you are accessing as well. You should reference the manuals or take a PeopleTools I/II class to better understand the internal workings.
* When saving data via a CI, the save can fail due to PS validations unrelated to what you are trying to save (this is related to my previous note). You will likely have no indication of the problem in this case (from a Java perspective). The only way to find the issue quickly is to perform the matching operation in the PS application itself (if possible). The problem stems from PS changing validation logic over time, and in some cases not converting existing data appropriately. The longer you have PS and the more upgrades you go through, the more likely you are to run into odd problems with data.
* For us, creating the entire set of Java CI's and supporting objects results in the generation of a combination of over 7000 Java classes and interfaces.
* AppDesigner sometimes generates code which isn't valid and you will need to tweak it (sometimes methods get duplicated for some reason). I'm not sure if this is truly a PS problem or if the PeopleTools metadata has a bit of corruption somewhere.
* The fact that each and every type of PS collection results in a new collection class is inconvenient at best. The best way I found to create reusable code that works with the PS collections is to create a utility class (or classes) which use Java reflection (and generics to ease things). I may update this later with more detail.
* Don't forget that PS doesn't use NULL for character fields - it uses a single space. An "isEmpty" type utility method which checks for NULL or a single space is handy.
* There are some places where PS has odd choices for key structure. It seems like remnants of decisions from long ago. One example is "service indicators" - (from memory) they have a date/time field which doesn't have enough resolution to prevent duplicate keys, so I ended up having to force a delay between multiple service indicator assignments so duplicate keys were not generated (which would cause a save failure).
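To make the last couple of points concrete, here is a rough sketch of the kind of utility class I mean. It is hypothetical: `getCount()`/`item(int)` reflect the pattern the generated collection classes follow in my experience, but `PsUtils` itself is my own stand-in, not generated PS code, and the exact signatures on your generated classes may differ.

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

final class PsUtils {

    // PS stores a single space, not NULL, in empty character fields,
    // so "empty" has to mean null, "", or whitespace-only.
    public static boolean isEmpty(String value) {
        return value == null || value.trim().isEmpty();
    }

    // Iterate any generated PS collection via reflection, assuming the
    // common getCount()/item(int) pattern, and return the rows as a List.
    // This avoids writing a near-identical loop per generated collection class.
    public static List<Object> toList(Object psCollection) {
        try {
            Method getCount = psCollection.getClass().getMethod("getCount");
            Method item = psCollection.getClass().getMethod("item", int.class);
            // Guard against non-public declaring classes.
            getCount.setAccessible(true);
            item.setAccessible(true);
            int count = ((Number) getCount.invoke(psCollection)).intValue();
            List<Object> rows = new ArrayList<>();
            for (int i = 0; i < count; i++) {
                rows.add(item.invoke(psCollection, i));
            }
            return rows;
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException(
                "Not a PS-style collection: " + psCollection.getClass(), e);
        }
    }
}
```

With something like this, code that walks rows can be written once against `List<Object>` (or a generic variant) instead of once per collection type.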
I'm sure there is much more I am forgetting. I hope to update this over time to retain some of the more oddball items.
Wednesday, November 7, 2012
Caching quagmire
There are a few drastically different views on caching, ranging from "it's a crutch for poor designs" to "it's a key design aspect supporting scalability in big data environments." I take a practical view: use what makes the most sense for the problem at hand.
My particular experience involves mostly 2 cases. In one, an application was created which did not scale well, and a lack of time to reengineer it left caching as the most expedient solution. The other case involves an ERP system which was stretched to its limits on fairly expensive hardware, leaving very little capacity for data manipulation by externally integrated applications. Limited money combined with substantial growth in the user base led to caching as a way to minimize the impact of non-ERP application overhead.
Memory tends to be cheaper than large quantities of CPU cores so my organization has a large investment in low/mid-range servers with memory sizes in the 8-32GB+ range.
Where this takes us at the moment is an ad hoc collection of solutions which includes Memcached, EhCache and JBoss Infinispan. I find supporting Memcached with one specific Java application of ours overly painful: server maintenance or unplanned downtime tends to result in data issues. I can't blame this fully on Memcached - some developers decided to use Memcached like a DB in some functionality, which was a mistake. EhCache worked OK, but I have always found the documentation lacking, and trying to figure out the feature/licensing split between it and Terracotta drove me to find something more straightforward. The "phone home" feature of EhCache is also a little disconcerting, even though there is a way to disable it. Anyway, I have written a small JSR 107 wrapper that allowed me to convert one application from EhCache to Infinispan without any real difficulty. I can't say it made any real performance difference, but that wasn't the problem which needed solving. The conversion from Memcached to Infinispan is being planned - that has some challenges due to some of the odd ways it is used. My preference is to keep the Infinispan cache in-process with the application on each cluster node (at least until I can take a broader look and determine whether a more grid-like environment serving data to our entire application environment makes sense). I'll post updates as the process evolves.
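The wrapper idea itself is simple enough to sketch. The interface below is a minimal stand-in of my own devising - a few methods in the spirit of JSR 107's javax.cache API, not the real API - with a ConcurrentHashMap-backed implementation in place of a real provider like EhCache or Infinispan:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Minimal cache abstraction in the spirit of JSR 107: application code
// depends only on this interface, so the provider can be swapped without
// touching call sites.
interface SimpleCache<K, V> {
    V get(K key);
    void put(K key, V value);
    boolean remove(K key);
}

// In-process stand-in provider; a real deployment would delegate these
// calls to EhCache or Infinispan behind the same interface.
class MapCache<K, V> implements SimpleCache<K, V> {
    private final ConcurrentMap<K, V> store = new ConcurrentHashMap<>();

    @Override public V get(K key) { return store.get(key); }
    @Override public void put(K key, V value) { store.put(key, value); }
    @Override public boolean remove(K key) { return store.remove(key) != null; }
}
```

The point is the seam, not the map: once the application only sees `SimpleCache`, converting from one provider to another is a matter of writing one adapter class rather than chasing provider-specific calls through the codebase.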
I recently took a quick look at some info on Hazelcast and it has some interesting aspects. It seemed a little easier to configure (fewer options, and some defaults which may make sense for us). I may take a closer look at it down the road.