Friday, December 28, 2012

Medical records data - HL7

I keep thinking about a career adjustment - basically converting from more general applications development to more specialized applications.  I had considered going back to college for a masters degree in bioinformatics and trying to start over from there.  I decided that was too much disruption to my family mainly for my own benefit - at least while my kids are young.  I have considered trying to learn HL7 (mainly v3) and looking for an appropriate job opportunity.  That urge is driven by the fact that my current employment takes more time than I care to admit, to the detriment of my family some days.

[Update] I decided that it is more practical to look into career paths more in line with my existing education and some other interests.  I still find it useful to understand the technology as you never know when it will come in handy in some way.

I have been looking into HL7 for a year or two.  This was challenging since there was little freely available information or tooling.  In the meantime, some of the basic information has become more freely available along with a few tools.  The only other major hurdle is that I would likely learn using freely available Java resources whereas it seems, based on job postings, that most jobs are doing .NET solutions (or mainframe).  Learning .NET (versus just C/C++ on Windows from years ago) is on my list to do but time consuming as well.  My current job will likely require an increasing amount of .NET work, so that may come in time - but this is a bit off topic.

I'm trying to keep a list of potentially useful resources (with a majority of focus on freely available resources) to learn from and utilize.  I have not reviewed any of these more than in passing at this time.
  • http://en.wikipedia.org/wiki/Health_Level_7
  • http://www.hl7.org/participate/toolsandresources.cfm
  • http://sourceforge.net/projects/hl7inspector/
  • http://www.openehealth.org/display/ipf2/Home
I would have liked to join in some free webinars related to HL7 but so far they have always occurred at times when I was not able to take off from work (nor could I really justify it during my normal work day since it is not really related to my current employment).

Finding realistic sample data to learn from may be challenging.  Anyways, even if this doesn't lead to a job change, I think it is still worthwhile to stretch my knowledge into other areas.  I find it quite amazing when some seemingly unrelated information helps me with a current difficult task.

 I'll hopefully add more interesting and useful content related to this over time.


Friday, December 21, 2012

IT and requests for Cloud Technology solutions

[2016/05/19] Update
 Ok, I'm actually finding some reasonable use cases for 3rd party cloud solutions. I still don't think it makes sense in instances where multiple large applications and lots of integrations are involved, but for "point" solutions with few integration needs it may make good sense.  The cases I am thinking about (outside of family :) ) are development tools/process related.  With staffing so slim, it is pretty risky having only one person who knows how all the development and operations infrastructure is built, installed, set up and maintained. Having someone else deal with most of that means that staff have more time to focus on actual development. As is, maintenance of development tools/processes is falling by the wayside more and more for lack of time - that increases the risk that something could go very wrong, resulting in loss of assets or substantial disruption to systems/processes.  I will say that a cloud solution isn't a silver bullet though - it has its own risks like provider outages, network disruptions, etc. which can have a large negative impact on development processes too.  Which risks are more likely and have the higher negative impact?  Hard to tell right now but worth thinking about.

Anyways, use the appropriate tool for the job..
-------------------------------------------------------

This post is a bit of a general complaint against some people in IT and management.  It seems that there is an increasing tendency for people to request "cloud" applications or to ask that existing applications be converted to "cloud" applications.  Why should this concern anyone?  I find it concerning because these requests are made for a specific solution instead of starting from a specific need or requirement where "the cloud" is the best solution.  The decisions behind any IT solutions should be made to solve specific problems or meet specific requirements.  Technology decisions based mainly on some research company's "quadrant" lead me to
  • wonder whether the solution is really a good fit for the organization
  • question the longer term viability of the solution
  • worry that the project is set up for failure from the start
  • worry that the total/ongoing cost will be wasteful of money and/or other resources
I am not saying that there is anything wrong with "cloud" technology, but I am saying that many people don't understand more than the executive-briefing level of information that the media pushes.  That said, if management makes a final decision that they want "cloud solutions" then I (and anyone else) should abide by the employer's decision and do our best to make it work as well as possible.  If a person cannot get themselves to do that then they should consider a different employer - but let me leave that for a different post.

The type of situation that reveals this lack of in-depth understanding is someone trying to justify a cloud solution by saying it will save money but being unable to back that up with any further details.  If the only further justification is "we don't need to buy hardware" then you are likely in the same situation.  If the application/service is mission critical then you may end up with a really big problem.

If someone wants to talk about cloud solutions with me then I expect them to be willing to start the discussion with the following items at a minimum:
  1. Security requirements
    1. application access
    2. data sensitivity
  2. User base
    1. size
    2. locations
    3. types of users
      1. public
      2. internal only
      3. etc
  3. Data sources, locations and integration types
  4. Performance/capacity requirements
  5. Availability requirements
  6. Disaster recovery requirements
  7. Compliance/auditing requirements
 If the answers don't exclude cloud solutions then some cloud questions/discussion items should include:
  1. Potential internal staffing needs
    1. training
      1. developers
      2. operations/support personnel
  2. Potential network requirements
  3. cloud types
    1. 3rd party -  external cloud / hosted solution
    2. internal cloud
    3. hybrid
This last question (especially if potentially working with 3rd party/external cloud) should lead into
  1. Potential maintenance logistics/coordination
    1. If you require specific availability windows unsupported by the 3rd party then what?
    2. If you have maintenance windows for integrations/data sources then how is that handled/coordinated?
  2. Help desk / communications logistics
    1. If there is a problem, how is it reported?
    2. How are users notified?
  3. Monitoring
    1. how is it performed?  Can you leverage existing infrastructure? 
    2. Who performs it?
    3. what are the boundaries?  If there is a performance issue, how do you identify whether it is a cloud issue or a data source integration issue?
  4. Contracts
    1. Do you have adequate legal counsel to help in evaluating contracts so you are not left high and dry if a problem of any sort comes up?
      1. hosting company goes out of business
      2. hosting company raises rates beyond what is affordable
      3. hosting company can no longer meet your needs
      4. hosting company not adaptable to changing technology needs
    2. Cost model
      1. Are you able to account for all changes required by your organization and still determine that there is a real cost savings?
This last item should cause you to ask - if you are forced to leave a hosting company:
  • Can you get all your data out?
  • What format is your data available in?
  • How transportable is your application/service to some other cloud vendor?    Do you have to rewrite your application from scratch? 
 This is certainly not a comprehensive list of things to consider but is meant to cause people to pause and consider more than just the short briefs on cloud computing (and technology in general for that matter).  Every organization is different, so decision makers must use knowledge, experience and common sense of their own and from people with a proven track record who work for them.



Saturday, December 15, 2012

Scala for Java properties file conversion utility

Recently it was determined that it would be best for some of our custom applications to source their text (UI labels, messages, errors, etc) from a database rather than from text files.  The main application of interest uses ResourceBundles in many cases and has around 10 base bundle names which are translated (somewhat incompletely) into Spanish from the main English data.  There are around 3200 data items involved.  In a previous application, I was able to simply do some regex work in Notepad++ and generate some SQL scripts to populate a database table backing the resource bundles (via a customized ResourceBundle.Control class).
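
For context, the DB-backed bundle trick hinges on ResourceBundle.Control.  Here is a minimal sketch of the idea in Scala, assuming a hypothetical fetch function that queries key/value pairs by base name and locale (the real class does more):

import java.util.{Collections, Locale, ResourceBundle}
import scala.collection.JavaConverters._

// Hypothetical: fetch(baseName, locale) queries the backing table and returns
// the key/value pairs for one bundle.
class DbBundleControl(fetch: (String, Locale) => Map[String, String]) extends ResourceBundle.Control {
    override def getFormats(baseName: String): java.util.List[String] =
        Collections.singletonList("db")

    override def newBundle(baseName: String, locale: Locale, format: String,
                           loader: ClassLoader, reload: Boolean): ResourceBundle = {
        val entries = fetch(baseName, locale)
        new ResourceBundle {
            override def handleGetObject(key: String): AnyRef = entries.get(key).orNull
            override def getKeys: java.util.Enumeration[String] =
                Collections.enumeration(entries.keys.toSeq.asJava)
        }
    }
}

// usage: ResourceBundle.getBundle("messages", locale, new DbBundleControl(fetchFromDb))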

For the application at hand though, it seemed awkward after learning a few lessons from the previous application.  The solution I chose to follow instead was a simple Scala script.  Not trying to ruin the story but I will say that the end result is pretty nice.  It allows me to rerun the process without having to worry about missing steps as was the case when doing it by hand.

Why use Scala?  It seems more "composable"?  It definitely has less boilerplate code.  Was it faster to create this than a similar Java utility - maybe not.  Scala isn't my day to day language but you won't improve without working with it.  This was a low risk opportunity to improve my Scala skills.

import scala.io._
import java.io.{File, PrintStream}

// Derive the language code from a name like "messages_es.properties";
// files without a locale suffix hold the base (English) text.
def getlang(file: File) : String = file.getName.lastIndexOf('_') match
{
    case -1 => "EN"
    case x:Int => file.getName.substring(x+1,x+3).toUpperCase
}

// Strip the locale suffix and extension to get the bundle base name
// (e.g. "messages_es.properties" -> "messages").  listFiles returns bare
// file names, so no path handling is needed.
def getbasename(file : File) : String =
    file.getName.replaceFirst("_.+", "").replaceFirst("\\..+", "")

val myOut = new PrintStream(new File("C:\\<path>\\<app-id>-msgs.sql"), "UTF-8" );

myOut.println("set escape on")
myOut.println("set define off")
myOut.println("truncate table <app-id>_MSGS_TBL;")

// Non-text resource files identified up front and excluded.
val ignoreFiles = Set("conn.properties", "log4j.properties", "packtag.properties", "struts.properties");

for (file <- new File("C:\\<path>").listFiles.filter(f => """.*\.properties$""".r.findFirstIn(f.getName).isDefined).filter(f => !ignoreFiles.contains(f.getName)))
{
    val lang = getlang(file)
    val basename = getbasename(file)
    try
    {
        // Keep only key=value lines, skipping comment lines.
        for(line <- Source.fromFile(file, "Cp1252").getLines().filter(l => l.indexOf("=") != -1 && l.trim.charAt(0) != '#'))
        {
            val key = line.substring(0, line.indexOf("=")).trim
            // Double single quotes for SQL; replace("&", "\\&") emits a literal
            // backslash so "set escape on" treats the ampersand as data.
            val v = line.substring(line.indexOf("=")+1).trim.replaceAll("'", "''").replace("&", "\\&");

            myOut.printf("Insert into <app-id>_MSGS_TBL(LANGUAGE,COUNTRY,VARIANT,BASENAME,KEY,VAL) values ('%s',' ' ,' ','%s', '%s', NVL(N'%s', N' '));\n", lang, basename, key, v);
        }
    }
    catch
    {
        case e:Throwable => println("-- Error with file:" + file.getName)
    }
}
myOut.close()
Notes
  • Sorry the formatting isn't optimum.  
  • I purposely removed some path/file information directly related to my employer - so this example would need minor changes to run in a real environment.
  • The source properties files only existed in the root of the src code directory so I didn't have to worry about full Java package names or traversing sub-directories.
  • The script does escape single quotes and ampersands as needed.
  • The source English text was in the base bundle but I remap that to associate it with the English language.
  • The script ignores some non-text resource related files which I identified upfront.
  • This skips comment lines.
  • I coded the insert statement to never insert null into the val column since I defined it as non-null.  A single space was used in place of null.  This just fits our environment better.
  • Not a lot of error handling in place.
  • The original resource authors apparently used the cp1252 encoding instead of the expected iso-8859-1 encoding which caused some confusion briefly. 
  • Execution of the resulting SQL script identified issues such as duplicate property keys/values.  Each of those was researched and resolved manually (since some had different text for same key).





Tuesday, November 20, 2012

Hibernate versus Apache Commons DBUtils


When is it time to move from a simple framework to something like Hibernate, even if you only use a subset of the functionality?

In many of the applications I have to deal with, a full ORM tool is overkill.  At the same time, using straight JDBC is repetitive and error prone.  Some years ago, I started using Apache DBUtils, which was an improvement over JDBC but still lacked some things.  I decided to create a wrapper around DBUtils which removed some of my remaining complaints.  Over the last few years, this wrapper has slowly morphed and increased in scope and complexity to where it is better described as a framework.  It is responsible for significantly reducing errors and has eliminated certain types of errors completely.

My problem now is that when an application is borderline between needing an ORM or not, which direction should be taken?  I have been going through the Hibernate documentation and I realized that much of what DBUtils does can likely be done with Hibernate (thinking in terms of native SQL and POJO objects).  This is what I was reviewing when I started thinking about it.

On further thought, it might be worthwhile to replace DBUtils with Hibernate in my current framework.  This may allow an easier transition to the full Hibernate functionality if the need arises.

Additional features provided by my framework (the enum-keyed SQL lookup is sketched below):
* Handles some additional simple data conversions
* Provides Java enum based references to target databases
* SQL ResourceBundle & MessageFormat support with queries referenced via Java enum
* Ability to generate nested maps from a returned POJO list

What is missing:
* Caching of query results using a less labor intensive method
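
To make that feature list more concrete, here is a minimal sketch of the enum-keyed SQL idea, assuming a hypothetical queries.properties bundle and query names (the real framework layers conversions and nested-map generation on top of this):

import java.text.MessageFormat
import java.util.ResourceBundle
import javax.sql.DataSource
import org.apache.commons.dbutils.QueryRunner
import org.apache.commons.dbutils.handlers.MapListHandler

// Hypothetical query keys; the SQL text lives in queries.properties, e.g.
//   USER_BY_ID=select * from {0}.users where id = ?
object Queries extends Enumeration { val USER_BY_ID, ACTIVE_USERS = Value }

class SqlRunner(ds: DataSource) {
    private val bundle = ResourceBundle.getBundle("queries")
    private val runner = new QueryRunner(ds)

    // Look up the SQL by enum name, apply MessageFormat args (e.g. a schema
    // name), then run it with ordinary JDBC bind parameters.
    def queryMaps(q: Queries.Value, fmtArgs: Array[AnyRef], binds: AnyRef*) = {
        val sql = MessageFormat.format(bundle.getString(q.toString), fmtArgs: _*)
        runner.query(sql, new MapListHandler, binds: _*)
    }
}

// usage: sqlRunner.queryMaps(Queries.USER_BY_ID, Array[AnyRef]("HR"), Integer.valueOf(42))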

This is a good start - more updates to follow.

Monday, November 19, 2012

Java and PeopleSoft

Let me start with stating that I don't consider myself an expert in PeopleSoft.  I have been working on applications which integrate with PeopleSoft (mainly Campus Solutions / HR) for a little over 6 years though.  My background is mostly Java/C++ and not PeopleSoft.  During my time having to work with PeopleSoft I have learned a number of things that have made working with PeopleSoft a little easier in my opinion.

Some background first.  My first experience with PeopleSoft was when I was hired to convert a mixed implementation language application from PeopleSoft 8.0 to 8.9.  This provided many challenges to say the least - some related directly to PS but not all.  After getting the conversion completed, it was already apparent that the application should be fully redesigned.  After laying out the various choices and reasoning, management gave the go-ahead to a Java/Linux redesign.

The Java based redesign has been in production a little more than 3 years now.  Since then we have moved to PeopleSoft Campus Solutions/HR 9.0 with PeopleTools 8.51. 

The redone application infrastructure has been fairly stable during that time.  The basics are Linux, Java  1.6+, Oracle 9-11, an OpenSource application server and a HW load balancer.

Data resides in a combination of PS delivered tables and custom tables in the same database created using AppDesigner.  This was done because of a requirement that some of the data needed to be accessible to a group of functional users (using a combination of PS tools and other SQL query tools).

The data access method we chose was direct SQL reads from the database against delivered tables for "translate" values and lookup tables along with our custom tables.  We use direct inserts/updates against our custom tables.  To update delivered PS tables, we used a combination of delivered ComponentInterfaces (if available and appropriate) and custom ComponentInterfaces that I created against the delivered components.

For anyone wondering, a ComponentInterface (CI) is sort of like a DB view; it limits the data that is visible and provides a place to perform some types of activities.  My use was simply to provide access to specific data items.  When choosing or creating a CI, I highly recommend that you ONLY include the bare minimum of data you need to interact with.  If you choose to create a CI which includes all the underlying records, you can expect the CI to become invalid after applying PeopleSoft bundles (i.e. updates, upgrades) at some point in time.  If PS changes a field referenced by your CI (even one not used by you) - the CI will at times stop functioning.  You will usually find odd errors logged (if you are lucky) indicating that the CI could not be accessed.  If you load the CI in AppDesigner, many times it will automatically fix some issues (and in some cases it won't tell you what it "fixed").  Other times, you may need to go through the records and fields in the CI while looking for an indicator (like a field that no longer exists) and manually remove/fix it.

Here is a list of other things to note/remember:
* You use AppDesigner to generate the Java classes that represent the CI's and records.
* The generated code requires the PS psjoa.jar which comes with your PS installation.
* For anyone creating new CI's, don't forget to set the CI security in PS.
* When accessing a CI to retrieve data, remember that the underlying system tends to load data unrelated to what you are accessing as well.  You should reference the manuals or take a PeopleTools I/II class to better understand the internal workings.
* When saving data via a CI, the save can fail due to PS validations unrelated to what you are trying to save (this is related to my previous note).  You will likely have no indication of the problem in this case (from a Java perspective).  The only way to find the issue quickly is to perform the matching operation in the PS application itself (if possible).  The reason behind the problem relates to PS changing validation logic over time, and in some cases PS doesn't convert things appropriately.  The longer you have PS and the more upgrades you go through, the more likely you are to run into odd problems with data.
* For us, creating the entire set of Java CI's and supporting objects results in the generation of a combination of over 7000 Java classes and interfaces.
* AppDesigner sometimes generates code which isn't valid and you will need to tweak it (sometimes methods get duplicated for some reason).  I'm not sure if this is truly a PS problem or if maybe the PeopleTools data is corrupted somewhere.
* The fact that each and every type of PS collection results in a new collection class is inconvenient at best.  The best way I found to create reusable code that works with the PS collections is to create a utility class (or classes) which uses Java reflection (and generics to ease things).  A minimal sketch follows this list.  I may update this later with more detail.
* Don't forget that PS doesn't use NULL for character fields - it uses a single space.  An "isEmpty" type utility method which checks for NULL or a single space is handy (also sketched below).
* There are some places where PS has odd choices for key structure.  It seems like remnants of decisions from long ago.  An example of this is "service indicators" - (from memory) they have a date/time type field which doesn't have enough resolution to prevent duplicate keys, so I ended up having to force a delay between multiple service indicator assignments so duplicate keys were not generated (which caused a save failure).
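
As promised above, a minimal sketch of the reflection utility and the isEmpty helper.  The getCount/item accessor names are what I recall the AppDesigner-generated collection classes exposing, so treat them as assumptions:

object PsCollectionUtil {
    // Walk any generated CI collection via reflection; getCount/item are the
    // accessor names assumed from the generated psjoa-based classes.
    def items(coll: AnyRef): IndexedSeq[AnyRef] = {
        val cls = coll.getClass
        val count = cls.getMethod("getCount").invoke(coll).asInstanceOf[Number].intValue
        val item = cls.getMethods.find(_.getName == "item")
            .getOrElse(throw new IllegalArgumentException("no item(...) on " + cls.getName))
        // Box the index to match the declared parameter type (long vs int).
        val box: Int => AnyRef =
            if (item.getParameterTypes()(0) == java.lang.Long.TYPE) i => java.lang.Long.valueOf(i.toLong)
            else i => java.lang.Integer.valueOf(i)
        (0 until count).map(i => item.invoke(coll, box(i)))
    }

    // PS stores a single space instead of NULL for character fields.
    def isEmpty(s: String): Boolean = s == null || s.trim.isEmpty
}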

I'm sure there is much more I am forgetting.  I hope to update this over time to try and retain some of the more odd ball items.






Wednesday, November 7, 2012

Caching quagmire

There are a few drastically different views on caching, ranging from "it is a crutch for poor designs" to "it is a key design aspect which supports scalability in big data environments".  I take a practical view: use what makes the most sense for the problem at hand.

My particular experience involves mostly two cases.  In one, an application was created which did not scale well.  A lack of time to reengineer it left caching as the most expedient solution.  The other case involves an ERP system which was stretched to the limits on fairly expensive hardware, leaving very little capacity for data manipulation by externally integrated applications and integrations.  Limited money combined with substantial growth in the user base led to caching as a way to minimize the impact of non-ERP application overhead.

Memory tends to be cheaper than large quantities of CPU cores so my organization has a large investment in low/mid-range servers with memory sizes in the 8-32GB+ range.

Where this takes us at the moment is an ad hoc collection of solutions which include Memcached, EhCache and JBoss Infinispan. I find supporting Memcached with one specific Java application of ours overly painful.  Maintenance on servers or unplanned downtime tends to result in data issues.  I can't blame this fully on Memcached - some developers decided to use Memcached like a DB in some functionality, which was a mistake.  EhCache worked OK but I have always found the documentation lacking, and trying to figure out feature/licensing aspects between it and Terracotta drove me to find something more straightforward.  The "phone home" feature of EhCache is also a little disconcerting even though there is a way to disable it.  Anyways, I have written a small JSR 107 style wrapper which allowed me to convert one application from EhCache to Infinispan without any real difficulty.  I can't say that it made any real performance difference but that wasn't the problem which needed solving.  The conversion from Memcached to Infinispan is being planned - that has some challenges due to some of the odd ways it is used.  My preference is to keep the Infinispan cache in-process with the application on each cluster node (at least until I can take a broader look at things and determine if a more grid-like environment serving data to our entire application environment makes sense).  I'll have to post updates as the process evolves.
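
For the curious, the wrapper boils down to a tiny facade plus per-provider adapters.  A stripped-down sketch with hypothetical names (the real wrapper tracks the draft JSR 107 API from cache-api-0.5):

// Minimal cache facade; applications code against this, not the provider.
trait SimpleCache[K <: AnyRef, V <: AnyRef] {
    def get(key: K): Option[V]
    def put(key: K, value: V): Unit
}

// Infinispan adapter, assuming an org.infinispan.Cache instance is supplied.
class InfinispanSimpleCache[K <: AnyRef, V <: AnyRef](backing: org.infinispan.Cache[K, V]) extends SimpleCache[K, V] {
    def get(key: K): Option[V] = Option(backing.get(key))
    def put(key: K, value: V): Unit = backing.put(key, value)
}

// EhCache adapter; Ehcache stores Element wrappers rather than raw values.
class EhcacheSimpleCache[K <: AnyRef, V <: AnyRef](backing: net.sf.ehcache.Ehcache) extends SimpleCache[K, V] {
    def get(key: K): Option[V] = Option(backing.get(key)).map(_.getObjectValue.asInstanceOf[V])
    def put(key: K, value: V): Unit = backing.put(new net.sf.ehcache.Element(key, value))
}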

I recently took a quick look at some info on Hazelcast and it has some interesting aspects.  It seemed a little easier to configure (less options and some defaults which may make sense for us).  I may take a closer look at this down the road. 

Wednesday, October 24, 2012

My IT mistakes: Things I would rather forget

In IT, you can guarantee that there will be bugs, mistakes, glitches and all kinds of other foul things that will occur over time.  Here are a few that I attribute to my own mistakes.  I can only hope that someone else may learn from these.

  1. Did a technology refresh of many of the libraries in an application which performs a form of single-sign-on(SSO) to another application.  The SSO was tested and worked fine but the newer HTTP libraries handled browser detection differently.  The updated SSO system was deployed to production.  The end result was parts of the SSO target application failed to function because of problems with browser detection which had to occur during the SSO process.  
    1. I learned that when dealing with SSO functionality, the target application should be thoroughly exercised as well.
  2. Turned on some extensive logging and ran out of disk space.
    1. I learned to configure logging systems to rotate logs and maintain a fixed maximum disk usage.  I now archive/compress logs off to another filesystem if longer term retention is needed.  A sketch of this kind of setup follows.
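
Since we use log4j elsewhere, here is what that lesson looks like as a log4j 1.x properties setup (path and sizes are made up; this caps total disk usage at roughly 60MB):

# Rolls the log at 10MB and keeps 5 old files, capping total disk usage.
log4j.rootLogger=INFO, file
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=/var/log/myapp/app.log
log4j.appender.file.MaxFileSize=10MB
log4j.appender.file.MaxBackupIndex=5
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d %-5p %c - %m%n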

to be continued.


IT nightmares

These are things I have experienced that make me shake my head in disbelief.
  • Why did the power go out in the server room?
    • Because someone plugged a fan in.
  • I can't ssh to my current production server, what is going on?
    • Uh, it was accidentally decommissioned.
  • Is this decommission ticket correct?
    • Ah yes, I put it in 8 months ago and it indicates it was handled and closed
  •  What happened - why did the network act up?
    • Ah, a network cable came unplugged.
  • What do you mean the manager is having a large scale demo today?  The application hasn't been tested with more than two users.
  • He wants the change in production without being tested? 
  • Why can't I ssh to the server?
    • A duplicate IP problem
  • What do we have to shut all the servers down for?
    • The AC unit is leaking water all over the floor.
  • What do we have to shut all the servers down for, again?
    • Need to redo the electrical wiring.
  • Why did problem <X> occur?
    • anomaly 
    • don't know
    • etc
  • What happened to the WAN?
    • Big telco (named <X>, having lots of wireless coverage) keeps hosing up their routers or has recurring hardware issues.
  • What happened to the DB?
    • It was shutdown by mistake.
  • What happened to my test last night?
    • Without warning, desktop group had forced changes and rebooted systems overnight.
  • What happened to my 3 hours of work I forgot to save at 2am when I finally left?
    • Without warning, desktop group had forced changes and rebooted desktop
  • After sending a highly concerned email one Sunday to management and the security team upon recognizing that data within the 3500 emails I received Saturday contained information indicating someone was actively trying to hack one of our systems.. the response was...
    • Security manager: Oh, we were having a company perform a security penetration test.  The enterprise systems manager indicated he didn't think anyone was monitoring.
  • Has anyone made changes to anything (sent to server, network and DB teams)?  The application stopped talking to the DB.  I can access the DB from other applications though.
    • "Nope, no changes here".. 45 minutes later.. "Try it now".  Hm, works fine now so what was wrong.  "Oh, a static route was created that broke things and the person responsible was off today so we didn't know."

to be continued (unfortunately)




Thanks

People, organizations and companies I should give more thanks to:
  • My wife and kids
    • They put up with my long hours, constant inconvenient interruptions and the aggravated mood I end up bringing home more than any of us care for.  I can only thank them for being as patient as they have while I work toward finding a more reasonable environment.
  • Mom and Dad
    •  They put up with a lot from me.  Nice to see how great I turned out though :)
  • Joaquin M, Tim R and their families.
    • Friends who are lights to the world and I wish I could keep in better touch with.
  • Larry and Mari 
    • Miss y'all a bunch; hope that Jesus keeps you and your family close.  Lots of wonderful times at the bible studies which seem so long ago now.
  • My Grandpa Brown
    • For being a wonderful role model for Christian faith & strength and just a great Grandfather in general.
  • Gregg J, Mark K, Clint, Jeff K and other college friends and their families
    • Many fond memories; wishing it was easier to keep in touch
  • Ray and Al from back home
    • So many good laughs and unforgettable times; wish it was easier to keep in touch with you as well. 
  • Friends I have worked with such as; Rick B, James S, Michael W (x2) and others from CA along with folks from the VCCS such as Violet M, Monica H, Kathy H, Andy C, Jamie W and others. These folks made me laugh, think differently and/or supported me in many ways.   
  • Hanover County Police and EMS
    •  Recently a pretty severe car accident occurred within a stone's throw of where I was walking.  As sad as the circumstances were, it was obvious that these folks are awesome at what they do and are a huge benefit to our community.  These folks deserve more thanks than they likely receive and have my respect.  Thanks for being there!
Here are some local businesses which I am thankful for.  In a world where the $ takes precedence over almost everything else these places have not lost sight of their customers.  They have provided us wonderful service and in some cases have almost felt like family at times.
  • Mexico Restaurant - Short Pump
    • http://mexico-restaurant.com/pages/locations.html
    • Great food and wonderful staff.  We go there fairly often and they have watched our kids growing up and it is always a pleasure to be there. 
  • Extra Billy's BBQ - Midlothian
    • http://www.extrabillys.com/ExtraBillysBarBQ2.htm
    • More great food and wonderful staff.  This is another place we have had many family meals at and swapped stories with the staff. 
  • Greek Islands Restaurant - West Broad St
    • http://www.greekislandsva.com/
    • Wonderful people working there.  Feel like part of the family.  My son LOVES their pasta.  The food is great and you get a pretty good sized portion for a very reasonable cost.  Highly recommend stopping by if you get the chance. 
    • UPDATE: Jan 2015; Sadly the owners have decided to retire and closed down.  We will sorely miss both the restaurant and the couple running it.  I do wish them a wonderful retirement though!
  • Chick-fil-A - West Broad St & Short Pump
    • These folks go above and beyond on a regular basis.  Great food, friendly & fast service.  This is the best Chick-fil-A I have been to.
  • Applebees - Short Pump
    • Extremely friendly staff, excellent service and food.  They have grown to remember us along with our normal meal choices.   
  • McGeorge Toyota
    • Very courteous and efficient!   They made us, as customers, feel important and they try very hard to meet our needs.  Thanks!
  • Delta Temp
    • http://www.deltatempinc.com/
    • These guys sorted out our messed up geothermal heat pump installation from another company and were more than generous in what they charged.  I wish I had started with them from the beginning.
  • Virginia Urology / Dr. Kramolowsky
    • http://www.uro.com/physicians/eugene-kramolowsky/
    • Couldn't have found a nicer doctor to help me through some kidney stones

Monday, October 15, 2012

F150 brake hoses and shocks

Phew.  After about 1 3/4 weekends, I finally got the front & rear shocks and front brake lines replaced on the F150.  With around 10 years of accumulated rust, it was a bear getting some of the bolts/nuts off several shocks and one of the brake hoses.  I only had one nut which required more than a blow torch to remove - it was the driver side brake hose where it connects to the hard line.  This was the line which had nearly burst, but a rubber jacket was holding the rust somewhat in place (the line bulged about 1/2" more in diameter).  After using the torch on it 6 times or so, I finally got out the pneumatic high speed cutter and cut down the center-line of the connection on the soft-hose side.  Once it cut into the threads a bit, it relieved enough stress on the connection that I was able to back the nut/bolt out (carefully, since I had already stripped the head and had to file flat edges back on it).  The downside was the need to cut off the connector on the hard line side, replace the connector and flare the tubing.  I had never flared a tube before.  Fortunately, the parts store had a tool available and some of the replacement hollow bolts.  I ended up making an extra trip because the replacement hoses didn't come with the required copper washers (which I had not noticed on the ones I took off).  After installing without them and seeing the brake fluid spray about 16 inches - it was quickly obvious what was needed.  I may go back and replace the hard lines as well - I want this to last another 5 years without major issues if possible.  I did more research on flaring hard brake lines and I may go back and do them again.  The directions the parts place gave me worked, but I want to make sure it is totally safe since my family and I travel in this vehicle.

The shocks in general were not very difficult (other than rust issues) but WD40, a torch, lots of effort and too much time got them off without stripping any components or damaging anything.  The Monroe SensaTrak shocks went on quickly.  My only real concern is that the shock tower in the front is rusting through.  I'm trying to justify a welder to myself and my wife - this might be a good start toward that.  A small hole in the muffler indicates the adventure might continue.

I had read another blog regarding someone having trouble getting the rear shocks in place because they came compressed with a strap which needs to be cut.  Once cut, it would be difficult to get the shock in place.  I solved it by mounting the bottom of the shock and putting a small length of soft tubing on the top rod/nut.  The tube had enough length to go through the shock tower hole.  Once I cut strap binding the shock, the tube provided enough guidance to the shock as it expanded that one rear went into place without assistance and the other only needed a minor nudge.  If I need to do it again, I would probably use a more semi-rigid tube instead of soft tubing but that is what was handy.

Several laps around the neighborhood went successfully.  The truck doesn't seem to lean quite as severely now and the brakes are not mushy.

Thinking about doing the radiator hoses soon and maybe a new battery.  No indications of water pump issues at this point.  With almost 130k miles on it, I think it is holding up fairly well.

Monday, September 10, 2012

Development universe - what works for me



  1. Cygwin/X
    1. Free
    2. Works for most of my uses, mainly for xterms but also including running jvisualvm on some remote servers
    3. Not very pleasant running this over VPN/DSL. 
  2. Java 7
    1. Numerous good improvements over Java 6
    2. Could do without the security holes
    3. Missing closures and reified generics
    4. This is the day to day work horse
    5. Looking forward to Java 8 closure and lambda support.  I do have some code in which a good amount of boilerplate should go away.
  3. Scala 
    1. version 2.10.x - This version is where I see it finally having enough features to be of more general use.
    2. Using for some utilities and batch data maintenance.
    3. Currently have a utility using Squeryl for database access.  There are a number of alternative technologies which I have not looked closely at - this was just the first I was able to get working reasonably quickly.
  4. Eclipse Juno (started using Eclipse at version 2)
    1. Working pretty well since I upgraded a month or so ago
    2. Have not installed all the plugins I had before but the day to day ones are there
    3. Pretty heavy memory requirement
    4. Not always the fastest response times during some activities
    5. Have CAS server and a custom application running in Tomcat 7 launched from Eclipse.  Given enough memory this works, but it isn't speedy.
    6. Have a number of plugins/features I tend to use
      • Pydev 
      • JDepend4Eclipse
      • GrinderStone 
      • Data Hierarchy 
      • Apache Directory Studio LDAP Browser UI 
      • Apache Directory Studio LDIF Editor 
      • Apache Directory Studio Schema Editor 
      • Eclipse memory Analyzer 
      • Subversive SVN 1.7 
      • CXF WebServices
      • PMD plugin 
      • Recently installed Eclipse Modeling Framework and Papyrus 
  5. Tomcat 7
    1. Been very stable
    2. Caught off guard initially by a cookie handling difference from version 6.  Fortunately, some configuration parameters exist which make it compatible with prior versions
    3. Been scalable enough - especially when combined with a HW load balancer. 
    4. Nice being able to deploy on shared storage (NFS) and have multiple servers use it
  6. Glassfish 3.1
    1. This is nice for some internal applications, compatibility testing and some other uses
  7. Redhat Enterprise Linux  3-6
    1. Finally got off the ancient version 3 a year or 2 ago for some legacy apps
    2. Version 4 is ok but missing a few features
    3. Version 5 has been decent.  Wishing some of the functionality required less configuration.  Most of the issues have been inflicted by things like "features" of certain hardware, default settings and occasionally problems with the base VM image configurations provided by another group.
    4. Just received my first version 6 VM.  Have had issues already with the interaction between the server auth mechanism and the OS/setup - profiles not being run on login.  Had a problem with an NFS mount hanging when doing an "ls" on it - the server group worked around it but they don't know the root cause/fix.  Apparently, disabling jumbo frames was the work-around.  I am going to try and see if the transparent huge page support works as well as the "regular huge page" support.  I saw some reports that it wasn't working all that well, but without testing it is hard to tell.  If it does work, it will certainly simplify requests for servers.  A downside is that I don't know that this always helps because we use virtualization heavily, so the underlying server can affect results significantly.  Virtual servers moving between hosts of different configurations cause obvious performance changes.
  8. WinMerge
    1. This has made working with code generated from PeopleSoft app designer almost bearable.  When 7000+ files are involved you really need tools.  Of course, other tools/utilities were needed as well to make the process fully bearable.  
  9. SQLDeveloper 3
    1. Fairly nice tool
    2. Can't beat the price - free.
    3. Gives me 80% of the functionality I need
    4. The 20% lacking means I still get to interact with our DBA group and utilize their skills (they like to be needed too and they are very skilled).
  10.  Notepad++
    1. Nice tool and free
    2. The various plugins are helpful - especially some of the XML and search/replace items
  11. PeopleSoft AppDesigner - PeopleTools 8.51
    1. Not a lot of options for working in PeopleSoft.  Use it for creating Component Interfaces and a few things like that.  The fewer things done directly in PeopleSoft, the better off we are.
  12.   Visio
    1. The "normal" tool for diagramming by most in current organization.  Ok for documenting DB designs, OO Class diagrams, etc.
  13. Subversion 1.7
    1. I have used this for a number of years.  It has its warts but improves a little each year.  I have used a number of commercial systems over the years and this is better than 99% of them.  We don't really have a reason to move to a different system like Git, Mercurial, etc.  Changing source control systems based on current trends is just silly if what you have works for you and a new system is not a much better fit for your work flows/environment.  I look forward to future versions of subversion and am grateful for what the development team has done.
    2. I am looking forward to converting to version 1.8 now that it is out.  I am hoping that some aspects of merging will be a little simpler.  A lot of the time, the issue is just a lack of adequate planning on my part which is rooted in lack of project planning at the enterprise level.
  14. Gimp
    1. Not an everyday need but useful on occasion.  I think some other tools are easier to use for simple needs (mainly trying fix/update icons every so often).
  15. BIRT reporting
    1. I typically use the BIRT version of Eclipse although at one point I had to use separate installations because some things just were not compatible.  At the moment, I am able to do ad-hoc reports and development in the same IDE instance which is nice.  Not the most exciting looking reports but VERY useful for the amount of time/money invested.
  16. Artifactory
    1. This is a recent addition and appears easy to use.  I was able to hook it up with our Active Directory with very few problems.  I have another more detailed post on it but it is a nice product.  
    2. Related to this is the Jenkins Artifactory plugin - nice as well but wish some specific features were available in the free version (like generic artifact resolution).  The good news is that the Nexus Repository Connector plugin did what I needed.
  17. Jenkins (previously Hudson) build server
    1. Very useful for maintaining build stability/reliability and provides useful plugins which help automate many other batch like tasks (small support tools, metrics gathering/reporting, operational/support tasks)
    2. Have some Groovy/SQL jobs - these work but I have some issues working in a transactional manner which I am still trying to sort out.  It may require new versions of some jars.
    3. The plugins are working together pretty well at the moment which is nice.  Was able to create a job which did some scp copies, grabbed a jar application out of Artifactory via the Nexus Repository Connector Plugin and then executed it on the copied files, with an email sent on failure, etc.  Really straightforward - hoping to leverage the features a lot more soon.
  18. Ant
    1. Currently Jenkins builds are set up to use Ant.
  19. Maven
    1. Only using for dependency management within Jenkins ant builds.  May revisit this now that Artifactory is running.
  20. Various libraries/frameworks
    1. Apache DBUtils
      1. created other useful features on top of this
    2. Oracle Universal Connection Pool (UCP)
      1. Switched from the delivered Tomcat pool and haven't had a need to go back
    3.  Oracle ojdbc6.jar
    4.  hsqldb.jar
      1. Typically use for unit testing
    5. sqltool-hsqldb-2.2.5.jar
      1. Integrated with my Apache DBUtils wrapper; allows running scripts from within our applications
    6. Apache CXF 
      1. for some minor web service integrations.  Likely use directly for other services or as part of implementation using Apache ServiceMix in the future
    7. Apache HTTPComponents client/core
    8. Log4j and a few others now
    9. Caching - both of these work fine but I am leaning toward Infinispan at the moment unless some other option I am investigating steals the show (note, created wrapper around these using cache-api-0.5-20120125.003444-41.jar).
      1. JBoss Infinispan cache
      2. Ehcache
    10. Apache Struts2
    11. struts2-jsf-plugin
    12. JSF 2
    13. mockito
    14. junit-4
      1. Unit/integration testing
    15. commons-cli
      1. Provide a command line interface to utility code
    16. Primefaces 3.x
      1. Slowly working on integrating some new functionality with this.  Fairly nice results.
    17. StringTemplate (http://www.stringtemplate.org/) - 
      1. Terence Parr has done a lot of great work over the years (especially with ANTLR).  
      2. Works perfectly and is easy to use
      3. I could use this for a lot more stuff but mainly use it for some email templates for some operational reporting (300k+ emails a year)
    18. JASIG CAS
      1. Slowly working to get this setup 
      2. Working on getting a distributed cache integrated for its tickets
    19. Apache Shiro
      1. This is looking like a viable solution for securing custom apps and integrating with CAS as well.
    20. Spring 2 framework
      1. Using in a lot of places but most of the apps didn't start with it so it has a grafted on feel in those apps.
      2. Very useful in combination with moving configuration outside of WAR files (to shared NFS storage) and using custom code along with "environment type" data (i.e. prod, test, dev, etc) as part of the bean naming/lookup process (a tiny sketch follows this list).
      3. Note, I had considered CDI briefly but my short review seems to indicate that there isn't a good way to externalize configuration outside of the WAR file, and that was totally counter to the path I am taking to make operations more robust.
    21. spring-ldap
      1. Used to simplify some code as we convert from DB backed auth data to AD
    22. QAS
      1. Commercial product - Address cleansing support; works well enough - more features than we currently use.
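
The environment-based bean lookup mentioned under Spring above, as a tiny sketch - the property name and "-prod/-test/-dev" suffix convention here are made up, not our actual ones:

import org.springframework.context.ApplicationContext

// Resolve "someService-prod" vs "someService-dev" etc. from one context.
def envBean[T](ctx: ApplicationContext, baseName: String, cls: Class[T]): T = {
    val env = System.getProperty("app.environment", "dev") // e.g. prod/test/dev
    cls.cast(ctx.getBean(baseName + "-" + env))
}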
Many more items and eventually links to include.

Friday, August 24, 2012

Dream home projects

Home projects we dream(ed) about:
  • 4 season sun room
    • Had folks come out from Creative Energy (here) to talk about a sun room.
      • Wonderful products
      • Very good materials/engineering
      • Like the company
      • Cost way more than we are willing to pay
  • A sunny room (maybe with a pool) and retractable enclosure
    • Had a number of emails with someone from Libart (here)
      • Another GREAT product
      • Pretty good materials
      • Really like the company and attitudes
      • Cheaper than the sun room, but the benefit compared to cost wasn't high enough.
  • Below-grade addition (1 or 2 story)
    • Really want to do it myself
    •  Concrete walls
      • Am thinking ICF might be the way to go
      • Also considered Superior Walls but they thought it might be pricey to use for an addition
      •  Reduce load on heat pump
      • sound deadening
    • Being "basement like", would need to somehow deal with excavating below existing foundation.  Looks possible but need to consult with someone more experienced.
  • "Screened" deck
    • Had a quote from a company for $5,000 for a pressure treated deck roughly 12x14 or so. 
    • Looking at building it myself now.
    • Result must be lower maintenance than our existing deck, which looks like it was built in 1880 and never maintained.
    • Thinking about some (Brazilian?) hardwoods
      • Importers
      • Expensive but given the right wood it could last 50+ years
      • Typically still requires an oil sealer to maintain the beautiful colors
      • Ipe was used on a number of high profile boardwalks
      • Some complaints about "sustainable" forests.  Most imports state that the forestry practices are sustainable now for their suppliers.  Not sure..
    • EzeBreeze windows
      • Here
      • Looks DIY friendly
    • Probably make concrete piers
    • This project should be in our budget and needs to get done soon one way or another.
  • Steel shingles (roofing)
    • various companies
    • Long warranties
    • Might be cost effective if I can install myself; some companies have training of some sort I think.
  • Greenhouse
  • Workshop
More to come..

Projects completed
  • Geothermal heat pump
    • Works great
    • Found a few companies/people worth their weight in diamonds
      • Delta Temp
        •  These guys are GREAT!  They fixed a VERY bad install; I wish I had started with them.
        • 804-739-5854
    • Learned a lot about bad contractors/companies and about how to evaluate better upfront
      • Town & Country mechanical - now defunct company from what I hear
      • Check that they have a license #
      • Check that they are licensed for a job of your size (i.e. Class A is better than C)
      • Check that their license isn't expired
      • Check with BBB
    • Learned that warranties are like statistics
      • lies and more lies
      • If the warranty is through the installer and the installer goes out of business... no actual warranty exists

Thursday, August 23, 2012

OSGI to use or not to use

What are good reasons to move to an OSGI based infrastructure?
* It solves a real problem in your organization
* Your organization has the skills to support it
* Overall developer efficiency is either improved or not reduced compared to other gains
* Your application requires high availability which OSGI improves for some deployment cases
* Your infrastructure tends to support it

We have not done any prototyping with OSGI yet.  It has been on my "look into list" for a couple years now.  As I started looking into Apache ServiceMix I noted it probably is a good time to start researching more closely.

The number of books which discuss OSGI is increasing, as is the number of software products which either implement OSGI or work with it.  The alternative "module" system, project Jigsaw, sounds like it won't make it into Java 8 and may not cover some use cases (although I am not very knowledgeable on it).  If you have the money, maybe something like JRebel would be an alternative which helps with availability.  Some recent new functionality in Apache Tomcat may improve availability as well, but we have not tried it; see the "Parallel deployment" Tomcat information here (sketched briefly below).
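
As I understand the Tomcat feature (again, we have not tried it), parallel deployment keys off version-suffixed WAR names, something like:

webapps/myapp##001.war   <- existing version keeps serving its already-bound sessions
webapps/myapp##002.war   <- newly dropped version picks up new sessions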

So far, our major need is the ability to minimize downtime, especially for regular application updates which normally don't require DB changes.  Also, as we move more toward Web Services and messaging, I see a lot of potential for OSGI to help.  It should help enforce better structuring of code.  Although, as with many other things - what you get out is based on what you put in.  Some developers may not put the effort into using it effectively.


Load balancing - Hardware or Software

Most of my load balancing experience is with F5 BigIP LTM.  This has served us well for a good number of years.  Recently I am left with a feeling that we should revisit this.  The hardware is getting older and is now showing signs of impending wholesale failure.  To top it off, the budget line items for replacements keep getting scratched off.

What are the options?
 Either more F5 equipment or maybe Citrix NetScaler or other hardware?
 On the other hand, we could go out on a limb with software load balancing.

What are the benefits of HW load balancing?
  1. Major features (of importance to us)
    1. SSL termination
    2. Load balancing
    3. Rewriting
    4. Caching
  2. Efficiency
  3. Ease of use
    1. TCL used for language
    2. Single point of SSL certificate management
  4. Support
    1. Nice to have someone you can call 
What are the downsides of HW load balancing?
  1. Cost
  2. (near) single point of failure (unless item 1 is not an issue for you)
  3. Limited features

What are potential benefits of SW load balancing?
  1. Potentially lower cost
  2. Potentially less of a "single point of failure"
  3. Handle high loads (notes below)
  4. More features available
  5. Flexibility 
There are downsides to SW load balancing.
  1. More challenging integration (potentially multiple products used to implement feature sets)
  2. Requires more in house expertise
  3. Others - work in progress
Software Options
  1. nginx
    1. http://nginx.org/
  2. lighttpd
    1. http://www.lighttpd.net/
  3. Apache Server
    1. http://httpd.apache.org/docs/2.2/mod/mod_proxy_balancer.html
  4. Apache Traffic Server
    1. http://trafficserver.apache.org/
  5. Squid
    1. http://www.squid-cache.org/
Need to include some supporting information sites.  It would be good to do some proof-of-concept implementations and some performance benchmarking; a starting sketch for the nginx option follows.
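
As a starting point for a proof of concept, here is a hedged sketch of nginx doing SSL termination and balancing across two backend Tomcat nodes (addresses and paths are made up):

upstream app_servers {
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
}
server {
    listen 443 ssl;
    server_name app.example.edu;
    ssl_certificate     /etc/nginx/ssl/site.crt;
    ssl_certificate_key /etc/nginx/ssl/site.key;
    location / {
        proxy_pass http://app_servers;
    }
}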



Central Authentication Service (JASIG CAS)

Problem:
A growing number of applications, all of which have their own authentication mechanisms.  Until recently, the authentication store consisted of a database and the user authentication data was pushed to most application authentication stores.  The amount of custom code, authentication data movement and data synchronization jobs has become a liability to our stakeholders.  Other problems include distributed logs of authentication and authorization info, which makes research into potential security breaches difficult.

Solution:
Conversion of applications to use CAS instead of custom or internal authentication mechanisms.  This project is joined with a project for the creation of an enterprise active directory and an associated MS FIM identity management infrastructure.

Benefits of CAS:
* More thoroughly tested authentication code
* Less maintenance required
* Less application specific integrations
* Some common applications have existing support
* Logs consolidated
* Can customize login for some use cases revolving around maintenance windows

I started to work on the CAS part of the project.  There were a number of hurdles to work out.  My preference was working in Eclipse, and CAS is set up to build with Maven.  Instead of reinventing the wheel, I decided to try to use an Eclipse/Maven integration combined with our Subversion source repository.  This seemed like it should work without problem, but then the dark side of open source came out.  The maintainers of the integration stopped work on its Subversion support and decided to only support Git.  I was not impressed with the tone of things.  Here is a link to a related conversation.  Someone did fork the code and the Eclipse update site for it is here.  Very grateful for that.

I now had functional Eclipse/Maven/Subversion support, but I can't say that I find the whole process very intuitive.  I was able to get the as-delivered CAS code to compile and run fine and I was able to branch it with some basic additions for my organization.  I need to further externalize some of the application setup; as I indicate in another post, I tend to try to leave most of the configuration in Spring based configuration files stored on NFS storage.

Started to create a Jenkins build for this.  I am trying to use the Maven build support but I am not sure how many things must get updated to build only the components I care about.  By default, it builds all the components.

In-progress and near-term customizations include:
* Authentication with AD
* Initial clustering support
* Currently using Ehcache but am still thinking hard about swapping it for JBoss Infinispan.  I think this is on the CAS todo list and I don't think it will be difficult to do.  We'll see if time permits me to work on this.
* DB backed service registry
* Multiple institution branding dynamically selected based on target IP or host.  I think branding support/infrastructure improved in CAS 3.5 but not sure of the overall state yet.

Longer term customizations:
* Ability to prevent logins by unprivileged users at certain times to allow easier testing of upgrades and other maintenance and support activities (handle per application and maybe even environment).

Also started prototyping the conversion of one custom application over to CAS.  This application poses some challenges since it has multiple login entry points for 2 distinct sets of users.

Need to research the Google auth/SSO support which is one of our bigger needs now.

[Update 9/9/2012]
* Active Directory authentication working well
* DB backed service registry implemented
*     Had a brief thought of trying to implement via a NoSQL solution or some other solution which would make the service registry highly available.  On a quick review, this doesn't seem to be as simple as hoped and more research into the registry indicates that availability should not typically be a problem.  Leaving this as a future enhancement idea though.
* Have an initial partial conversion of one client application. 
*     Redirection from application to CAS works. 
*     Main obstacle to review - the current app has 2 different authentication entry points for 2 distinct sets of users.  I am hoping that this will not require substantial customization to support.  For now, one entry point makes the most sense to convert to CAS.  I think that a Java EE filter combined with some load balancer rewrite magic will help maintain the appropriate authentication flows (a generic sketch follows).  Can't put many details in a blog on this though.
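
The generic sketch mentioned above - a servlet filter that pushes only one entry point through CAS while leaving the legacy login alone.  The paths and CAS URL are hypothetical, and the real check is more involved than a session test:

import javax.servlet._
import javax.servlet.http.{HttpServletRequest, HttpServletResponse}

class CasEntryPointFilter extends Filter {
    def init(cfg: FilterConfig): Unit = {}
    def destroy(): Unit = {}

    def doFilter(req: ServletRequest, res: ServletResponse, chain: FilterChain): Unit = {
        val r = req.asInstanceOf[HttpServletRequest]
        // Only the /staff entry point is CAS-protected; others fall through.
        if (r.getServletPath.startsWith("/staff") && r.getSession(false) == null) {
            val service = java.net.URLEncoder.encode(r.getRequestURL.toString, "UTF-8")
            res.asInstanceOf[HttpServletResponse]
               .sendRedirect("https://cas.example.edu/cas/login?service=" + service)
        } else chain.doFilter(req, res)
    }
}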

Next steps
*     Need to sort out ticket validation on client.
*     Need to work out branding; some apps centrally branded and others branded for 23+ institutions.  CAS 3.5.0 branding doesn't appear well documented at this point - am starting to review CAS code.
* Finish and deploy a Jenkins based build to a newly created dev server. 

Monday, July 2, 2012

Scala/Lift/Akka comments

I like the idea behind both Scala and Lift.  I have been going through a couple books and implemented a Scala password sync tool for a particular application as a short/mid-term solution.  I see a lot of potential in Scala and some of my concerns are being mitigated with recent releases.  I would not say that Scala is the most intuitive language but if you can find enough small utilities to write it seems to get easier.

So far I have not found a good reason to implement any tools with Lift.  I am hoping that will change at some point.  I am intrigued by some research I did indicating that JSF could be used with Lift.  I think it will take the implementation of a few practical utilities to understand the strengths, weaknesses and usefulness of the technologies.  At this point it would be foolish to implement anything major with Scala/Lift without determining that it provides a major benefit.  It would have to be a big benefit to overcome the downside, which is the lack of knowledge in the remainder of the team.  The last thing I need is more projects that only I can work on.

Some areas which may be a great fit for Scala 2.9/2.10 are batch-related.  With the parallel collections support (and more recent improvements as well), there is some real potential to simplify data-intensive tasks.  We tend to have some fairly powerful systems with lots of memory, and even on virtualized servers I think Scala could simplify and speed up some solutions.
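To make the comparison concrete: the fan-out that Scala's parallel collections give you with little more than a .par on a collection takes noticeably more ceremony in plain Java.  A hedged sketch of the Java equivalent using an executor (the per-record work here is invented):

    // Roughly what "records.par.map(process)" buys us, written out in plain Java.
    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class ParallelBatchExample {
        public static void main(String[] args) throws Exception {
            List<String> records = new ArrayList<String>();
            for (int i = 0; i < 1000; i++) {
                records.add("record-" + i);
            }

            // One worker per core, similar to what parallel collections do by default.
            ExecutorService pool =
                Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
            List<Callable<String>> tasks = new ArrayList<Callable<String>>();
            for (final String record : records) {
                tasks.add(new Callable<String>() {
                    public String call() {
                        return record.toUpperCase(); // stand-in for real per-record work
                    }
                });
            }

            List<String> results = new ArrayList<String>();
            for (Future<String> future : pool.invokeAll(tasks)) {
                results.add(future.get());
            }
            pool.shutdown();
            System.out.println("Processed " + results.size() + " records");
        }
    }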

With Scala/Akka we could set up some multi-node batch-type processes which are fault tolerant and reasonably fast.  The downside is that this seems to compete with various other solutions such as Condor, making the evaluation process more time consuming.  In the end, I would like solutions which are fault tolerant, highly available, efficient and able to leverage nearly all the available resources.  Even with virtualization, we have many systems which have a consistently low load but still require high availability.  Time is not on our side when it comes to evaluating possible solutions.


Web Services, Service Buses and such

Problem:
An application sends data updated by users to several other dependent systems (via a different mechanism per target), where each target has its own maintenance schedule.  Downtime of the dependent systems results in lost updates or delays during manual cleanup.

Solution (attempted):
Use Apache Synapse to mediate between source and target systems.  Set it up to queue messages (a "dead letter queue") when a target system is unavailable.  The queued messages are delivered once the target system becomes available (the system retries sending messages on a scheduled timer).  A reason this particular solution is attractive is its potential transparency to the application (no code change to the client application).

Initial technology involved: Synapse 2.1.0, CXF 2.4.2 client, MS FIM 2010 & IIS-hosted target web service

Solution (implemented):
I ended up implementing several important additions to the client application and some minor changes; Synapse was not used at all.

The first addition was functionality which verifies whether a WSDL URL returns data (no network connection failure and an HTTP 200 response).  This was integrated into some existing "service checking" code which runs in a regularly scheduled thread and manages the state transitions of some flags; the client application references these flags to determine whether services are available without attempting DB or HTTP calls during each end-user activity.

The second major addition was a queue/web services proxy (implementing the same interface as the existing service) which holds data structures representing the data provided by the caller of the target web service.  The caller was modified to check the WSDL service status flag on each call; if the flag indicates the service is unavailable, it retrieves a Spring-based singleton for the queue/proxy instance.  Also, if the client receives a network IO type exception while the service flag indicates the service had been available, the client performs the call again, but against the queue/proxy.  This logic results in either successful calls to the final target web service or queued-up data for future retry calls.

To process any queued-up calls, the "service checking" thread (on a transition from WSDL URL down to URL up) tells the queue-based proxy to use the normal client to send the queued messages - so they either succeed or stay in the queue if there are service/network connectivity issues.  I may document this a bit better here later.
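A stripped-down sketch of the queue/proxy idea follows.  The interface and names are invented for illustration; the real implementation is wired through Spring, the real client is CXF-backed, and there is considerably more error handling:

    // Hedged sketch of the availability-flag + queuing-proxy pattern described above.
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;
    import java.util.concurrent.atomic.AtomicBoolean;

    interface UserUpdateService {          // stand-in for the real target service interface
        void sendUpdate(String userData);
    }

    public class QueuingUpdateProxy implements UserUpdateService {
        private final UserUpdateService realClient;   // the normal (CXF-backed) client
        private final Queue<String> pending = new ConcurrentLinkedQueue<String>();
        private final AtomicBoolean serviceUp = new AtomicBoolean(true);

        public QueuingUpdateProxy(UserUpdateService realClient) {
            this.realClient = realClient;
        }

        // Invoked by the scheduled "service checking" thread.
        public void checkService(String wsdlUrl) {
            boolean wasUp = serviceUp.get();
            boolean nowUp = wsdlReturnsOk(wsdlUrl);
            serviceUp.set(nowUp);
            if (!wasUp && nowUp) {
                drainQueue();   // down -> up transition: retry everything we queued
            }
        }

        public void sendUpdate(String userData) {
            if (!serviceUp.get()) {
                pending.add(userData);    // service known to be down: queue instead of calling
                return;
            }
            try {
                realClient.sendUpdate(userData);
            } catch (RuntimeException networkProblem) {
                pending.add(userData);    // flag said up but the call failed: queue for retry
            }
        }

        private void drainQueue() {
            String data;
            while ((data = pending.peek()) != null) {
                try {
                    realClient.sendUpdate(data);
                    pending.poll();       // remove only after the call succeeds
                } catch (RuntimeException stillFailing) {
                    return;               // leave the rest queued for the next up-transition
                }
            }
        }

        // Does the WSDL URL answer with HTTP 200?
        private boolean wsdlReturnsOk(String wsdlUrl) {
            try {
                HttpURLConnection conn =
                    (HttpURLConnection) new URL(wsdlUrl).openConnection();
                conn.setConnectTimeout(5000);
                conn.setReadTimeout(5000);
                return conn.getResponseCode() == HttpURLConnection.HTTP_OK;
            } catch (Exception e) {
                return false;
            }
        }
    }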

Notes regarding attempt with Synapse:
  1. The Synapse docs are not great, and a number of classes and settings are poorly named or have various inconsistencies.  There are a good number of examples but they tend to demonstrate only very simple situations.  I found very few third-party references to Synapse, which makes me wonder how widely it is used in production applications.  I wish there were a greater variety of more complex examples and better documentation.  This may not be a good long-term solution overall but could have fit a near-term tactical need if it had worked.
  2. The most time-consuming aspect was trying to get the CXF client to communicate with Synapse in certain required use cases.  The CXF client utilizes the exposed WSDL and did not like what was being produced by Synapse.
    1. A more detailed explanation:
      1. CXF did not like the default WSDL returned by Synapse - which was really a modified version of what the live FIM web service returned.
      2. The use case requirement was that the CXF client must be able to communicate with the Synapse proxy when the final target web service is not running (implying that the target FIM web service isn't returning data for a WSDL request).
    2. The normal Synapse samples appear to require the target service to return WSDL.  There was some documentation/samples mentioning the Synapse way of producing a hard-coded WSDL.
    3. The solution which came nearest to being successful was to produce a compatible WSDL using wsdl2soap and force Synapse to return that WSDL.  The problem I ran into, though, is that some imported schema references were still referencing the FIM/IIS server, which would not be available in this use case.  I gave up trying to rework the WSDL to get around the imported parts.  Getting to that point took about 2 days, and the need for a production solution was only about a week away.
  3. It took 3-4 days to get the basic proxy with in-memory "dead letter" queue support working.  It is a bit embarrassing that I missed something in the docs and did not spend enough time in the samples, which resulted in about 2-3 lost hours fighting an incorrect URL for the proxy in some initial prototyping.  Synapse uses Axis2 and doesn't allow custom URLs, which is somewhat annoying.  During my testing of the target-service-down use case, I was running into null pointer exceptions in the Synapse code.  I was able to track that down, and a minor change to their source and a local build got me farther, but then I was getting Synapse errors about configuration being in the wrong format as it prepared to reprocess messages when I transitioned from target service down to target service up.  After about 5 minutes of looking at the Synapse source producing the error, I had to make the call that this was not the way to go at this time.  I may try to submit the couple of changes I made back to Synapse (if my management is ok with it).  I wish this had been more straightforward.
  4. Some further googling on general "dead letter" handling in many systems returned lots of similar reports of people trying to do something similar and the technology used not fully supporting it.  I know I saw some thread which indicated changes had been checked into some project (Synapse, ServiceMix or Camel??) which sounded like it would come closer to handling this use case, but I don't think a release is out yet.  I don't remember which one it was at the moment.  Will have to revisit in the future - I know this type of need will occur again.
Technology considered:
  • Apache ServiceMix
    • took too much time going through docs; feature rich but complex
    • Not sure we are ready to tackle OSGi
    • Plan to review further later
  • Apache Camel
    • Usable from ServiceMix or directly
    • Started looking into this but research incomplete
  • Mule ESB
    • Need to review license/user agreement further; don't want to get tied up in legal tape
      • I think this solution requires attribution to the original authors (and I am all for that), but I am not sure how to get management/legal authorization for that type of change.  Not sure the solution warrants the pain of working through the red tape.
    • Not sure I want to maintain an Erlang install on servers
  • WebMethods
    • We own this but are trying to migrate off due to ridiculous cost for only minor benefit
    • Reasonably easy to use
  • Oracle ESB
    • We own this as well but cost is excessive
    • Resource intensive & complex
    • Forces certain architectural aspects to meet our needs which increases overall cost
  • WSO2 ESB
    • Need to review license/user agreement; don't want to get tied up in legal tape
Several others were reviewed but I don't remember which ones off the top of my head.

Things to consider:
  1. At some point, we will likely implement some workflow solutions, so we should make sure that any BPEL-type technology integrates cleanly into long-term technology selections.
  2. Long term, I expect large amounts of various application functionality to end up being shared.  I expect this sharing will likely be done by exposing functionality via web services (OK, I'll say SOA).  It doesn't make sense to reinvent the wheel or share only via frameworks.
  3. With a future out-sourced portal in the plans, it makes a lot of sense to use web services locally and expose only a minimal interface to the provider that will host the portal.  This should reduce the security exposure by limiting secure data access to well-defined APIs which can be secured and audited easily.

Wednesday, May 23, 2012

JSF 2 integrated with Struts 2 research

I have been trying to move application development out of the JSP ages for some time.  JSP is well known and numerous developers are capable of supporting it, but in many cases the results are underwhelming.  Given more time, fewer projects and more senior developers, maybe we could do a better job of making sure consistent JSP designs/implementations are always used.  The primary goals are improving quality, reducing errors and easing maintenance/enhancements.  The project workload is only expected to increase over the next two years due to various grants and large projects already in planning/progress.  The solution I decided on is the use of a component model (JSF 2) and related component suites.  This reduces the need for lots of custom JavaScript (and expert JavaScript knowledge) and gains a lot of cross-browser testing from the communities supporting many of the JSF libraries.  The existing and upcoming applications/tools/utilities use Struts 2 and JSP as the base technology, but a "Struts 2 JSF plugin" allows integration into the Struts 2 environment.  There are some posts/comments indicating that Struts 2 with JSF might have some benefits over a purely JSF implementation (the authors preferring the Struts 2 navigation from what I can tell).
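For anyone unfamiliar with the component model, the attraction is that page components bind to plain annotated beans instead of hand-rolled JavaScript and request plumbing.  A trivial, made-up JSF 2 managed bean looks like this; a PrimeFaces component such as <p:inputText value="#{greetingBean.name}"/> binds to it directly:

    // Minimal, made-up JSF 2 managed bean; a Facelets page refers to it as #{greetingBean}.
    import javax.faces.bean.ManagedBean;
    import javax.faces.bean.RequestScoped;

    @ManagedBean(name = "greetingBean")
    @RequestScoped
    public class GreetingBean {
        private String name;

        public String getName() { return name; }
        public void setName(String name) { this.name = name; }

        // Action method for a command component,
        // e.g. <p:commandButton action="#{greetingBean.greet}" value="Go"/>.
        public String greet() {
            return "greeting";   // navigation outcome resolving to greeting.xhtml
        }
    }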

I previously toyed with JSF 1 for some time and was put off a bit by it and by poor tool support in older versions of Eclipse.  I had been experimenting with IceFaces 1.8 while trying to work with Mojarra or Apache MyFaces.  I like aspects of IceFaces but fear getting sucked into needing paid support or the commercial components.  There were also problems just getting it all working.  So I let this research stagnate for a long time.

Upcoming projects inspired me to look into JSF again.  I am pleased with the JSF 2 support in Eclipse (3.7.2) now.  I made a somewhat informed decision to switch from IceFaces to PrimeFaces 3.2.  I also switched to Mojarra.  I am a fan of Glassfish (but use Tomcat more), so having this consistency (by using Mojarra) makes sense for now.  I performed some research into RichFaces, IceFaces 2/3, Apache Trinidad and some others before settling on PrimeFaces.  It appears to have a good selection of components, reasonable performance and a helpful community.  There isn't anything fundamentally wrong with the others, but this seemed like a better fit and initial prototyping was successful earlier than with the other options.

Even though integrating new JSF functionality into a moderately sized Struts 2/JSP application isn't trivial, it is going reasonably well.  I hope to also replace some limited existing functionality with JSF over the next few weeks.  This will provide an opportunity to target internal-only users with the much more refined functionality first and help justify future public-facing updates.

My only major disappointment is the lack of comprehensive examples which show small-scale but somewhat realistic implementations specifically for JSF 2.  There is lots of JSF 1 information which doesn't reflect current best practice for JSF 2.  I expect to be working in the debugger and reviewing generated HTML for a while until I figure out good ways to promote efficient results.

As a side note, I also took a stab at integrating JBoss Seam, but that wasn't going as well as hoped.  Again, there are some good things there, but I prefer something which is an improvement and easy to get functional quickly.  I was spending way too much time trying to figure out Seam issues rather than generating useful work results.  If we ever have to convert to JBoss then this would likely be an easier move, but for the foreseeable future Tomcat 7 is the best fit with JSF 2/Struts 2/PrimeFaces 3, with some Glassfish 3 thrown in here and there.  I dumped WebLogic - a Swiss army knife which is bloated, expensive and hard to integrate into our overall infrastructure.

Thursday, May 17, 2012

Maimer of trees - taking after mom

My daughter enjoys riding dirt bikes and that is something we share.  She has certainly improved over time but she continues to give us some memorable stories.

The first story is about her somehow clipping the neighbor's large tree as she tried to navigate between it and our garden.  The result was her basically going over the handlebars of her little 50cc Suzuki.  She was sore and bruised for a long while, but she got back on shortly after that and continues to ride.  I am proud of her for not giving it up after that - many kids would have.  Now when she gets hurt in other activities we use this as a relative severity scale: "which hurt worse, this or when you hit the tree?"  So far the tree is still the winner - hoping and praying it stays that way.

We now have another tree story.  We planted 7 fruit trees this spring - 3 apple, 2 peach and 2 cherry.  We placed the peach trees near the garden, the cherry trees out front and the apple trees along our road.  We thought the peach trees had died since they showed no signs of leafing out.  One day I was working out in the driveway and watching my daughter go around the house on her motorcycle.  At a certain point I realized that she had not come around and I could not hear her.  I admit to a panicked feeling, but as I started around the house I saw her walking towards me while taking her helmet off.  She didn't look hurt, and then I saw it: the dirt bike lying in the grass and one of the peach trees ripped out of the ground.  Her first words were "sorry".  I was so thankful that she wasn't hurt!  She was concerned that I would be mad about the tree.  I still don't know exactly what happened, but she sometimes goes a little too fast and probably wasn't quite looking where she should have been.  It was a lesson learned the hard way.

A bright note is that I put the "dead" tree back in the ground, wrapped its "boo boos" with burlap and told my daughter that it had seemed dead already.  The funny thing is that the accident was over a month ago and we just noticed leaves finally sprouting from that tree.  It is still wrapped in burlap where the bark got scraped/ripped off.  No idea what the end result will be, but we are claiming that the tree is "fighting through the damage".

Oh, and the "taking after mom" part of this post's subject is because my wife backed our truck into some bushes one time.  No damage to them or us, but for a while we nicknamed her "bush killer".  I'm thinking I should be the one to teach the kids to drive when the time comes.. :)

In the meantime, I should find more time to play "follow the leader" with my daughter on dirt bikes and avoid getting close to large immovable objects.


Arduino microcontrollers are awesome

I bought an Arduino Uno "for my son" at Christmas...  ok, maybe it was for me as well.  I picked up a couple of books and worked through some of the projects with my son.  Really neat stuff.

The biggest problems are time, the fact that I need to get bifocals so I can see the parts well enough, and keeping it interesting for my son.

I would like us to work on some flying projects like a from-scratch multi-copter (either a tricopter or quadcopter).

Examples:
  http://www.blueskyrc.com/index.php?main_page=product_info&products_id=16
  http://aeroquad.com/


Winter Motorcycling

Held Freezer gloves rock.  It is warm now, but I went through most of the winter using some new Held Freezer gloves.  They kept me reasonably warm on my 20 mile commute even in 15-20 degree F weather.  They have mostly replaced my Hippo Hands.  The gloves are a bit bulky, but I mostly adjusted with use - they just require a little more attention when using the turn signals and light switch.

The Hippo Hands did a fine job of preserving warmth by keeping the wind and occasional precipitation off my hands.

I will keep the Hippo Hands around - they may make things more comfortable if I decide to go out below 15 degrees.  The downsides of the Hippo Hands are bulk while riding (it is sometimes hard to find the light switch and turn signal switch), a small reduction in fuel economy I think, and more storage space required than gloves when not in use.

I could have gone with heated gear but am not convinced of its long-term reliability, and when combined with the higher cost, the gloves seemed like a better investment for now.  Of course, I did see some web sites about making your own heated gear - I almost tried it.  Maybe another time, but I will likely install a couple of power sockets first.


MS FIM 2010 - the good and the bad

Identity management is an area which has been growing in importance for some time.  There are numerous commercial products and a few decent open source options.  This post is mainly to document the issues we encountered with MS FIM 2010.
  • Support
    • There is very little FIM knowledge available.  You should expect to use MS consultants and because of the small number that know FIM you may encounter delays.  
    • There is some online information available and I think there is now a course for FIM but there are substantial training needs just to understand the basics. 
    • Premier support and MS consulting do try to do the best they can
  • Architecture
    • It requires SQLServer
      • If you are not a SQLServer shop and are not prepared to support it then this can be a substantial issue.
    • It is complex 
      • Our implementation has various items coded in C# (by an MS consultant), configuration in the FIM portal and configuration in the FIM sync service.  I would not consider our identity needs very complex, but we do have lots of information to manage.
      • Do you need high availability?  Are you prepared to implement/manage/support SQLServer clustering? 
    • Certain processes cannot run concurrently and generate various problems if they do collide.  This turns into a juggling act to handle peak times and provide a fast turn-around on additions/changes.  In the end, large implementations likely have to decide on trade-offs whether they want to or not.
    • It is a somewhat painful fit into our disaster recovery infrastructure.  The root issue may be how the licensing code works in the overall product. In the end, we are not able to do a real DR test and can only document the steps which we think are required.
  • Performance/scalability
    • If you have *lots* of users then performance can be an issue
      • Initial loading of data for hundreds of thousands to around a million users is pretty slow
      • If you experience a high rate of provisioning or updating users then you may have some unpredictably long jobs
    • More server cores != improved performance.  Basically single-threaded processing in certain areas of the application, from what I can tell.
    • We have an ERP as a source which has thousands of tables in the system catalog.  FIM doesn't handle this well at all - likely trying to populate drop-down lists and such.  We ended up routing the ERP data through another DB system (providing views of the target tables over links) so the size of the DB catalog didn't cause problems.
    • The normal response to certain issues is "full sync".  This is sort of the "reboot" equivalent to OS problems.  With a large implementation, this will likely result in processing delays.
  • Stability/Quality
    • We encountered some errors early on which required hacks to get around and required substantial time for MS to provide hot fixes
    • We continue to have some issues which have no obvious source in production and have not been reproduced in non-production.  It is possible that currently unapplied updates and hot fixes may remedy the issues but we are in effect a guinea pig for some updates/fixes.

  • Ease of use.  
    • The goal of a GUI is to improve ease of use, but there are items that, if selected (or not selected) at the right time, can result in massive delays as the system tries to process the "mistake" aspect of a request.
    • The reporting capabilities are horrendous, and MS knows it.  Maybe it will improve in 2010 R2, but I will wait and see.  If you are trying to research a large number of errors, you will find it beyond frustrating that you can't export a list of errors.  In the best case, you have to perform a good number of mouse clicks per entry to cut/paste what you need into Excel, etc.  In the worst cases, there are places where all we could do was take a screen shot or manually transcribe data because you couldn't select it for cut/paste.  Error messages are poor and misleading in some cases.
    • Validation of data is painful.  You are not given (SQLServer) schema information for FIM, so if your primary identity sources are databases you cannot simply write utilities to compare sources.  We are implementing custom utilities to try and compare one of our data sources to some of the data that ends up in Active Directory.  So we are able to compare the beginning and the end, but cannot see intermediate results when they don't match.
  • Organization/Corporate culture
    • If scope is increased above what was initially planned and you have hard deadlines, you will likely regret choices which end up rushed and lack enough forethought.
    • If you don't have some (mostly) dedicated staff then expect a lot of unhappiness
    • If customers (users of the data) are not fully involved early on to help determine the data needs and validate a data dictionary - expect thrashing later.

Related commercial tools:
  • http://www.oracle.com/us/products/middleware/identity-management/overview/index.html
  • http://www.quest.com/identity-management/
  • http://www.bmc.com/products/offering/Identity-Management.html
  • http://www-01.ibm.com/software/tivoli/products/identity-mgr/
  • https://www.pingidentity.com/products/pingone/
  • http://www.openiam.com
Related open source tools:
  • http://www.forgerock.org/
  • http://developers.sun.com/identity/
  • http://shibboleth.net/
  • http://www.jasig.org/cas
  • http://www.josso.org
  • http://shiro.apache.org/
  • http://static.springsource.org/spring-security/site/
  • http://openid.net/
  • http://www.sourceid.org/
As a note to self, I should keep an eye on the items at forgerock.org and maybe consider a proof of concept in case FIM doesn't work out.



Monday, May 14, 2012

Gardening - The Hoss

I like to garden and my family certainly likes the fresh vegetables.  Our garden has been steadily growing in size and is around 25 x 50 feet now.  That may sound like a good size, but it never seems large enough as we find new things to plant.  A downside to the garden is that there is usually a point in the summer where we can't weed for lack of time, and we typically don't keep up with the weeds as well as we would like.  The result is usually an Amazon-like environment covered in morning glory and many other annoying plants.  To help combat the weeds and also make better use of our existing garden space, I purchased a Hoss double wheel cultivator/hoe.

http://easydigging.com/Garden_Cultivator/wheel_hoe_push_plow.html

I have had it for about a week and am quite pleased.  It is lighter weight than I expected but with care should last a long time.  I have tried out the cultivator teeth and oscillating hoe so far - I will try out the sweeps another day.  The cultivator teeth work reasonably well in our clay soil (at least where we have removed most of the rocks over the years).  The hoe worked well when I tried it (a day or so after rain).  It seems that with regular use they should keep the weeds down in the main aisles, and it is MUCH faster than a plain hand-held hoe.  My comment about improved use of space is due to the fact that we can put an extra row between existing rows (resulting in a row spacing of about 18 inches).  Previously we had rows spaced about 36 inches apart to allow the tiller to run between them.

Another benefit of the Hoss is some good physical exercise - something sorely lacking in my day job.

All in all this was a very good purchase and I wish I had done it sooner.  I may consider the seeder attachment and the furrowing/hilling plow next summer.  The plow will depend on how our first attempt at potatoes turns out this summer.  The seeder is not really a "need" but may prevent some unneeded extra back pain - we will have to see what other bills crop up next summer.




Tuesday, May 8, 2012

Jenkins CI SQLTool Integration idea

I find that Jenkins works well for simple batch tasks, but it would certainly be nice if there were an integration with the HSQLDB SQLTool so that a "SQLTool" job type could be defined, maybe getting its connections from JNDI.  Instead of writing Groovy code to execute queries, we could then define the job as a SQLTool-compatible script (either defining the script directly in Jenkins or specifying a file).  This would cover a good number of our utility batch jobs.
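For illustration, the kind of job definition I have in mind would amount to little more than this.  The urlid, rc file path and script contents are invented, and the exact SqlTool option syntax should be checked against the HSQLDB docs:

    # Hypothetical Jenkins shell step: run a SQLTool script against the connection
    # defined by the "reportdb" urlid in the rc file (flag syntax worth double-checking).
    java -jar hsqldb-sqltool.jar --rcFile=/opt/jenkins/sqltool.rc reportdb nightly-cleanup.sql

    -- nightly-cleanup.sql (a made-up example of such a "SQLTool job")
    DELETE FROM session_log WHERE created < DATEADD('day', -30, CURRENT_TIMESTAMP);
    COMMIT;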

No time at the moment though.  Someday.

Mazda Protege 5 window mechanism replacement and prayers

Have you ever had something break at a bad time and ended up getting all stressed out about it?  Well, this happens to me even though the Bible says we should not be anxious and should pray about all things (Philippians 4:6).

This is something from last summer.  I have thought about blogging on it for some time but realistically don't have much time to blog normally.  Today is my catch up.

One day I went out to our car and realized that the window was down even though I knew it hadn't been left that way.  Of course the first thought was that someone had broken in, but on a quick look it was clear that the electric window lift mechanism had broken, causing the window to drop into the door.

Of course, I did not want to take the car to the dealer (I hate spending 2x the money for something that I can do with a little encouragement).  On the same note, there is no room in the garage, so rain would be a real problem.  Well, after a pow-wow with my wonderful wife, I decided to try to fix it myself, but first got out the plastic sheeting and duct tape.  I took the door panel apart and went googling for replacement parts.  On a popular site, I ordered what I thought was the correct part, which arrived in about a week.  It didn't take long to realize this was not the part for our vehicle year.  It turned out that I ordered the wrong part because I didn't notice a warning on the site providing the part.  After fussing at myself for a good amount of time, I finally was able to find the correct part and get it ordered.  Another week went by.  Things were getting a bit more stressful as I got concerned about the weather and the duct tape, which was slowly slipping.

Finally the part arrived, and my son and I spent about 4-5 hours getting it replaced (which was not straightforward since the wiring was different).  I ended up googling many sites until I found one indicating that simply ignoring several wires was OK and that polarity was apparently an issue.  Remember that not everything on the internet is correct though!  After redoing the wiring 2-3 times (so the window went down instead of up when pressing the down button) and taking the door panel off about 5 more times (testing, forgetting to reattach parts, or figuring out why I had "spare parts"), we finally had it all working.  Yeah!

Anyways, the prayer part of the subject is that I should have been praying about even things like this from the start instead of just starting now... where now I pray in thankfulness to Jesus for the time I had working together with my son that day.  I wish that more than my hindsight was 20/20.  Thankful nonetheless though.



Monday, May 7, 2012

DL650 VStrom Motorcycle maintenance success

I like to attempt to fix things myself, and after 26k miles it was time to replace the front and rear sprockets, chain, spark plugs and air filter.  I ride year round down to about 15 degrees as long as it isn't icy or expected to rain much.  This is mainly to save gas and reduce miles on our primary vehicles - I don't enjoy riding around with distracted drivers everywhere.

I had previously replaced the spark plugs and air filter without major incident, but the rest was new.  Not having a motorcycle lift, but knowing that appropriate tools tend to make or break activities like this, I decided to install an electric hoist in the garage.  Floor space is getting sparse, and using a hoist allows me to remove both wheels for things like tire replacement or simply to ease chain maintenance.  The hoist worked very well in general, although there was a bit of sway to deal with.

Since I am trying to be somewhat green, I replaced the throw-away air filter with a washable K&N version which is supposed to flow more air.  I am not sure whether the touted air flow will make a difference, but not having to throw away more stuff made it a worthwhile investment.

I replaced the spark plugs with Iridium versions which I hope last longer and may provide a small power boost.  The previous plugs actually still look to be in pretty good shape.

The sprocket replacements were very straightforward.  I switched the front sprocket from 15 teeth to 16 and the rear from 47 to 44 teeth.  This reduced the RPM to around 4300 at an indicated 65 MPH, from just about 5000 RPM.  It is more relaxed, and I hope it improves the fuel economy a bit.
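As a sanity check on those numbers: overall gearing went from 47/15 ≈ 3.13 down to 44/16 = 2.75, so at the same road speed the engine should turn about 5000 x (2.75 / 3.13) ≈ 4390 RPM - right in line with the ~4300 indicated once tach and speedometer error are allowed for.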

The chain replacement was the more tedious item.  I had never worked with a "continuous chain" before.  This is a DID X link chain where you must press and rivet the link on (versus simply installing a clip on the side of the master link).  A bolt cutter worked nicely to remove the old chain.  I ended up ordering the DID KM501E tool since other methods discussed in various blogs made me a bit nervous.  The results with the appropriate tools are quite nice.

On top of the normal maintenance, I decided to install the Kouba links which had been lying around the garage for about 4 years.  These links reduce the height of the rear end by about 1 1/8 inches.  I lowered the front end about .9 inches.  I can now touch nearly flat-footed, which is nice but almost odd feeling after so long.  I wish I had done this a long time ago.

So far, I have not noticed much difference in power after the gearing change.  It could be that the better spark plugs and air filter made a small difference, but I don't have any data to back that up.

My only real complaint through all this is that the plastic "rivets" which attach parts of the fairing together are very annoying on a good day.

Next maintenance will involve new brake pads which I noticed, during all this, are getting a bit thin.

I do want to credit the various contributors over at http://www.stromtrooper.com for great information which gave me enough confidence to tackle these tasks (years later than original posts in some cases).

UPDATE 2012/08/23:
The above changes have resulted in an average of 65+ MPG with a high of 67.7 MPG, compared to a previous average of around 58-60 MPG.  That gives me a range of around 380+ miles on a tank of gas.

I have a slight (single) click when braking which I haven't tracked down yet - it seems likely related either to the lowering with the Kouba links or to a bolt that needs to be slightly tighter.