Wednesday, May 23, 2012

JSF 2 integrated with Struts 2 research

I have been trying to move application development out of the JSP ages for some time.  JSP is well known and numerous developers are capable of supporting it, but in many cases the results are underwhelming.  Given more time, fewer projects, and more senior developers, maybe we could do a better job of making sure consistent JSP designs/implementations are always used.  The primary goals are improving quality, reducing errors, and easing maintenance/enhancements.  The project workload is only expected to increase over the next two years due to various grants and large projects already in planning/progress.  The solution I decided on is a component model (JSF 2) and related component suites.  This reduces the need for lots of custom JavaScript (and expert JavaScript knowledge) and gains a lot of cross-browser testing from the communities supporting many of the JSF libraries.  The existing and upcoming applications/tools/utilities use Struts 2 and JSP as the base technology, but a "Struts 2 JSF plugin" allows JSF to be integrated into the Struts 2 environment.  There are some posts/comments indicating that Struts 2 with JSF might have some benefits over a purely JSF implementation (mostly authors preferring Struts 2 navigation, from what I can tell).
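For reference, wiring an action through the plugin looks roughly like the fragment below.  This is only a sketch based on my reading of the plugin documentation - the interceptor names (jsfSetup, jsfExec), the jsf result type, and the action/class names are assumptions to verify against the plugin version actually in use:

```xml
<!-- Hypothetical struts.xml fragment for the Struts 2 JSF plugin -->
<package name="jsf-demo" extends="struts-default" namespace="/jsf">
  <interceptors>
    <interceptor-stack name="jsfFullStack">
      <interceptor-ref name="basicStack" />
      <!-- the next two are supplied by the JSF plugin -->
      <interceptor-ref name="jsfSetup" />
      <interceptor-ref name="jsfExec" />
    </interceptor-stack>
  </interceptors>
  <default-interceptor-ref name="jsfFullStack" />

  <action name="employee" class="example.EmployeeAction">
    <!-- the plugin's "jsf" result type hands rendering to the JSF lifecycle -->
    <result name="success" type="jsf" />
  </action>
</package>
```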

I previously toyed with JSF 1 for some time and was put off a bit by it and by the poor tool support in older versions of Eclipse.  I had been toying with IceFaces 1.8 while trying to work with Mojarra or Apache MyFaces.  I like aspects of IceFaces but fear getting sucked into needing paid support or the commercial components.  There were also problems just getting it all working, so I let this research stagnate for a long time.

Upcoming projects inspired me to look into JSF again.  I am pleased with the JSF 2 support in Eclipse (3.7.2) now.  I made a somewhat informed decision to switch over to PrimeFaces 3.2 from IceFaces.  I also switched over to Mojarra.  I am a fan of Glassfish (but use Tomcat more), so having this consistency (by using Mojarra, the implementation bundled with Glassfish) makes sense for now.  I had performed some research into RichFaces, IceFaces 2/3, Apache Trinidad, and some others before settling on PrimeFaces.  It appears to have a good selection of components, reasonable performance, and a helpful community.  There isn't anything fundamentally wrong with the others, but this seemed like a better fit and initial prototyping was successful earlier than with the other options.

Even though integrating new JSF-based functionality into a moderately sized Struts 2/JSP application isn't trivial, it is going reasonably well.  I hope to also replace some limited existing functionality with JSF over the next few weeks.  This will let me target internal-only users with the much more refined results first and help justify future updates which are public facing.

My only major disappointment is the lack of comprehensive examples showing small-scale but somewhat realistic implementations specifically for JSF 2.  There is lots of JSF 1 information which doesn't reflect current best practice for JSF 2.  I expect to be working in the debugger and reviewing generated HTML for a while until I figure out good ways to promote efficient results.
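In the JSF 2/PrimeFaces 3 world, even a minimal page is at least a useful starting point.  A hypothetical Facelets sketch (the bean name, properties, and components are made up for illustration):

```xml
<!-- index.xhtml: minimal Facelets page, JSF 2.x + PrimeFaces 3.x namespaces -->
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:h="http://java.sun.com/jsf/html"
      xmlns:p="http://primefaces.org/ui">
<h:head><title>Demo</title></h:head>
<h:body>
  <h:form>
    <!-- demoBean is a hypothetical request-scoped managed bean -->
    <p:inputText value="#{demoBean.name}" />
    <p:commandButton value="Save" action="#{demoBean.save}" update="@form" />
  </h:form>
</h:body>
</html>
```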

As a side note, I also took a stab at integrating JBoss Seam, but that wasn't going as well as hoped.  Again, there are some good things there, but I prefer something which is an improvement and easy to get functional quickly.  I was spending way too much time trying to figure out Seam issues rather than generating useful work results.  If we ever have to convert to JBoss then this would likely be an easier move, but in the foreseeable future Tomcat 7 is the best fit with JSF 2/Struts 2/PrimeFaces 3, with some Glassfish 3 thrown in here and there.  I dumped WebLogic - a Swiss army knife which is bloated, expensive, and hard to integrate into our overall infrastructure.

Thursday, May 17, 2012

Maimer of trees - taking after mom

My daughter enjoys riding dirt bikes and that is something we share.  She has certainly improved over time but she continues to give us some memorable stories.

The first story is about her somehow clipping the neighbor's large tree as she tried to navigate between it and our garden.  The result was her basically going over the handlebars of her little 50cc Suzuki.  She was sore and bruised for a long while, but she got back on shortly after that and continues to ride.  I am proud of her for not giving it up after that - many kids would have.  Now when she gets hurt in other activities we use this as a way to get a relative severity level: "which hurt worse, this or when you hit the tree?"  So far the tree is still the winner - hoping and praying it stays that way.

We now have another tree story.  We planted 7 fruit trees this spring - 3 apple, 2 peach, and 2 cherry.  We placed the peach trees near the garden, the cherry trees out front, and the apple trees along our road.  We thought the peach trees had died since they showed no signs of leafing out.  One day I was working out in the driveway and watching my daughter go around the house on her motorcycle.  At a certain point I realized that she had not come around and I could not hear her.  I admit to a panicked feeling, but as I started around the house I saw her walking towards me while taking her helmet off.  She didn't look hurt, and then I saw it: the dirt bike lying in the grass and one of the peach trees ripped out of the ground.  Her first words were "sorry".  I was so thankful that she wasn't hurt!  She was concerned that I would be mad about the tree.  I still don't know exactly what happened, but she sometimes goes a little too fast and probably wasn't quite looking where she should have been.  It was a lesson learned the hard way.

A bright note is that I put the "dead" tree back in the ground, wrapped its "boo boos" with burlap, and told my daughter that it seemed dead already.  The funny thing is that the accident was over a month ago and we just noticed leaves finally sprouting from that tree.  It is still wrapped in burlap where the bark got scraped/ripped down.  No idea what the end result will be, but we are claiming that the tree is "fighting through damage".

Oh, and the "taking after mom" part of this post's subject is because my wife backed our truck into some bushes one time.  No damage to them or us, but for a while we nicknamed her "bush killer".  I'm thinking I should be the one to teach the kids to drive when the time comes... :)

In the meantime, I should find more time to play "follow the leader" with my daughter on dirt bikes and avoid getting close to large immovable objects.


Arduino microcontrollers are awesome

I bought an Arduino Uno "for my son" at Christmas... OK, maybe it was for me as well.  I picked up a couple of books and worked through some of the projects with my son.  Really neat stuff.

The biggest problems are time, the fact that I need to get bifocals so I can see the parts well enough, and keeping it interesting for my son.

I would like us to work on some flying projects, like a from-scratch multicopter (either a tricopter or quadcopter).

Examples:
  http://www.blueskyrc.com/index.php?main_page=product_info&products_id=16
  http://aeroquad.com/


Winter Motorcycling

Held Freezer gloves rock.  It is warm now, but I went through most of the winter using some new Held Freezer gloves.  They kept me reasonably warm for my 20 mile commute even in 15-20 degree F weather.  They have mostly replaced my Hippo Hands.  The gloves are a bit bulky, but I mostly adjusted to that with use - it just requires a little more attention to use the turn signals and light switch.

The Hippo Hands did a fine job preserving warmth by keeping the wind and occasional precipitation off my hands.

I will keep the Hippo Hands around - they may make things more comfortable if I decide to go out below 15 degrees.  The downsides of the Hippo Hands are bulk during riding (hard to find the light switch and turn signal switch sometimes), a slight reduction in fuel economy I think, and more storage required than gloves when not in use.

I could have gone with heated gear but am not convinced of its long term reliability, and when combined with the higher cost the gloves seemed like a better investment for now.  Of course, I did see some web sites talking about making your own heated gear - almost tried it.  Maybe I will another time, but I will likely install a couple of power sockets first.


MS FIM 2010 - the good and the bad

Identity management is an area which has been growing in importance for some time.  There are numerous commercial products and a few decent open source options.  This post is mainly to document the issues we encountered with MS FIM 2010.
  • Support
    • There is very little FIM knowledge available.  You should expect to use MS consultants, and because of the small number that know FIM you may encounter delays.
    • There is some online information available, and I think there is now a course for FIM, but there are substantial training needs just to understand the basics.
    • Premier support and MS consulting do try to do the best they can.
  • Architecture
    • It requires SQLServer
      • If you are not a SQLServer shop and are not prepared to support it then this can be a substantial issue.
    • It is complex 
      • Our implementation has various items coded in C# (by a MS consultant), configured in the FIM portal, and set up in the FIM sync service.  I would not consider our identity needs very complex, but we do have lots of information to manage.
      • Do you need high availability?  Are you prepared to implement/manage/support SQLServer clustering? 
    • Certain processes cannot run concurrently and generate various problems if they do collide.  This turns into a juggling act to handle peak times and provide a fast turn-around on additions/changes.  In the end, large implementations likely have to decide on trade-offs whether they want to or not.
    • It is a somewhat painful fit into our disaster recovery infrastructure.  The root issue may be how the licensing code works in the overall product. In the end, we are not able to do a real DR test and can only document the steps which we think are required.
  • Performance/scalability
    • If you have *lots* of users then performance can be an issue
      • Initial loading of data for hundreds of thousands to around a million users is pretty slow
      • If you experience a high rate of provisioning or updating users then you may have some unpredictably long jobs
    • More server cores != improved performance.  There is basically single-threaded processing in certain areas of the application, from what I can tell.
    • We have an ERP as a source which has thousands of tables in the system catalog.  FIM doesn't handle this well at all - it is likely trying to populate drop-down lists and such.  We ended up routing the ERP data through another DB system (providing views to the target tables over links) so the size of the DB catalog didn't cause problems.
    • The normal response to certain issues is "full sync".  This is sort of the "reboot" equivalent to OS problems.  With a large implementation, this will likely result in processing delays.
  • Stability/Quality
    • We encountered some errors early on which required hacks to get around and required substantial time for MS to provide hot fixes
    • We continue to have some issues which have no obvious source in production and have not been reproduced in non-production.  It is possible that currently unapplied updates and hot fixes may remedy the issues but we are in effect a guinea pig for some updates/fixes.

  • Ease of use
    • The goal of a GUI is to improve ease of use, but there are items that, if selected (or not selected) at the wrong time, can result in massive delays as the system tries to process the "mistake" aspect of a request.
    • The reporting capabilities are horrendous and MS knows it.  Maybe it will improve in 2010 R2, but I will wait and see.  If you are trying to research a large number of errors, you will find it beyond frustrating that you can't export a list of errors.  In the best case, you have to perform a good number of mouse clicks per entry to cut/paste what you need into Excel, etc.  In the worst cases, there are places where all we could do was take a screen shot or manually transcribe data because you couldn't select it for cut/paste.  Error messages are poor and misleading in some cases.
    • Validation of data is painful.  You are not given (SQLServer) schema information for FIM, so if your primary identity sources are databases you cannot simply write utilities to compare sources.  We are implementing custom utilities to try and compare one of our data sources to some data that ends up in Active Directory.  So we are able to compare the beginning and the end but cannot see intermediate results when they don't match.
  • Organization/Corporate culture
    • If scope is increased above what was initially planned and you have hard deadlines, you will likely regret choices which end up rushed and lack enough forethought.
    • If you don't have some (mostly) dedicated staff then expect a lot of unhappiness.
    • If customers (users of the data) are not fully involved early on to help determine the data needs and validate a data dictionary, expect thrashing later.
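On the data-validation point: once you can export flat lists from both ends, even very simple tooling catches a lot.  A hypothetical sketch of the begin/end comparison (the "username,mail" attribute set, file names, and sample rows are made up; real exports from the source system and Active Directory are obviously site-specific):

```shell
# Sample rows standing in for real "username,mail" exports from each end
printf 'alice,alice@example.edu\nbob,bob@example.edu\n'     > source_users.csv
printf 'alice,alice@example.edu\ncarol,carol@example.edu\n' > ad_users.csv

# comm requires sorted input
sort -u source_users.csv > source.sorted
sort -u ad_users.csv     > ad.sorted

# -23: lines only in the source (never provisioned)
# -13: lines only in AD (orphans or mismatched attributes)
comm -23 source.sorted ad.sorted > missing_in_ad.txt
comm -13 source.sorted ad.sorted > orphaned_in_ad.txt
```

This only compares endpoints; as noted above, the intermediate FIM state stays opaque.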

Related commercial tools:
  • http://www.oracle.com/us/products/middleware/identity-management/overview/index.html
  • http://www.quest.com/identity-management/
  • http://www.bmc.com/products/offering/Identity-Management.html
  • http://www-01.ibm.com/software/tivoli/products/identity-mgr/
  • https://www.pingidentity.com/products/pingone/
  • http://www.openiam.com
Related open source tools:
  • http://www.forgerock.org/
  • http://developers.sun.com/identity/
  • http://shibboleth.net/
  • http://www.jasig.org/cas
  • http://www.josso.org
  • http://shiro.apache.org/
  • http://static.springsource.org/spring-security/site/
  • http://openid.net/
  • http://www.sourceid.org/
As a note to self, I should keep an eye on the items at forgerock.org and maybe consider a proof of concept in case FIM doesn't work out.



Monday, May 14, 2012

Gardening - The Hoss

I like to garden and my family certainly likes the fresh vegetables.  Our garden has been steadily growing in size and is around 25 x 50 feet now.  That may sound like a good size, but it never seems large enough as we find new things to plant.  A downside to the garden is that there is usually a point in the summer where we can't weed for lack of time, and we typically don't keep up with the weeds as well as we would like.  The result is usually an Amazon-like environment covered in morning glory and many other annoying plants.  To help combat the weeds and also make better use of our existing garden space, I purchased a Hoss double wheel cultivator/hoe.

http://easydigging.com/Garden_Cultivator/wheel_hoe_push_plow.html

I have had it for about a week and am quite pleased.  It is lighter weight than I expected, but with care it should last a long time.  I have tried out the cultivator teeth and oscillating hoe so far - I will try out the sweeps another day.  The cultivator teeth work reasonably well in our clay soil (at least where we have removed most of the rocks over the years).  The hoe worked well when I tried it (a day or so after rain).  It seems that with regular use they should keep the weeds down in the main aisles, and it is MUCH faster than a plain hand held hoe.  My comment about improved use of space is due to the fact that we can now put an extra row between existing rows (resulting in a row spacing of about 18 inches).  Previously we had rows spaced about 36 inches apart to allow the tiller to run between them.

Another benefit of the Hoss is some good physical exercise - something sorely lacking in my day job.

All in all this was a very good purchase and I wish I had done it sooner.  I may consider the seeder attachment and furrowing/hilling plow next summer.  The plow will depend on how our first attempt at potatoes turns out this summer.  The seeder is not really a "need" but may prevent some unneeded extra back pain - I will have to see what other bills crop up next summer.




Tuesday, May 8, 2012

Jenkins CI SQLTool Integration idea

I find that Jenkins works well for simple batch tasks, but it would certainly be nice if there were an integration with the HSQLDB SqlTool so that a "SQLTool" job type could be defined, and maybe it could get connections from JNDI.  Instead of writing Groovy code to execute queries, we could then define the job as a SqlTool-compatible script (either defined directly in Jenkins or specified as a file).  This would cover a good number of our utility batch jobs.
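Until something like that exists, the closest equivalent is a plain "execute shell" build step.  A rough sketch of what such a plugin would be generating under the covers (the urlid, connection details, and paths are all made up; check the SqlTool docs for the exact rc-file and command-line syntax):

```shell
# Hypothetical Jenkins "execute shell" step: write an rc file, run a script.
cat > sqltool.rc <<'EOF'
urlid reports
url jdbc:hsqldb:hsql://dbhost/appdb
username batch
password secret
EOF
chmod 600 sqltool.rc

# A "SQLTool" job type could generate exactly this from a job definition,
# pulling the connection info from JNDI instead of a checked-in rc file.
# java -jar /opt/hsqldb/lib/sqltool.jar --rcFile=sqltool.rc reports nightly.sql
```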

No time at the moment though.  Someday.

Mazda Protege 5 window mechanism replacement and prayers

Have you ever had something break at a bad time and ended up getting all stressed out about it?  Well, this happens to me even though the Bible says we should not be anxious and should pray about all things (Philippians 4:6).

This is something from last summer.  I have thought about blogging on it for some time but realistically don't have much time to blog normally.  Today is my catch up.

One day I went out to our car and realized that the window was down even though I knew it wasn't left that way.  Of course the first thought was that someone broke in but on a quick look it was clear that the electric window lift mechanism broke causing the window to drop into the door.

Of course, I did not want to take the car to the dealer (I hate spending 2x the money for something that I can do with a little encouragement).  On the same note, there is no room in the garage, so rain would be a real problem.  Well, after a pow-wow with my wonderful wife, I decided to try to fix it myself, but first got out the plastic sheeting and duct tape.  I took the door panel apart and went googling for replacement parts.  On a popular site, I ordered what I thought was the correct part, which arrived in about a week.  It didn't take long to realize this was not the part for our vehicle year.  It turned out that I ordered the wrong part because I didn't notice a warning on the site providing the part.  After fussing at myself for a good amount of time, I finally was able to find the correct part and get it ordered.  Another week went by.  Things got a bit more stressed as I grew concerned about the weather and the duct tape, which was slowly slipping.

Finally the part arrived, and my son and I spent about 4-5 hours getting it replaced (which was not straightforward since the wiring was different).  I ended up googling many sites until I found one indicating that simply ignoring several wires was OK and that polarity was apparently an issue.  Remember that not everything on the internet is correct though!  After redoing the wiring 2-3 times (so the window went down instead of up when pressing the down button) and taking the door panel off about 5 times (testing, forgetting to reattach parts, or figuring out why I had "spare parts"), we finally had it all working.  Yeah!

Anyway, the prayer part of the subject is that I should have been praying about even things like this from the start instead of just starting now... where now I pray in thankfulness to Jesus for the time I had working together with my son that day.  I wish that more than my hindsight was 20/20.  Thankful nonetheless though.



Monday, May 7, 2012

DL650 VStrom Motorcycle maintenance success

I like to attempt to fix things myself, and after 26k miles it was time to replace the front and rear sprockets, chain, spark plugs, and air filter.  I ride year round down to about 15 degrees as long as it isn't icy or expected to rain much.  This is mainly to save gas and reduce miles on our primary vehicles - I don't enjoy riding around with distracted drivers everywhere.

I had previously replaced the spark plugs and air filter without major incident but the rest was new.  Not having a motorcycle lift but knowing that appropriate tools tend to make or break activities like this - I decided to install an electric hoist in the garage.  Floor space is getting sparse and using a hoist allows me to remove both wheels for things like tire replacement or simply to ease chain maintenance.  The hoist worked very well in general although there was a bit of sway to deal with.

Since I am trying to be somewhat green, I replaced the throw away air filter with a washable K&N version which is supposed to flow more air.  Not sure whether the touted air flow will make a difference but not having to throw away more stuff made it a worthwhile investment.

I replaced the spark plugs with Iridium versions which I hope last longer and may provide a small power boost.  The previous plugs actually still look to be in pretty good shape.

The sprocket replacements were very straightforward.  I switched the front sprocket from 15 teeth to 16 and the rear from 47 to 44 teeth.  This reduced the RPM to around 4300 at an indicated 65 MPH, from just about 5000 RPM.  It is more relaxed, and I hope to improve the fuel economy a bit.
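The observed drop lines up with what the gearing math predicts.  A quick back-of-the-envelope check (the 65 MPH figure is from the speedometer, so some indicated-speed error is expected):

```shell
# Engine speed at a given road speed scales with the final-drive ratio,
# which went from 47/15 to 44/16 with the sprocket swap.
# Predicted new RPM = 5000 * (15/47) / (16/44)
NEW_RPM=$(awk 'BEGIN { printf "%.0f", 5000 * (15/47) / (16/44) }')
echo "$NEW_RPM"   # ~4388, in the ballpark of the observed ~4300
```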

The chain replacement was the more tedious item.  I have never worked with a "continuous chain" before.  This is a DID X-link chain where you must press and rivet the link on (versus simply installing a clip on the side of the master link).  A bolt cutter worked nicely to remove the old chain.  I ended up ordering the DID KM501E tool since other methods discussed in various blogs made me a bit nervous.  The results with the appropriate tools are quite nice.

On top of the normal maintenance, I decided to install the Kouba links which had been laying around the garage for about 4 years.  These links reduce the height of the rear end by about 1 1/8 inches.  I lowered the front end about 0.9 inches.  I can now touch nearly flat footed, which is nice but almost odd feeling after so long.  I wish I had done this a long time ago.

So far, I have not noticed much difference in power after the gearing change.  It could be that the better spark plugs and air filter made a small difference, but I don't have any data to back that up.

My only real complaint through all this is that the plastic "rivets" which attach parts of the fairing together are very annoying on a good day.

Next maintenance will involve new brake pads which I noticed, during all this, are getting a bit thin.

I do want to credit the various contributors over at http://www.stromtrooper.com for great information which gave me enough confidence to tackle these tasks (years later than original posts in some cases).

UPDATE 2012/08/23:
The above changes have resulted in an average of 65+ MPG with a high of 67.7 MPG, compared to a previous average of around 58-60 MPG.  That gives me a range of around 380+ miles on a tank of gas.

I have a slight (single) click when braking which I haven't tracked down yet - it seems likely related to either the lowering with the Kouba links or a bolt that needs to be slightly tighter.






Application Server Deployment

What are the drivers of an organization's deployment strategy?
The ones that come to mind are:
  1. Specific technology investments such as WebLogic (think of cluster deployments).
  2. Generic processes based on common technology (think of Tomcat Manager).
  3. Custom processes, infrastructure, and configuration to meet specific needs.
Which is appropriate? I think the answer is "It depends".

If an organization has an investment in WebLogic or other costly technology and the deployment capabilities meet its needs, then maybe there isn't a need to do anything different than the product-dictated solution. Using WebLogic or a similar solution likely means there are multiple tiers and maybe load balancing of one or more tiers over multiple servers.  That type of environment is substantially more complex to set up and maintain than a single tier/server solution.  If the organization is staffed appropriately and has the proper training with the specific tools, then doing something different seems unwise unless there is a good reason.  The main downside to this solution is cost - for both the tool and either training or hiring of the appropriate staff.  If the hosted application is critical then the costs multiply as well, since you likely would want multiple staff trained to provide coverage during vacations or other unplanned emergencies.

Many small/mid-size organizations likely use Apache Tomcat or something similar.  There are reasonably well documented install and deployment procedures available on-line or in various books.  For organizations with smaller or less trained IT departments this is certainly a workable solution.  Most developers are more than capable of getting a default install up and running and managing it.  Applications can be scaled by load balancing multiple servers, which starts to increase the complexity but is likely manageable at smaller scales.  After a certain point, scaling by adding more load balanced servers and using the default deployment procedures starts to become fragile.  Examples of the resulting fragility might include failed deployments, individual servers returning wrong results, servers accessing wrong resources, or what can best be described as flaky behavior.  These problems likely stem from things such as improper change management, rushed planning, lack of reliable deployment tooling/procedures, tight maintenance windows, etc.

Are there alternatives to the above, and why would an organization use them?  Yes, there are alternatives to the above scenarios, and I will document one here.  This is only an alternative and is not necessarily the best one for every organization.  Each organization has to evaluate its resources, risks, and requirements to determine what is appropriate.  This alternative focuses on reducing deployment errors, speeding up the deployment process in some cases, and some other benefits I will document.

The assumptions for this alternative are:
  1. mid-size organization
  2. understaffed IT with wide duties
  3. critical applications with at least a moderate rate of change
  4. basic web application (multiple load balanced web servers and 1 DB server)
The basic technology used for this particular solution are:
  • NAS NFS storage
  • Apache Tomcat
  • Linux based Web servers
Many deployment errors can be traced back to mistakes made while manually updating configuration data across multiple servers.  The basic solution is to externalize the primary configuration data to a location outside of the web application, where it resides on NFS storage.  Primary configuration data means data which differs between the production and non-production environments - such as DB connection/pooling info, special URLs, etc.  By moving the data outside of the application and web server and storing it in a single location, you reduce the touch points during future deployments.  If the primary configuration doesn't need to change, then you don't need to edit those files during normal deployments.  This reduces the risk of unintended changes.  This can also speed up deployments, since you may only require a restart of the web server processes once the shared configuration change is made.
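As a sketch, the externalized location can be handed to the application through a system property set in Tomcat's setenv.sh.  The mount point (/nfs/appconfig) and the property name (app.config.dir) are made-up examples here - the application code has to be written to read whatever property you choose:

```shell
# Hypothetical $CATALINA_BASE/bin/setenv.sh fragment.
# /nfs/appconfig and app.config.dir are illustrative names, not anything
# Tomcat itself defines - the application must read the property itself.
CONFIG_DIR=/nfs/appconfig/myapp/prod   # single shared copy on NFS

CATALINA_OPTS="$CATALINA_OPTS -Dapp.config.dir=$CONFIG_DIR"
export CATALINA_OPTS
```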

Another improvement to the overall environment is installing the Java JDK on NFS storage and having a Linux soft-link (in a location on the NFS storage) point to the current JDK in use.  Any later JDK upgrade is faster because you only need to do one install, change one soft-link, and restart the application.  The last two steps are the only ones where the application should need to be down.  If you have a lot of servers to deploy to, this can be a real time saver.  This also saves some space, but storage is relatively cheap so that benefit is likely minimal.
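The soft-link dance itself is trivial.  The following sketch shows the idea, using a temp directory in place of the real NFS mount and made-up JDK version names:

```shell
# Stand-in for the NFS mount, e.g. /nfs/java in a real setup
NFS_ROOT=$(mktemp -d)

# One-time installs of each JDK version (normally unpacked tarballs)
mkdir -p "$NFS_ROOT/jdk1.6.0_32" "$NFS_ROOT/jdk1.7.0_04"

# Every server points JAVA_HOME at the "current" link, never a versioned dir
ln -sfn "$NFS_ROOT/jdk1.6.0_32" "$NFS_ROOT/current"
export JAVA_HOME="$NFS_ROOT/current"

# Upgrading all servers later is a single link change plus restarts
ln -sfn "$NFS_ROOT/jdk1.7.0_04" "$NFS_ROOT/current"
```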

Another idea is to install the Apache Tomcat software on NFS using the delivered ability to specify CATALINA_BASE and CATALINA_HOME separately.  After doing this, set up a Linux soft-link (in a location on the NFS storage) to the current Tomcat.  Each web server references the soft-link instead of the specific versioned directory that is typically referenced.  The idea is to extract the read-only aspects (code) of Tomcat out to read-only NFS storage while the individual web servers keep the application-specific Tomcat configuration files and directories (conf, logs, webapps, work).  This can speed up the process of upgrading to a newer Tomcat version - especially for minor upgrades.  I do recommend cleaning out any unused functionality from the Tomcat server.xml file.
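A sketch of the split, again with temp directories standing in for the real NFS mount and local instance directory, and a made-up Tomcat version number:

```shell
# Stand-in for the NFS mount holding the shared, read-only Tomcat code
NFS=$(mktemp -d)
mkdir -p "$NFS/apache-tomcat-7.0.27"          # one shared install
ln -sfn "$NFS/apache-tomcat-7.0.27" "$NFS/tomcat-current"

# Each web server keeps only its instance-specific directories locally
INSTANCE=$(mktemp -d)                         # e.g. /opt/tomcat-instance
mkdir -p "$INSTANCE/conf" "$INSTANCE/logs" "$INSTANCE/webapps" \
         "$INSTANCE/work" "$INSTANCE/temp"

# Tomcat's startup scripts honor this split
export CATALINA_HOME="$NFS/tomcat-current"    # shared, read-only code
export CATALINA_BASE="$INSTANCE"              # local config, logs, webapps
```

A minor Tomcat upgrade then becomes: unpack the new version next to the old one on NFS, repoint the tomcat-current link, and restart the instances.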

If the web servers used above are virtual machines, creating more servers to increase scalability may only require cloning an existing server and making sure it is in the load balanced pool.

These same ideas can be applied to non-production applications and some aspects can be shared between applications (like the read-only NFS storage containing Tomcat and/or JDK).  As the number of applications and servers increases, the benefits from these ideas increase.

And yes, you should only do this if you trust your storage system.  I recommend a backup plan for handling any major infrastructure failures.  Having a copy of the NFS data available for installation on the local servers is one possible method.