Wednesday, November 13, 2013

Eclipse Memory Analyzer experience

Skipping to the end of the story - the Eclipse Memory Analyzer is awesome!

We have a high-profile and very politically entrenched application which is plagued by many issues outside the control of the IT department.  Recently a consultant with a long history with the application produced changes to support a mobile application.  We were told that the changes were minimal - basically bolt-on pieces only.  That did not turn out to be the case; the consultant's idea of a small bolt-on involved replacing some core Struts configuration affecting the original core application as well as the new code.  The timing could not have been much worse since we ended up needing to upgrade to the Struts 2.5.15.3 library due to security issues at the same time.  The mobile bolt-on had not been well tested, and neither was the combined application code.  This was determined a bit later when IT went back and did some testing of the mobile-related branch code.

Anyway, IT was forced to fix a number of issues before a Struts upgrade could occur, while also merging the mobile release and producing a build for deployment.  Due to a lack of functional user resources assigned to the application, testing is typically inadequate.  The build was forced into production and then the problems began.  The symptoms were high heap utilization and increasingly high CPU utilization, followed by instability and finally out-of-memory (OOM) errors being logged, after which the application was basically brain dead.

After a couple days of analysis, it was still unclear what the root cause was.  There was never a clear repeating pattern.  We ended up implementing multiple daily application restarts to help clear the memory, but that was only mildly effective.  Some items seemed odd while analyzing the application with jvisualvm, but the problem was "lost in the trees".  Some initial attempts at analyzing the heap dump produced when the OOM error occurred failed because of issues related to the size of the heap dump and a lack of functionality to help identify the root cause.

I had installed the Eclipse Memory Analyzer a while back with the intent of trying it out but never had the time to do so.  At this point, it seemed like a good time to try.  The first attempt was not successful, though, since I tried it on my "old" laptop which was running Windows 7 32-bit.  Fortunately, I had received another laptop a week or so prior with Win 7 x64 which I was in the midst of setting up.  After increasing the memory available to Eclipse to around 3200 MB, I started the import of the heap dump from our prod system; after around 45 minutes it was done processing and offered to show leak suspects.  I gladly selected yes and found that it had one suspect which was taking up more than 300 MB of memory.  It provided a stack trace which led me to some particular functionality.  The interesting part was that data in the stack trace indicated the functionality had been accessed by a non-logged-in user.  After some communication, the consultant, functional staff and another developer on the application all indicated that the functionality should NOT be accessible to a non-logged-in user - further investigation followed.  This led to identifying a DB call that was returning over 400k rows of data instead of under 100 rows, and retaining them.  It turned out that the consultant had an "OR" condition which explicitly returned all data to a non-logged-in user (even though the functionality shouldn't be available when not logged in).  After removing the extraneous OR and having the functional team test for any negative effects, the changes were put into production.
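For illustration only - I can't share the real query - but the defect had the same shape as this hedged reconstruction, where every table, column and parameter name is made up and a null user id flips the WHERE clause wide open:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;

    public class LeakyQueryExample {
        // Hypothetical reconstruction - not the actual application code.
        // When userId is null (a non-logged-in user), the OR branch matches every row,
        // which is how a query meant to return under 100 rows came back with 400k+
        // and pinned them in the heap.
        public List<String> loadItems(Connection conn, String userId) throws SQLException {
            String sql = "SELECT item_name FROM work_items WHERE owner_id = ? OR ? IS NULL";
            List<String> results = new ArrayList<>();
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, userId);
                ps.setString(2, userId);   // null here defeats the owner filter entirely
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        results.add(rs.getString(1));
                    }
                }
            }
            return results;
        }
    }

In our case, dropping the extraneous OR condition (and keeping the functionality behind a login) was the fix.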

The result of the fix: the application is stable, uses less than 50% of the memory it averaged while unstable, and CPU utilization now averages mostly in the single digits.

So roughly one hour of work with the Memory Analyzer helped solve a problem which was not anywhere near being solved after multiple days of effort.

Hats off to the people responsible for the Memory Analyzer - you have my gratitude!

Now if we can just figure out what the consultant did that is causing the non-logged-in behavior.

Monday, October 7, 2013

Jenkins jobs, Subversion access via certificate, and batch-related stuff

Jenkins is a very nice solution to a number of needs outside of automated builds.  Use as a general batch solution is feasible at smaller scales.  This has further improved recently with the credentials plugin.  It makes it pretty easy to set up jobs which use certificates to access Subversion instead of requiring a specific person's credentials, which require much more regular changes.  Of course, this works best when an organization has the infrastructure to securely manage certificates.

[2014/05/20] Works!  It seems easiest to set up two separate Apache virtual servers though; one with Basic auth and the other with certificate auth.

Related to general batch processing, the Elastic Axis plugin is a great solution.  It is a perfect fit when you have load-balanced servers implementing an application and you need to run a job on any one - but only one - of them.  You can take an application node offline and Jenkins/Elastic Axis will happily pick an available node out of the remaining configured nodes.

My wish list for Jenkins includes a few enhancements which would improve some things to a large extent - this mainly revolves around new/improved support for high availability, multiple masters with fail-over and a clear/clean clustering solution.  A database backed job store might be a big plus depending on how the previously mentioned features were implemented.  I would love to be able to easily integrate Jenkins into our DR environment but as it stands it is a more manual integration.

Java security manager policy mechanism and non-trivial applications

We have a "pest" bothering us and to deal with the issue we decided to implement multiple security mechanisms.  The mechanism I am currently working on revolves around the age old Java security manager policy.  This really should be done at initial design/implementation if it is done at all.  A multitude of open source libraries isn't making this easy.  Not complaining mind you - just wishing that security was part of the initial requirements for all open source libraries.  Unfortunately, there are some pretty popular libraries which are somewhat troublesome and securing them is a bit disconcerting.  I can't comment on what libraries I am referencing out of security concerns and respect for my employer.  I will say that this is making my recommendation for complete removal of the libraries much easier to justify.

The general process to getting the security manager working with an existing application is:
* start application server with security manager enabled
* access application while monitoring logs
* when a failure occurs, update the policy file based on data in the log, clear the logs and start the process over again

The app server logging is pretty good; it almost always contains 'denied' in the message, and many times the remainder of the content can be cut/pasted nearly as-is into the policy file.  There are times when debugging isn't easy, such as when there are problems with components which start up early and/or eat exceptions instead of logging them.
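As a hedged example (the path, codeBase and permission below are invented, not from our actual policy), a logged denial maps almost one-for-one onto a grant entry in the policy file:

    // Typical log content:
    //   access denied ("java.io.FilePermission" "/opt/app/conf/app.properties" "read")
    // becomes a policy file grant along these lines (codeBase shown Tomcat-style):
    grant codeBase "file:${catalina.home}/webapps/myapp/-" {
        permission java.io.FilePermission "/opt/app/conf/app.properties", "read";
    };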

I would not bother trying to debug the security manager policies by setting
   -Djava.security.debug=all
that just generates way too much useless data.  I think that
   -Djava.security.debug=policy,access
is a better setting which gives a good amount of data to help in resolving failures and understanding what the application is accessing.




Tuesday, September 3, 2013

Welding - Stainless Steel (gasless) fluxcore wire .035 - muffler project

I decided to weld on my new muffler.  It was not very easy finding information on stainless steel (gasless) fluxcore MIG wire.  I finally found a source with some reasonable options for this small project.

http://www.use-enco.com/1/1/44828-308lfco-035-01-welders-choice-stainless-steel-gasless-mig-welding-wire.html

These folks did a nice job; delivery was prompt and hassle free.  The selection and cost seemed reasonable for such a small quantity of wire.

I am pretty new to welding and am self-taught so this was an interesting project overall.  My impression is that the wire could work pretty well with the right setup.  I didn't really have a good way to dial in the best settings and I think my stab at the setup was a bit on the low side.  I ran settings which were about the same as for plain fluxcore wire of the same size.  I was trying to be somewhat careful because I had to run an extension cord to the welder and I was only running off 110v since I have yet to install a 220v outlet. 

For prep, I did run a wire wheel and grinder over the area on the old exhaust where I would be joining the pieces.  I cleaned the new exhaust and some of the connector pieces with some acetone to get rid of the oil/grease/etc residue.  Left plenty of time for the vapors to clear. 

For the actual welding, I ended up laying down a very thick bead and it may have been a bit on the cold side (as a followup, this was true on another stainless project I tried - I think installing the 220v outlet would be wise if doing much of this).  It was hard to tell how much was wire/setup versus lack of skill since I did most of this laying under the truck which probably isn't the best thing to do as a new welder.  I have a lot more respect for folks who do this kind of work - regardless of their skill level.  It certainly was taxing physically and mentally in the heat with the safety gear.

Note: I did put a welding blanket over the gas tank though - better safe than sorry.  I also covered myself with it at times as well.  The blanket was in addition to my welding jacket and leather smock.  Be careful with the welding helmet on; peripheral vision is minimal and I head butted the rotor a good number of times.

Someday I hope to have a garage with a small vehicle lift; projects like this would be more enjoyable that way.  It was still a good learning experience though.

Hope this inspires someone to try something new.  Learn from the experience but be content even if the result isn't perfect.

Sunday, September 1, 2013

Yamaha TTR 125 and Rekluse automatic clutch

Decided to get an automatic clutch for the dirt bike that the wife and I share.  This was mainly to help my wife since she is having some hand strength issues which make use of the manual clutch difficult.

I had debated doing this for a few years but had not wanted to spend the money.  The clutch is not cheap by any means and when you compare it to the cost of a TTR 125 it is hard to justify without some really good reasons.

After I finally decided to buy it, I found out that prices had increased quite a bit.  After some internet searching, I ran across a site that must have had some older stock closer to the price I had remembered.

I received the kit fairly quickly and a quick view of the kit left me the impression that it is designed and machined quite well.  The instructions are somewhat minimal but were clear enough most of the time to get by.  A few extra details would be nice.

I finally started the install.  The first challenge was getting the rear brake bracket off.  The bolt uses a hex key and it was on unbelievably tight.  A few applications of WD-40 and some other similar products combined with persistence finally paid off.  I did end up using a shorter hex key since the one I started with felt like it was ready to snap.

The next challenge was removing the boss nut.  With the bike in gear, I was unable to find a good way to prevent rear wheel movement (without damaging spokes).  I ended up with rope around the circumference of the wheel and tied to some wood placed against the swingarm.  This helped but I ended up using an impact wrench to get the nut off (in under a second).  Later, I reversed the setup on the wheel to install the nut with a torque wrench - this was much more challenging.

The kit has you do a few things like grind down 4 bolt posts.  Instead of using a file for the entire thing I used a dremel and cutoff bit to speed the process along.  A file was needed to complete it though.  The kit provided a piece to use as a guide for the process.  This made it very easy to do without much fear of error.

The assembly of the main kit pieces was simple - just be careful with all the steel/carbide balls - would not be fun to chase around the garage or extract from the engine case.  The instructions have you assemble the main kit with the balls somewhere NOT near the motorcycle which seems like great advice.

Got it all together and was able to go around the yard with no issues.  Even had the wife go around the yard a few times.  All was fine until I stopped and pulled in the manual clutch.  It was like it wasn't connected at first.  After that, when the bike was in gear it acted like the clutch was fully disengaged - no attempt at movement.  I started disassembly to see what went wrong and everything looked good until I got to the throw-out.  The nut that the instructions had me move to the bottom of the thrust washer had loosened significantly.  I ended up using a wrench to tighten the top/bottom nuts against the washer.

TODO - will add some more comments on reassembly and testing/trying it out once I am done with it.

First fully successful test ride complete!  Works now!  It was definitely worth the price to see my wife tooling around the yard with a bigger than normal grin (under the helmet).  It made a huge difference for her since she didn't have to worry about stalling or starting off with a wheelie by mistake.

My only concern after my final round of assembly is that the shifter seems stiffer than I think it should.  This may mean I just need to adjust the clutch at the handlebars a bit.  Hoping that is the case and that I don't have to tear it apart again.  I saw a blog entry somewhere where the person indicated that they had better results with 4 threads (instead of 5) showing on the throw out. 

[2019/01/27] After appearing to be off the market for a while I've noticed that the auto-clutch is available for the Yamaha TTR-125 again - found at Revzilla web site.


Friday, June 28, 2013

JEE 7 JSR 352 batch support

Ran across a few articles on JSR 352 (JEE 7 Batch).  Interesting, but is there enough benefit in this to convert from any existing solutions?

What I really wish for is a wider variety of low-cost scheduling solutions.  Something more than a cron- or Quartz-like solution.  My core requirements would be something like:
  •  100% Java
  • Clustering/Load balancing support 
  • fail-over 
  • Job chains
  • agentless support
  • remote job execution
  • DB backed job status, job defs, job history, etc
  • Nice UI for monitoring and job definition maint.
  • Execution of SQL jobs
  • Some "easy" way to support disaster recovery needs

I was intrigued by the SOS Berlin JobScheduler (http://www.sos-berlin.com/modules/cjaycontent/).  The UI had a few usability issues and it isn't 100% Java; seems like it was only supported in a 32bit environment which would be a headache.  For a free/open source product though - it has some very nice enterprise level/type features.

In the meantime, just sticking with Jenkins CI as a poor man's scheduler and still thinking about writing some plugins to fill some gaps.

Friday, June 7, 2013

DIY Wood flooring tips, tools and other stuff

We decided to replace our carpeting with 3/4" prefinished tongue/groove wood flooring.  With the cost of the wood alone being so expensive I decided to go ahead and lay it myself.  Ended up laying 57 of 61 boxes.  I will note that I am only moderately handy and have never done this before.  My job and skills are much more on the computer/technical side than carpentry side.

Summary:
  1. Prefinished versus unfinished flooring
    1. Prefinished flooring takes less time since you don't have the finishing steps
    2. Prefinished can look very nice
    3. Prefinished with beveled edges helps with hiding minor imperfections in the floor.
    4. Prefinished is more susceptible to nicks on the edges due to unskilled nailer [i.e. me] and tiredness causing the nailer to twist slightly.  Maybe a better quality nail gun would help or more skill than I had.
    5.  Prefinished was less messy and allowed simply moving furniture rather than removal or very careful covering.
    6. Unfinished would allow for fixing minor nicks, etc during the sanding stage.
  2. Finishing around brick fireplace
    1. Seems like most people recommend either framing around a fireplace or laying the wood against it and filling gaps with an appropriate caulk.  Personally, I don't like the look of the examples I saw on the internet, but these are the easiest/cleanest/fastest methods of handling a fireplace.
    2. Undercut the brick so the wood slides slightly under the brick.  This turned out to be terribly messy but the end result was fairly nice.  I recommend taping plastic sheeting from ceiling to floor all around the fireplace and wearing a respirator.  If possible, you may want to hook up a heavy duty ventilation fan (the type that comes with a large diameter hose) and vent it out a nearby window.  Don't bother trying a regular fan and flexible ducting - waste of time (er, not that I tried it, but my wife may still be laughing a bit).  The jamb saw and masonry blades worked incredibly well for working with the brick.  I think I went through 3-4 blades for our fireplace.  Be VERY careful on the corners: make sure you cut plenty deep, because when you go to chip out the brick below your cut, the corners tend to break and take material you didn't intend.  Epoxy, brick chips and maybe a bit of brick dust help fix the problem.
  3. Things to be aware of
    1. Make sure you pound down any raised nail heads in the underlayment. They will cause much grief otherwise (noticeable rise compared to surrounding wood, snag during nailing causing gaps or a need to rework).
    2. Try to mix multiple boxes together to prevent a potential patchy look if a box happens to not be overly varied.  Our wood seemed fairly varied in each box but it could be different by flooring source or flooring grade.
    3. I took down all the preexisting floor molding with the intent of installing the flooring closer to the wall and then simply putting the floor molding back in place to cover the required gap instead of using shoe molding.  This would have worked EXCEPT I had not considered how much the house had settled over the years which resulted in places where there was nearly a 1/2"+ gap between the bottom of the molding and the wood floor.  I ended up getting unfinished oak shoe molding and staining it myself (with help from daughter - nice bonding time).
    4. Be careful not to OVER-compress the wood during installation.  I have one piece which split and now I am learning how to cut out and replace a piece.  Not easy, and I'm not sure the result will be as nice.  It is somewhat hard to predict which pieces may have weak spots.  I will note that the more interesting/swirled grain patterns in some of the oak pieces seem to have shrunk a bit more during factory drying.  Those end up with a somewhat lower height compared to neighboring pieces and sometimes are 1/32-1/16" thinner as well, which you may not realize until you go to put the next row in.
    5. Particle board is not recommended as the underlayment for the flooring.  The nails don't hold in place as well.  We had mostly carpeting before this and particle board is popular and probably reasonable for that but I wonder a bit about long term durability.  I ended up using extra nails to help.  
    6. You may very well find really uneven underlayment once you rip out carpeting - especially near something like a bay window.  Fix it or it will show.  This is another place where putting down thin plywood (along with fillers or whatever) may have helped me fix some of the uneven spots.  Hindsight is 20/20.
    7. Cut the door jambs earlier than later.  You don't want to be cutting near wood that is already laid - it is really easy to slip and nick it.
    8. I normally did not face nail pieces near the wall.  I would move from the floor nailer to a finish nailer and then a brad nailer as space got tight.  This did result in a few problems (like breaking the spline) where I tried too hard to use a nailer instead of just glue.  I used titebond 3 wood glue to glue the last 3 rows or so together or other places where there wasn't a good way to nail it.  Using these methods meant that I had to wedge spacers between the wall and last piece of wood to make sure no gap occurred. 
    9. I did put down the flooring paper provided by my flooring supplier.  Not sure of the quality or whether it provides our specific installation any particular benefit.
    10. You can try manually putting in finish nails in the last row or 2 (into spline just like floor nailer) but I recommend pre-drilling the holes and be prepared for a difficult time.  I messed up numerous splines and spent lots of time trying to get the nails set far enough to prevent snagging things.
    11. Check your furniture for feet that will dent the flooring.  If you have step stools; put felt on any part that has ridges that rest on the floor! Took a day to figure out where all the dents in the kitchen suddenly came from - a bit depressing.
    12. Ah, should be obvious but just to have it in writing: only nail into the spline of the wood, not the groove.
  4. Floor Layout
    1. There is a main beam which runs the length of our house down the center.  The floor joists run from each end of the house to that beam.  I laid out the flooring so it would cross the floor joists.  I felt this had the best chance of reducing flex.
    2. I started the flooring in the hallway which runs most of the length of the center beam of the house.  I started near one wall in the hallway and tried to work most of the length of the hall, ending at the dining/living room.  I used a laser level to try and keep the row as straight as possible.  This was VERY important; any waviness in the row results in hard-to-nail rows later (gaps and lots of floor jacking required).  I did try to add some back support to the initial row and tried to keep the next couple rows dry-fitted to help keep things straight.  I could have done a better job on that first row - it would have made the rest of the house easier.
    3. I did the flooring in a continuous pattern.  i.e. it is laid out lengthwise in the entire house and there are no breaks or direction changes throughout.  This is where the wood spline came in handy - as in moving into a room from the hallway and then having to reverse direction (move back toward the hallway) to do a closet.  I usually added some titebond 3 to the spline prior to nailing.
  5. Shoe Molding
    1. Compound miter saw was great for this.  I am thinking that a manual saw of good quality with a good quality miter box would have been more efficient (I would have been more likely to have it in the house instead of running back/forth to garage).
    2. I stained the cut ends as I was installing this.  Try to make sure the dry fit is fine before staining because you can get some goo build up on the miter saw blade otherwise.
    3. For inner corners I used the miter saw to start and then the dremel drum sander to cope one side.  Google for "coping molding" or "coping joints" to find some good examples.  Take your time, it will show.

Items I used:


Harbor Freight 3-1 pneumatic nailer/stapler with 2" cleats

Jamb saw and some blades for masonry (for undercutting fireplace), included wood blade fine for door jambs

Hardwood spline; note that some flooring places have no idea what this is.

Flooring jack from Harbor Freight was indispensable.


Hearing protection for everyone around while you are working is VERY important.

A variety of pry bars, heavy duty putty knives, flat screwdrivers, etc.

Oscillating tool with a variety of blades (wood/bi-metal) and size
Flooring spacers/kits; wedge between wall and final glued row to remove gaps between last row and previous row while glue sets.  Also can use at ends of rows to keep a consistent end gap.

Stuff not pictured include:
  •  Air compressor  
    • There is a trade-off between size, portability, convenience and usability.  In my case, running a couple of air hoses from the garage into the house through the fireplace ash hatch worked but was an eyesore for a very long time.  With only a single nailer going, a small 125psi compressor would likely be fine.  Don't forget some pneumatic oil for the tools unless you like clearing nail jams or fixing the results of nail feeding issues.
  • Pneumatic finish nailer and ~ 2" nails
  • Pneumatic brad nailer with a variety of brad sizes ~1-2"
  • 10" Sliding compound miter saw
    • This was the main saw I used since I laid out the wood in a way which mainly required cutting off length.
    • Use a good quality blade to reduce chipping/splintering
  • 10" table saw 
    • mainly for reducing width for final row near walls. Sometimes bottom beveled some pieces to ease getting them into place against the wall.
    • Use a good quality blade to reduce chipping/splintering
  • Jig saw with a good quality wood blade.  I tended to use this for odd intersections like at vents or at the corner of the fireplace.  Make sure you are able to clamp the wood down well.
  • Hand saw, hack saw and some spare blades (hack saw blades were useful in trying to remove a botched nail job by cutting off nails below boards after slightly prying them up).
  • Wood chisels for a few odd situations
  • Dremel with sanding attachment, small carbide wood cutting disk, etc. I tended to use this when I made hard to fix mistakes like nailing down a board which I had removed the end mortise from but needed one still.  Just use dremel to cut a new mortise.  It pays to keep cut pieces in nice separate piles (i.e. cut on left or cut on right).
  • TiteBond 3 wood glue
  • Brick chisel
  • hammers
  • some 1 1/4" finish nails (non-pneumatic nailer)
  • pin punches/nail sets of various sizes (manually finish setting nails which didn't sink deep enough or put in some manual finish nails)
  • nippers (for cutting off nails in trim or flooring boards instead of trying to remove them)
  • variety of pliers
  • tape measures
  • straight edge
  • drill and various small bits
  • work gloves and eye protection as needed
  • Band aids and tweezers - there will be splinters
Final result around fireplace.  Undercut brick with jamb saw w/masonry blade. (Very carefully) chip out brick with brick chisel until wood can slide partway under brick.

Apache Archiva 1.4 m4 versus Artifactory 3.0.1 - access from Eclipse 4.2.1 w/Maven integration

The pain of managing libraries/dependencies differently between the build server and individual workstations combined with differences by application has been a large burden.  I finally got around to starting to do something about it.

I started with the Apache Archiva 1.4m3 & m4 versions.  I picked these versions since they are architecturally closer to what I am aiming to support in our environment versus the 1.3.x versions.  The main issue I ran into was with getting the software working with our active directory.  It may be an issue with my configuration but there was a lack of documentation to help me past the problem (NPE while getting information regarding the user).  The default logging and error message was not helpful.  I threw in the towel after about 4 hours on the AD issue.  My general impression of Archiva is that it has plenty of potential but is not quite polished enough for our environment.

My next attempt was with Artifactory 3.0.1.  I was able to get this up and running in about 30-45 minutes.  It worked with our AD after comparing configuration with a few samples on the internet and the documentation.  I was able to very quickly import an existing Maven repo and get a virtual repo setup to front our internal resources and all external repos.

Currently we  use Ant with some Maven tasks to manage dependencies in our Jenkins builds while the source tree includes unmanaged baseline libs used during workstation builds.  My overall goal is to use Maven to manage dependencies for both Jenkins and workstation builds.  I may not commit to Maven driving the builds but managing dependencies is a big help.

Next I wanted to test whether I could successfully get Eclipse/Maven/Artifactory to work nicely together.  I had tried to get Eclipse/Maven working together in the past but had poor success (Subversion was involved as well).  This time I started simpler by creating an Eclipse dynamic web project which was not under source control.  I converted it to a Maven project.  I also installed Maven 3.0.5 on my laptop and configured the Eclipse/Maven integration to use it.  I modified the settings.xml file to only use our remote Artifactory instance.  After a number of attempts with mixed results, I was finally able to determine that dependencies I added were resolved from Artifactory.  My only complaint with the Maven integration is that I could never get the Artifactory repo instance to expand and list its contents like it would for the Maven Central repo.  For several hours this led me to believe it was not working, but when I cleared my local repo and configured Eclipse/Maven to cache/index things at startup, it did pull in the dependencies I had added.  I lost a few hours of life I won't reclaim because of that issue.  If we were not using a secured Artifactory instance, I have a feeling it would have been much more straightforward.  There was not a lot of clear documentation, or user blogs with pertinent information, regarding a password-protected installation combined with Eclipse/Maven.  In the end, it appears to work for the shell of a project.
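For anyone attempting the same setup, the settings.xml change amounted to a mirror entry routing everything through the Artifactory virtual repo.  This is a rough sketch with placeholder host and credentials, not our actual file:

    <settings xmlns="http://maven.apache.org/SETTINGS/1.0.0">
      <mirrors>
        <mirror>
          <id>artifactory</id>
          <!-- route all repos, including central, through the virtual repo -->
          <mirrorOf>*</mirrorOf>
          <url>http://artifactory.example.local/artifactory/repo</url>
        </mirror>
      </mirrors>
      <servers>
        <server>
          <!-- id must match the mirror id; consider Maven's password encryption -->
          <id>artifactory</id>
          <username>builduser</username>
          <password>changeit</password>
        </server>
      </servers>
    </settings>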


So far, I really like Artifactory.  It is very easy to set up, pretty easy to use and has reasonable features in the open source release.  It may bug me later that a number of features are sort of visible but unusable unless you pay.  I'm not much for having carrots dangled all over the place.


Next I will be trying to get it working with an existing project in Subversion.  Will update this with those results once I get around to trying it.


Thursday, May 23, 2013

HTML 5 and audio tag - browser support

Having to do some unplanned development which involves audio in support of accessibility for the vision-impaired.  I was pleasantly surprised when some initial prototyping (aka figure it out and get it into production yesterday) ended up working correctly early on when testing with Firefox.  When I moved on to Chrome the excitement started to fade when Chrome refused to allow replaying an audio clip.  And then the mood went further south when trying IE and finding that the HTML 5 audio tag (at least in IE 9) doesn't work with WAV files.  And of course, other documentation indicated that Firefox didn't support MP3 with the audio tag - which doesn't matter much since the only format available to me is dynamically generated WAV data from a third-party servlet.


Fortunately, the 3rd party stuff is open source.  It hasn't been updated since 2009, but the good news is that the underlying framework, Java Sound, now supports MP3, which it apparently didn't in 2009.

I think I am heading down the road of reworking the 3rd party code to support MP3 at this point.  I had looked into a multitude of other possibilities and none were panning out.
  • audio.js
    • The fallback to Flash isn't working and appears to be related to some specific data aspects of the WAV data we are using.
  • Found some thread talking about code which needed native libs which I don't really want to deal with.  Been there and done that and in our environment it would be a disaster waiting to happen.
  • Found a thread talking about NestedVM which allowed conversion of native code to Java bytecode.  My quick review of this found some pretty negative items.  
  • Other weird or impractical ideas which I may include/update here at some point but it is getting late.

Anyway, it looks like the Java Sound API has improved and includes much more functionality now; it appears to be the best option, which I hope won't turn into a black hole for my time.
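As a rough sketch of the direction I'm leaning (this is not the 3rd party code itself, and it assumes an MP3 file-writer SPI - e.g. a LAME/Tritonus-based provider - is on the classpath, since stock Java Sound ships no MP3 encoder):

    import java.io.ByteArrayInputStream;
    import java.io.OutputStream;
    import javax.sound.sampled.AudioFileFormat;
    import javax.sound.sampled.AudioInputStream;
    import javax.sound.sampled.AudioSystem;

    public class WavToMp3Sketch {
        // "MP3" only works if a third-party MP3 file writer SPI is installed;
        // without one, AudioSystem.write() rejects the type.  Some providers also
        // require converting the stream to an MPEG AudioFormat first.
        private static final AudioFileFormat.Type MP3 = new AudioFileFormat.Type("MP3", "mp3");

        public static void transcode(byte[] wavBytes, OutputStream out) throws Exception {
            try (AudioInputStream wavIn =
                     AudioSystem.getAudioInputStream(new ByteArrayInputStream(wavBytes))) {
                AudioSystem.write(wavIn, MP3, out);   // stream the result back to the browser
            }
        }
    }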

Wednesday, May 22, 2013

Struts 2 issue

A dire emergency drove me to work on some substantial changes in session handling and the implementation of some security mechanisms.  While working on this, it became apparent that Struts was not calling the execute method on some actions.  A good amount of logging had been added and not a single line was getting output.

For a brief moment, I thought I would have to upgrade immediately - it must have been a Struts 2 bug.  That may still be the case, but it was worked around.  Unfortunately, I did not have time to fully debug into Struts to determine the true root cause.  The items that, in some combination, got things working again included:
  1. Implement the Action interface - we had implementations of the execute method with the correct signature but not as part of the Action interface (see the sketch after this list).
  2. Replace use of the old/deprecated filter with the use of the newer filter org.apache.struts2.dispatcher.ng.filter.StrutsPrepareAndExecuteFilter
  3. Wrap some code in try/catch and do some defensive coding to prevent null reference access
  4. Use some API calls in a more consistent fashion so that sessions are created under more controlled circumstances.
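A minimal sketch of item 1 (class name and logic are hypothetical) - the key point being that the class declares implements Action (or extends ActionSupport) rather than merely having a method with the right signature:

    import com.opensymphony.xwork2.Action;

    public class AccountSummaryAction implements Action {
        @Override
        public String execute() throws Exception {
            // business logic as before; previously this method existed with the
            // correct signature but the class never declared "implements Action"
            return SUCCESS;
        }
    }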
I would love to go back and debug into the root issue here but there is never time - part of the organizational culture.  As much as I am annoyed by the person/people responsible for instigating our issue, it has put security back into the light again (for now).  Security is not a static item and the cost of treating it otherwise is high in many ways ($, time, respect, etc).


Thursday, May 9, 2013

Beginner welding experience - Thermal Arc 211i

I have wanted to learn to weld for some time - I had never handled a welder before this.  I finally broke down and purchased a Thermal Arc 211i, which is a 3-process (DC) welder - TIG, Stick and MIG (gas and flux-core).  I researched welders for over a year and kept finding positive references to Thermal Arc machines.  I was originally going to get the 181i but decided that its lack of 110v support was enough of a differentiator to drive me to the 211i, which can work with 110 or 208/220v.

Not to spoil the rest of the post but I REALLY like the welder.

I purchased the machine from CyberWeld - they had a good price - not the absolute lowest, but close - and I found no indications of problems with their support, which was not true of a place with a slightly lower price at the time.  I received the unit pretty quickly and without any issues.  I put in the paperwork to get a free auto-darkening helmet (while supplies last..) and received it in around a week, which was pretty nice.  I had already picked up some cheap tools/helmet/smock/gloves from Harbor Freight as well.  This is working out well - I am going to try and teach my kids a little as I learn.

Best features of the Thermal Arc for me:
  • 110v and 220v support. 
  • Wonderful value - amount of functionality versus price is great.
  • Can get MIG spoolgun, TIG torch and foot control when ready
  • Not outrageously heavy and has decent handholds on front & back
  • Documentation is pretty good
  • components feel pretty solid and of good quality
  • setup chart seems close in my limited use
 Lows for me:
  • The supplied SMAW rods were marked for type (6013) but not diameter.  My lack of actually checking the size wasted several hours of my time, caused some heartburn and almost resulted in an unnecessary support call.  I was trying to verify things were working by welding a small piece of scrap but was using too low of an amperage for the rod and couldn't get a spark.
  • Maybe a minor low that the MIG gun trigger is pretty sensitive.  I may have to just find some less bulky gloves.
My first actual project was to create a second lower wood loft for my spare wood, piping, etc in the garage.  I was making this out of angle iron, 1/8" flat stock and 1" square tubing using .035" flux core MIG wire.  Setting up the 4" spool was somewhat challenging to me - don't try it when you are tired already.  There is a fairly strong spring you must work against to attach the spool - a bit of an Abbott and Costello type comedy action going on for a little while.  I'll spare everyone the details for now.  I do wonder whether it would have been easier getting the large 12" spool on - maybe try someday.  As a side note, I picked up some Bosch  metal cutting bits for my jig saw and that worked really well for this project.  My welds so far are quite ugly - lack of skill/experience and the flux core combined make that unavoidable. 

I think getting the spool tension and such just right is a bit of an art form.  One of my first weld attempts resulted in the wire punching into a piece of flat stock hard enough to pop it loose from the welding magnets I had holding it in place.  This was after I got the helmet settings to the point that all light wasn't blotted out completely when the arc was going.  I'm starting to get more comfortable with the helmet and feel I can get down to what some of the books/manuals state as a recommended shade value for the helmet.  I was able to see the weld pool to some extent on my last welds but couldn't see far enough ahead to tell when I was about to run off of the object.  Nothing like going along fine and then having 4+ inches of wire hanging out in space.  At which point you bump the target object and make a mess.. not that I know anything about that.

This loft was a success and I have some new steel studs I am going to use for my overhead loft which is currently attached to the garage ceiling at the outside corner (no pole to floor).  I am going to switch from ceiling attachment to a pole (steel stud)  - I am getting a bit concerned about the weight I have up there possibly causing issues with the ceiling/wall.

Planning on looking for some steel later this week to try and build a welding cart.  Seems like a good way to practice.

All in all, the Thermal Arc 211i is a really nice machine and much more capable than I am for now.  Looking forward to practicing and learning TIG as well.  I may try the SMAW process a bit more too, but my first successful attempt at that was not nearly as gratifying as using the MIG gun.

[Edit 2014/02/25] Welding cart/table 3/4 "complete".. a bit of a joke since just like computer software, there is always "one more thing" you can add or change.

Summary of cart/table:
  • 18" x 30" main cart top with ~ 18" x 36" base
  • height adjustable folding side extension 18" x 36" - folding extending legs with pins to lock extension
  • 1 drawer [side]
  • 3 extendable/removable hanger poles - general hanging needs but also thinking about using with welding blanket for special needs
construction
  • 1/4" plate for tops
  • 1" tubing & 1/4" thick angle iron for the basic frame.
  • spare pieces of "super strut", angle iron, a smidgen of flat bar for the folding/height adjustment aspect of the extension table. The super strut pieces form a guide for 1" tubing which slides up/down where the extension and main cart meet.  The tubing is too small for the super strut cavity so a small piece of flat stock was hand fitted and welded to the tube so that it doesn't rotate, etc when changing the height.  I might try to add a diagram showing this at a later date - a bit hard to describe.
  • The drawer body is 20ga steel.  Using some slides from Lowes which are ok..  Some angle iron and 3-4" wide bar stock are welded to the bottom of the cart top.  I attached the slide to the flat stock with rivets.  I also attached the drawer body to the slide with rivets.  I almost hate to describe the drawer body because folks will probably fall down laughing.  Anyways, just think of it as Christmas present wrapping.
  • Cart bottom is 20ga steel
  • The poles are 3/8" stainless steel which slide in short pieces of 1/2" tubing with a piece of nail in a through-hole supporting them when raised. [NOTE: Practical example of knowledge - the stainless is not magnetic so the magnetic pickup was useless - not fun when you know you must cleanup WELL so the kids don't run through it with bare feet!]
This was really an experiment and nothing like a production design.  It won't win any awards (although my kids said maybe an "ugly weld" award might be possible).  I did learn a lot and the result is extremely useful even with a few warts here and there.

Todo:
  • add 2 more drawers [on front] - still experimenting as I create these without any reliable tools for bending.
  • add 1 more extendable/removable hanger pole
  • Modify rear base area to support Argon/CO2, etc tanks [flat plate as bottom and add straps]
  • Replace current rear casters with something larger and easier to roll over transitions between the driveway/garage/sidewalk.  This will be even more important once I have shielding gas tanks.
  • Considering making the extension table a little more flexible in height adjustment by using cam levers [search 'cam lever' on Grainger] on the legs.  Trying to determine if they can handle a reasonable amount of load from hammering and such on the table top.





Here is the "final" cart; I don't think I am going to add argon/co2 storage to this since it has gotten extremely heavy.  The only other "someday" change is better height adjustability of the side folding/extension table.

I'll have to start dreaming up plans for handling argon/co2 tanks sometime.  I would love to try some TIG work  but I think I exceeded my hobby budget for "a while".  I only have DC TIG capability so while it is possible to work with aluminum per a bit of research; it looks like the results won't be as clean and it may take more effort to get even "reasonable" results.

Wednesday, May 8, 2013

Linux/Unix application configuration methodology

We are experiencing substantial application sprawl at my current employer while staffing either stays flat or decreases.  Normal 9-5 development/operations/support combined with more after-hours time for production operations is leading to increased human error and general difficulty in managing the movement of applications & settings between dev/test/prod environments.  A contributing factor is the fact that each application has a slightly different method of handling configuration data.

This lack of a standard template and methodology for handling configuration was not sustainable.  It led me to identify a number of core issues and work toward a process that mitigates them.  Some common problems include mismatches of configuration data between servers in a specific environment, changes not getting propagated between environments, and incorrect configuration propagating between environments.

The mitigation methods include:
  • Use of a source control system (Subversion) to contain all application configuration/settings data.
  • Configuration files named based on server name/environment - typically for properties files
  • Spring XML configuration files keyed by environment
  •  Environment setup broken out into up to 3 distinct files - one for each of application, server and environment.  These files are named based on the type of data.  The general idea here is that application wide settings reside in a shared application wide (all server, all environments) file.  Any environment specific profile setup is in a file named based on the environment type.  Each server has a file based on the server name and is primarily responsible for setting the environment type.
  • A shared profile processing driver that each server uses to "source" the appropriate configuration files.  There are some utility functions shared between all systems.  The application-wide environment setup profile plays the role of the driver and uses the utility functions to process the server-specific environment profile first, followed by the environment-specific profile, and finally completes its processing of the application-wide environment settings.
  • Utility functions include an item which displays the server name, application name and environment type.  This is called at login and is available to operators to quickly verify they are operating on the system they expect to be on.

This methodology is combined with the use of NFS in production and sometimes non-production environments.  There is an additional benefit of this combination in the production environment.  There is no direct network server to server communication between the production and disaster recovery (DR) environments but the production NFS storage is replicated to the DR site.  By including all DR environment settings/configuration on the production NFS storage, it automatically gets replicated out to warm DR servers.  Some additional shell scripting, a couple cron jobs and a standardized but special app deployment method result in the ability to have the DR application/web servers automatically deploy the production application versions in an unattended fashion.

I am still evaluating this mechanism in combination with Jenkins jobs and remote nodes for a more automated production deployment. 

Overall, there is a little learning curve for my team but this is already providing some benefits including traceability of change, identification of unapproved changes (run Jenkins jobs which check for correct versions of settings and local server modifications) and easier difference identification/handling via Subversion diff/merge functionality.

One other item of note is that the Subversion "externals" functionality is somewhat key in the sharing of some items which need to be available to all applications but would be awkward to manually synchronize or duplicate. 

Thursday, March 14, 2013

Java code - correctness and consistency

It has taken nearly a month to clean up most of the data access code (DAO classes) mainly produced by a consultant.  I will say that some of the changes are "nice to have" items, but there were plenty of "must fix" items, with a few of those still remaining.  The fact that there is lots of client code, little documentation and a lack of coherent conventions drove me to start looking for ways to get things in better order and keep them that way.

The initial steps in the cleanup of the "must fix" items:
  • Implement the interface java.lang.AutoCloseable in the base class of all the DAO code.
  • Provide a factory-type instance to the DAO instances so that DB connections are acquired only when absolutely needed (which isn't 100% of the time in these "DAO" classes).
  • Replace the previous DAO connection close() logic (which was never in try/finally blocks) with the Java 7 try-with-resources construct everywhere.
So at this point, we have reduced the number of DB connections being tied up needlessly and have plugged all the known potential DB connection leaks.
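Roughly what the base class looks like now - class names here are illustrative, not our real code:

    import java.sql.Connection;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    public abstract class BaseDao implements AutoCloseable {
        private final DataSource dataSource;   // the injected "factory type" instance
        private Connection connection;         // acquired lazily, only when truly needed

        protected BaseDao(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        protected Connection connection() throws SQLException {
            if (connection == null) {
                connection = dataSource.getConnection();
            }
            return connection;
        }

        @Override
        public void close() throws SQLException {
            if (connection != null) {
                connection.close();
            }
        }
    }

    // Call sites then use try-with-resources instead of hand-rolled close() logic,
    // e.g. for a hypothetical subclass:
    //   try (AccountDao dao = new AccountDao(dataSource)) {
    //       return dao.findAccount(id);
    //   }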

Some other changes involved conversion from one cache solution to JBoss Infinispan 5.x and a different method of handling cache keys for the DAO data.  I ended up with a somewhat more complex solution here to help reduce the chance of keys being duplicated and used for different data items.  The Java enum type is great for this, and being able to implement interfaces on enums provides a lot of flexibility.
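A hedged sketch of the cache key idea (the interface and enum below are invented for illustration): each DAO gets its own enum, the constants are unique by definition, and implementing a small interface lets the caching layer treat them all uniformly.

    interface CacheKey {
        String region();                       // which cache/region the entry lives in
        String keyFor(Object discriminator);   // build the full key for one data item
    }

    public enum AccountCacheKey implements CacheKey {
        ACCOUNT_BY_ID,
        ACCOUNTS_BY_BRANCH;

        @Override
        public String region() {
            return "account-dao";
        }

        @Override
        public String keyFor(Object discriminator) {
            // enum name + discriminator keeps keys from colliding across DAOs/methods
            return name() + ":" + discriminator;
        }
    }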

The next problem was how to sort out data flow, mutability and nullness consistency.  I try fairly hard to keep a certain consistency in what I return from DAO methods.  I find that returning null tends to produce more errors through forgetting to check for it in all cases.  This can be mitigated by not returning nulls whenever it makes sense.

I find this most useful when dealing with collections - return Collections.emptyList() (or the similar methods for sets and maps) instead of returning null when there is no data to return.  This works when you return something the client doesn't mutate.  Other use cases might be handled by returning a new mutable empty list (but only if truly needed), requiring the client to make a copy of the object, or possibly having the caller provide the collection.  Wrappers and immutable collections are quite useful to prevent modification of the returned object when there is content in the collection.  Consistency is helpful to fellow developers who may not know the application or frameworks as well as the original developers.
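In code, the convention looks something like this (a hypothetical DAO method, trimmed down):

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    public class OrderDao {
        public List<String> findOpenOrderIds(String customerId) {
            List<String> rows = queryOpenOrderIds(customerId);   // may legitimately be empty
            if (rows.isEmpty()) {
                return Collections.emptyList();                   // never return null
            }
            return Collections.unmodifiableList(rows);            // callers read, don't mutate
        }

        private List<String> queryOpenOrderIds(String customerId) {
            return new ArrayList<>();   // stand-in for the real JDBC lookup
        }
    }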

One deficiency of Java is that it is impossible, or nearly so, to fully define mutability constraints and expectations using just the core language itself.  Most of the time this must be done via Javadoc, with the hope that it is actually correct.  The current project I am trying to sort out lacks usable Javadoc or comments on what the best constraints should be.  I am starting to work with an annotation framework to help document, report and enforce various constraints (mainly nullability and mutability).

      http://types.cs.washington.edu/checker-framework/

I am trying to utilize it via an Eclipse plug-in with some success (but not complete).  I think the nullability checks are working but I am running into problems with the mutability annotations.  I know there are some limitations using Java 7 which should be removed or minimized with Java 8.  It could also simply be related to the Eclipse plug-in or the use of Eclipse 4.2.  The embedding of the annotations in Java comments has both good and bad aspects. I like not having to worry about the build server having the annotation jars, etc but I would like Eclipse to tell me when I make an obvious mistake well before I try to generate and analyze results.

If I am able to get this working correctly for my main cases, it has the potential to provide a much higher static guarantee that object instances are only being used in the way I intended them to be.
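For the curious, this is the shape of what I am attempting, using the comment-embedded style mentioned above so a plain javac ignores the qualifiers.  Exact annotation names and packages vary by Checker Framework version, so treat this as a sketch:

    public class AccountLookup {
        // The Checker Framework's compiler reads the /*@...*/ qualifiers and flags
        // callers that dereference the result without a null check; regular javac
        // just sees comments.
        public /*@Nullable*/ String findDisplayName(/*@NonNull*/ String accountId) {
            return lookup(accountId);
        }

        private /*@Nullable*/ String lookup(String accountId) {
            return null;   // stand-in for the real lookup
        }
    }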

Other useful references are:
JSR 308: Annotations on Java Types
JSR 305: Annotations for Software Defect Detection
Java Modeling Language (JML)

Once I am able to get a more rigorous definition of how the DAO data is used in the current application, we can either further improve the plain JDBC based solution, move to an internal framework which uses Apache DBUtils or possibly move to Hibernate.

I am slightly leaning toward Hibernate because I think it will reduce our custom code, remove/reduce the need for the manual caching of data, possibly improve performance and generally reduce the number of sources of problems.  I only say "leaning" until I gather enough info and prototype & evaluate some partial solutions for comparison.

Wednesday, February 27, 2013

Java / Struts2 I18N and DB backed ResourceBundle

I am a strong believer in deployment processes which don't involve making a patchwork of changes to a previously deployed WAR file in a production environment.  The downside is that things like minor changes to properties files require a full deployment (across a number of servers in this case).  The time consumed by this, combined with the fact that the normal maintenance window is at a time I would rather spend with the family, drove me to look for another solution.  After some searching, it seemed that the most reasonable solution fitting our needs involved moving the data to the database and writing some code to leverage the ResourceBundle framework.  It was hoped that Struts would work seamlessly with this.  That didn't turn out to be quite true.

For the moment, I ended up having to force a pre-load of the ResourceBundles (English and Spanish) before they would be found reliably.  There is some odd behavior when working with our custom ResourceBundle.Control which could be a root cause.  There are still some problems where some application areas do not seem to pick up the Spanish data, and this may be an issue with Struts.  Further debugging is required - hopefully this can be resolved fully even if no optimal solution is found.

I truly wish that DB-backed ResourceBundles were supported directly by Struts 2.  I believe some other newer frameworks support this - maybe it is time to revisit framework decisions.

[2015/11/8] Notes added below..

My current use of this works by pre-loading the bundles from the DB before the first real need.  After that point, the cached bundle is returned when requested by Struts, etc.  I am using the Spring framework to initialize/load the ResourceBundles early in the web application startup - thereby getting the bundles cached.
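A trimmed-down sketch of the approach (the DB lookup here is hypothetical and elided; the real version loads from our message tables): a custom ResourceBundle.Control builds the bundle from a query, and the Spring init code simply calls getBundle() early for each locale so the result lands in the cache.

    import java.util.Collections;
    import java.util.Enumeration;
    import java.util.HashMap;
    import java.util.Locale;
    import java.util.Map;
    import java.util.ResourceBundle;

    public class DbResourceBundleControl extends ResourceBundle.Control {
        @Override
        public ResourceBundle newBundle(String baseName, Locale locale, String format,
                                        ClassLoader loader, boolean reload) {
            // stand-in for "SELECT msg_key, msg_value FROM messages WHERE bundle=? AND locale=?"
            final Map<String, Object> messages = loadFromDatabase(baseName, locale);
            return new ResourceBundle() {
                @Override
                protected Object handleGetObject(String key) {
                    return messages.get(key);
                }

                @Override
                public Enumeration<String> getKeys() {
                    return Collections.enumeration(messages.keySet());
                }
            };
        }

        private Map<String, Object> loadFromDatabase(String baseName, Locale locale) {
            return new HashMap<>();   // hypothetical DB access elided
        }
    }

    // Pre-load at startup (roughly what the Spring-driven init does):
    //   ResourceBundle.getBundle("app.Messages", Locale.ENGLISH, new DbResourceBundleControl());
    //   ResourceBundle.getBundle("app.Messages", new Locale("es"), new DbResourceBundleControl());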

Link(s) I think I found/used regarding DB backed resource bundles originally.
DB backed resource bundle reference 1

Potential Issues:
For ResourceBundle.Control the Java doc for the "needsReload" method says
"The calling ResourceBundle.getBundle factory method calls this method on the ResourceBundle.Control instance used for its current invocation, not on the instance used in the invocation that originally loaded the resource bundle. "  
and the "getTimeToLive" method Java doc says
"All cached resource bundles are subject to removal from the cache due to memory constraints of the runtime environment. "
This seems to leave the possibility that a bundle could be dropped from the cache and would not reload properly, since it wouldn't access the correct ResourceBundle.Control instance, which is only used during application initialization.  I ran into something that acted like this (but without any memory pressure that I am aware of), with the result being resource-not-found exceptions.  I'll be looking into this at some point.

I would really like to propose some Java change that would prevent the above potential issue but some aspects of the JCP membership agreement and my employer make that difficult.  I had also considered that maybe a change at the JSF2 (my current focus) layer could work but have not looked into it any further.

[2016/05/19] New info.
I ran across something useful and likely better than my original design.  It is facilitated by the ResourceBundleControlProvider interface, new as of JDK 8.  See the Oracle docs here.  When time permits or the need arises (which might be soon) I will give this method a shot.

As an alternative, I have considered creating a ListResourceBundle that still uses a database for the data.  The downside of this is the need for a class per bundle, because resolving a bundle is done by looking for a class name reflecting the bundle name and locale.  A superclass could probably be created which does most of the heavy lifting, with subclasses existing mainly to satisfy the bundle-to-class resolution.

The more I think about it, the more I think the ResourceBundleControlProvider is probably the better way.  It will be more complex - without further research I am concerned about getting the database connectivity into the provider early enough to be useful and without having class loader issues.  Much of my DB connectivity uses Spring-based configuration and involves a number of dependencies.  I'd hate to create a new configuration method just to get connections into the provider, but it could be needed.  I'll have to prototype it to verify.