Sunday, September 20, 2015

Laundry room - vinyl floor, water filter and clothes hanger rack done

Finally done with all the vinyl flooring in the house. Yeah!

The laundry room is just a rectangle with only 2 cutouts (dryer vent and 1 AC/heat vent), so laying the actual floor went pretty easily.  We still ended up tracing the room and cutouts onto paper and using that to cut the vinyl outside. This got us close, but we still had to do a little trimming here and there once we got it down.  The contact-cement-like glue we used in the bathrooms worked OK here - better than in the master bath. I'm still working on the threshold - I'm finding that thresholds are always a challenge.

Here is the final floor result:




The main challenge with this room was the plumbing I did first.  I'm a bit tired of plumbing now.  The problem started with the cold water valve to the washing machine - it was "broken" and could not be shut off.  Replacing the flooring required removing the washing machine and dryer, which was difficult with the broken valve.  I decided to replace the existing gate valves with 1/4-turn ball valves instead of trying to repair the seal in the existing valve (which is what I found was wrong).  To replace the valves, I had to cut a hole in the wall to cut off the existing copper tube. I initially decided to use SharkBite-style push-in connectors to connect the new valves (with short copper stubs I soldered on) to the existing copper. I ended up changing that, but I'll explain that in a minute.  Once done, though, I decided to make a small wooden panel to allow future access to the PEX/copper connections instead of just patching the wall back over.

My wife has been wanting a water filter on the cold water supply to the washing machine.  I figured this was a good time to install it since I was already messing with the plumbing.  I have tools to install PEX tubing, so I used that along with a few appropriate copper fittings and a SharkBite fitting to install the water filter on the wall next to the box containing the washing machine valves.  On my first test after installation, I turned the water on too fast and the SharkBite fitting at the cold water inlet to the filter popped off and sprayed everywhere.  It was at that moment I decided to go back, remove the 3 SharkBite fittings, and use PEX, Apollo pinch clamps, and solder-on PEX-to-copper fittings.

It took well over an hour to get the 3 SharkBite fittings off, and my hands felt like mush when done.  I had to go under the house to solder on the copper-to-PEX fittings - I decided to install those on a horizontal section since I didn't trust my uphill soldering. It took about 3 hours or so to get it done; it would have been less, but I had to cut the clamps off my initial PEX work and redo it all. I used new sections of tubing since I wasn't sure whether crimping a clamp in the same spot is OK - better safe than sorry.  With the PEX in place and the threaded connections correctly tightened, the water filter works fine.

A few details on how I mounted the water filter: I took 2 copper-coated straight pipe hangers and bent them on my vise into a stair-step shape (2 bends).  I then enlarged a middle hole in each one to an appropriate size to go over the pipe/connection on each side of the water filter.  I'll have to go back and get some pics at a later date.

Here is the water filter setup below.  I added a couple of bolts to tie the 2 brackets together, which helps stiffen things up and keeps them from moving much. I do need to be a touch careful since the filter housing will swing a bit forward/back - maybe I will add a small attachment from the bracket to the filter top, since the housing has some appropriate holes on top for that.

Another minor but unexpected need showed up after running a few loads of laundry - we now have a water hammer issue.  A quick trip to Lowe's and about $10 for a water hammer arrester should fix that.

The one you see here is: Water Hammer Arrester

 


Another detail desired by the wife was a place to hang empty clothes hangers again.  For this I bought 2 pipe hangers of the metal-loop style, each with a straight section having some holes intended for fasteners.  The fastener holes include some which are 1/4" in diameter.  It just so happens that I had bought a 4' section of 1/4" steel rod which fits in those holes.  So I cut the rod down to about 32" and cut the loops off the pipe hangers, leaving just the straight sections with the small holes. I bent those into an "L" shape and screwed each one, using the long leg of the "L", to the underside of the shelf above the dryer, about 31.5" apart with one end very close to the wall.  I then drilled a 3/32" through-hole in one end of the 1/4" rod and fed the rod through the 1/4" holes in the short leg of each bracket.  The rod end with the 3/32" hole sits near the wall, and a small piece of wire was run through the hole and twisted to prevent the rod from slipping out of the brackets.

The original hangers early in the process.



Not a great pic, but here I am drilling the 3/32" hole in the rod with the drill press.  I used a thin (just under 1/4") piece of scrap wood under the rod in the vise, both to raise the rod to where the relatively short bit can reach and to give the bit something to penetrate into on the far side of the rod.

Here is what the final product looks like with a little paint on things. I could raise it about 1/2", but as is, only 1 style of hanger rubs a little on the dryer.


Hope you enjoyed..

God Bless!
Scott

Thursday, September 17, 2015

Apache Camel - sql component failure query limitation

While implementing a ServiceMix 5.4.0 integration (Camel based), I ran into a problem which I really should check into/report when time permits.

The route is defined like this:
        <route id="processWidgetCreate-route">
            <from uri="sql:{{sql.selectWidgetCreate}}?consumer.onConsume={{sql.markWidgetCreate}}&amp;consumer.onConsumeBatchComplete={{sql.selectWidgetCreate.batchcomplete}}&amp;consumer.onConsumeFailed={{sql.markWidgetCreateFail}}&amp;consumer.delay=60000&amp;maxMessagesPerPoll=100&amp;consumer.useFixedDelay=true"/>
            <to uri="bean:widgetBean?method=createWidget"/>
            <log message="${body}"/>
        </route>
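For context, the {{...}} placeholders in the route resolve against a configuration file. Here is a sketch of what the relevant keys might hold - the select/mark statements below are my assumptions, not the actual integration's SQL:

        # Hypothetical layout of the configuration file backing the
        # {{...}} placeholders; the real statements differ.
        sql.selectWidgetCreate = select * from WIDGET_CREATE where processed = 'N'
        sql.markWidgetCreate = update WIDGET_CREATE set processed = 'Y' \
            where processed = 'N' and ID = :#ID
        # sql.selectWidgetCreate.batchcomplete and sql.markWidgetCreateFail hold
        # the batch-complete statement and the failure update discussed below.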


On a failure condition, SQL similar to the statement below is pulled from the configuration file via the keys indicated in the route above.

        update WIDGET_CREATE
        set processed = 'F', INITIAL_FAIL_TS = NVL(INITIAL_FAIL_TS, SYSDATE),
                    FAIL_CNT = (FAIL_CNT + 1)
        where  processed = 'N' and ID = :#ID

Instead of being executed, the SQL produced a parse error. On review of what was passed to the database, I noted that the "+" in the query above was missing. What appears to happen is that, during parameter replacement or other Camel preprocessing, the "+" is simply removed for reasons unknown.

The work-around for this problem was simply to replace the "+" with subtraction of a negative, as shown below.

        update WIDGET_CREATE
        set processed = 'F', INITIAL_FAIL_TS = NVL(INITIAL_FAIL_TS, SYSDATE),
                    FAIL_CNT = (FAIL_CNT - -1)
        where  processed = 'N' and ID = :#ID

Hope this helps someone else. 

Wednesday, September 16, 2015

Oracle - bitmap index, function based index and use of sign with decode

Just had a quick random thought on a use case I have in an integration.  What I describe here is overkill and inappropriate in my case but it seems interesting enough to describe since it could be useful someday.

The general problem is one where I need to classify data into one of two buckets, where the primary processing operates on one bucket.  The classification is over a field tracking the number of times a particular condition occurs.  The mapping of that count to the buckets, right now, is a simple comparison (x <= y).  Instead of simply embedding the comparison in the selection query and using a standard b-tree style index, I decided to consider what else could be done.

First I found a method of mapping the count that I thought was interesting:

               DECODE(SIGN(10-"CONDITION_CNT"),1,1,0)

In this example, I am mapping condition counts of 9 or less to the "1" bucket and everything else to the "0" bucket. The condition count is always >= 0, therefore the subtraction guarantees a positive result for counts 0 to 9.  The SIGN function returns 1 for any argument > 0, 0 for an argument of 0, and -1 for anything less than 0. The DECODE maps the 1 to 1 and condenses everything else to 0.  And there is the value determining the correct bucket.

After that, I was able to generate a bitmap index over that set of function calls.  A bitmap index was chosen solely due to the low cardinality (only two distinct values).

CREATE BITMAP INDEX CONDITION_CNT_IDX ON SOME_TABLE (DECODE(SIGN(10-"CONDITION_CNT"),1,1,0));
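For Oracle to consider a function-based index like this, the selection query needs to repeat the exact indexed expression in its predicate. A minimal sketch against the same hypothetical table:

SELECT *
  FROM SOME_TABLE
 WHERE DECODE(SIGN(10-"CONDITION_CNT"),1,1,0) = 1;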

No, this doesn't really help me but I still found it interesting. If I had some spare time I would try to do some sort of performance comparison just "to know".  Unfortunately, spare time pretty much doesn't exist.  A big reason this doesn't help me right now is my dataset is not very big.  And even if it was, my integration system would be a performance holdup before query access was ever a problem. 

Hope you found this of some interest.

Scott 



Monday, September 14, 2015

Java enum usages

A major reason to use statically typed languages is type safety: the ability to catch certain errors at compile/development time.  One of the great Java features promoting that is enums.  They are a great replacement for things such as static final ints/strings/etc., and they support functionality well beyond straight replacement of those simple types.

You normally think of enums in a context like the following:


public class EnumExample1 
{
  enum Color {RED,  YELLOW, GREEN};
 
  public static void main(String[] args) 
  {
    Color c = Color.RED;
    EnumExample1 example1 = new EnumExample1();
    example1.process(c);
  }
 
  void process(Color color)
  {
    switch(color)
    {
      case RED:  
        System.out.println("Stop");
        break;
      case YELLOW: 
        System.out.print("Caution - slow down");
        break;
      case GREEN:   
        System.out.println("Go");
        break;
    }
  }
}

That is well and good, but you can do more; you can have the enum implement interfaces or even add methods to individual enum values.

/**
 * Note: Part of an example program for my kids to help them learn programming.
 *
 * Interface for gaining access to the value or values assigned to a 
 * playing card.
 *
 */
public interface CardValIntfc 
{
 public int[] getNumericVals();
}


/**
 * Simple representation of a playing card's "value".  
 *
 */
public enum Value implements CardValIntfc 
{
 Ace(1,11),
 Two(2),
 Three(3),
 Four(4),
 Five(5),
 Six(6),
 Seven(7),
 Eight(8),
 Nine(9),
 Ten(10),
 Jack(10),
 Queen(10),
 King(10);
 
 Value(int tmpVal)
 {
  this.val = new int[]{ tmpVal};
 }
 
 Value(int tmpVal1, int tmpVal2)
 {
  this.val = new int[]{ tmpVal1, tmpVal2};
 }
 
 @Override
 public int[] getNumericVals()
 {
  return val;
 }
 private final int[] val; 
}

Enum constants are effectively immutable and created once at class load time, so you can either bake in static values as shown above or, with care, load data from properties files or even a database.
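Here is a minimal sketch of the properties-file approach - the Limits enum, the limits.properties resource, and its keys are all hypothetical:

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public enum Limits
{
  MAX_RETRIES,
  TIMEOUT_MS;

  // Loaded once when the enum class initializes; limits.properties is a
  // hypothetical classpath resource with lines like "MAX_RETRIES=5".
  private static final Properties PROPS = load();

  private static Properties load()
  {
    Properties p = new Properties();
    try (InputStream in = Limits.class.getResourceAsStream("/limits.properties"))
    {
      if (in != null)
      {
        p.load(in);
      }
    }
    catch (IOException e)
    {
      throw new ExceptionInInitializerError(e);
    }
    return p;
  }

  public int intValue()
  {
    // Falls back to 0 if the key is missing from the file.
    return Integer.parseInt(PROPS.getProperty(name(), "0"));
  }
}

Calling Limits.MAX_RETRIES.intValue() would then return whatever the file provides.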

I find increased usefulness when I work with multiple application code bases and have shared code needing to process enums, but the set of values differs (slightly) between the code bases. Below is a description of the progressive changes from a simple enum to something more flexible.

A real example I have is where I represent various databases as distinct enum values and have a bunch of non-database-specific code which simply needs an enum value to determine what database to use.  Initially, just creating a single enum in the shared code worked within a single application.  A trade-off occurs when you move to multiple applications, though.  The enum contains values for all possible databases whether the application needs them all or not; a manufacturing app, say, may not need to know about HR. Also, if you need to add a new database, you have to worry about mixing utility jars, which may break if they aren't consistent with the enum configuration they were originally compiled against.

So initially, an enum representing databases might be made up of values like these:

enum Database 
{
  Finance,
  Service,
  HR,
  Manufacturing
}

To get around this, instead of using an explicit enum in the various application-specific and shared APIs, you use an interface and combine it with enums defined in each application.


/**
 * A very basic interface; nearly just a "marker interface"
 *  but could provide more functionality if needed.
 */
public interface DatabaseIntfc  
{
 String getEnumName();
}

So the above might reside in a jar like db-common-utils.jar. If you have, for example, three department-type web applications, then you would include that jar in each WAR file. If the source for all three web applications were in separate source trees, then you would likely define a Database enum in each source tree, in the same package, but only include the particular values for the databases you need to reference.


enum Database implements DatabaseIntfc
{
  // Only 2 DB systems in use in this application.
  Service,  
  Manufacturing;

  @Override
  public String getEnumName() 
  {
    return this.name();
  } 
   ....
   // utility stuff
 }

Then in each application's code, you can call the shared utility code but pass the enum constant you are interested in. This might look something like:

        OurDBUtil.hasGrant(Database.Manufacturing, "all_users", "admin");
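The shared utility itself accepts the interface rather than any one application's enum. A sketch of what that might look like (OurDBUtil and hasGrant are just the hypothetical names used above; the lookup details are omitted):

public class OurDBUtil
{
  /**
   * Checks whether the given user holds the given grant in the database
   * identified by the passed-in constant. Any application's Database enum
   * works here since the parameter is typed to the shared interface.
   */
  public static boolean hasGrant(DatabaseIntfc db, String user, String grant)
  {
    String key = db.getEnumName();
    // ... resolve connection settings for 'key' and query the grant ...
    return false; // placeholder so this sketch compiles
  }
}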

With some creativity, it is pretty easy to come up with ways to put these ideas to work in beneficial ways.

I have been focusing mainly on the benefits of using enums, but there is also a sort of downside - not with enums themselves, but care must be taken with their usage.  An example which has bitten me: with a big ERP system, the end users said, "Here are the ONLY values (5) we will ever use for field XYZ."  What about the other 4 values which are valid for the field, I asked?  "We don't ever need to use those" was the reply.  In that case, an enum appeared to be a great way to go - read the field's text values and directly convert them into enums (or specify them directly where needed) and pass them around without worry of some textual typo occurring. This is great until someone puts unsupported values into the database, at which point things go south quickly when the code can't convert the text to a known enum value.  A simple text field may have been more resilient to errors in this case.  Another option would have been to create enums for all *possible* values and then do some semi-intelligent, and certainly more graceful, error handling if unexpected values were found at runtime.
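A minimal sketch of that more graceful option (the enum name and values here are hypothetical):

public enum XyzValue
{
  ALPHA, BETA, GAMMA, DELTA, UNKNOWN;

  /**
   * Converts raw field text to an enum constant, mapping anything
   * unrecognized to UNKNOWN rather than letting valueOf() throw.
   */
  public static XyzValue fromText(String text)
  {
    if (text != null)
    {
      for (XyzValue v : values())
      {
        if (v.name().equalsIgnoreCase(text.trim()))
        {
          return v;
        }
      }
    }
    return UNKNOWN;
  }
}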

Certainly, the enum usages here could be replaced by simpler ideas, such as regular classes and maybe Spring-bean (or CDI) style singletons.  Which is better?  It depends... on too many things to make a blanket statement.  Personally, I like to use enums when it makes sense, but there are times when an alternative turns out better.  So plan on looking at application needs and other criteria to determine the best approach for the current and possible future circumstances.


Hope you found this interesting and maybe helpful.

God Bless!
Scott




Saturday, September 12, 2015

Active Directory / Oracle - time stamp handling dilemma

I recently received another last minute development request.  I'm going to be a little bit vague on some details on purpose - some things can't be shared.

The general problem to solve is: on a particular administrative action, a user must complete a specific activity within a particular time frame.  If that activity isn't completed, then some data is manipulated to force the user to complete the activity in a timely fashion.  If the activity was completed, then the flags regarding the condition are cleared.

On a first pass through prototyping a possible solution, I recognized it isn't quite as straightforward as hoped.

In this case, we handle several pieces of information from different sources (Oracle and Active Directory). One item is an Oracle date/time (SYSDATE) generated by the administrative transaction.  We save that "transaction timestamp" in Oracle along with a "to be done by" date. At the same time, an Active Directory field is indirectly updated to the equivalent of "now". This part of the process works OK, and there are no real alternatives available at this time.

At this point in the process, an integration runs which looks for administrative transactions that have passed or are at the "to be done by" date, and which therefore should be checked to verify the user completed their activity.  This involves comparing an Oracle date/time stored in Oracle against Oracle SYSDATE - which works well enough and has no issue.

Next, we get the user-specific last transaction timestamp from Active Directory, which is represented there as a count of 100-nanosecond intervals since midnight, January 1, 1601 (UTC).  We then normalize the Oracle transaction timestamp to the same representation as the Active Directory timestamp.  In a perfect world, if the Active Directory timestamp is newer than the Oracle transaction timestamp, then the user completed what was required and we clear flags and complete the transaction. Otherwise, we flag the user to complete their task.
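A minimal sketch of that normalization, assuming the Oracle date/time arrives in the integration as a java.time Instant (the class and method names here are mine):

import java.time.Instant;

public final class AdTimeUtil
{
  // Number of 100-ns intervals between 1601-01-01T00:00Z and 1970-01-01T00:00Z
  // (11,644,473,600 seconds).
  private static final long EPOCH_DIFF_100NS = 116_444_736_000_000_000L;

  /**
   * Converts an Instant (e.g. an Oracle DATE read through JDBC) to
   * Active Directory's 100-ns-since-1601 representation.
   */
  public static long toAdTimestamp(Instant instant)
  {
    long millis = instant.toEpochMilli();
    return millis * 10_000L + EPOCH_DIFF_100NS; // 10,000 100-ns units per ms
  }
}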

There are 2 basic problems, though: (1) there is an inherent difference between the Oracle date/time and the Active Directory timestamp, possibly due to one or more causes [activities are only semi-coordinated across servers, and there are differences in available precision - an Oracle DATE only resolves to whole seconds, while the AD timestamp resolves to 100ns]; (2) there can be minor differences between system times even when using NTP.

The result of these issues is that, in some common circumstances, a user is determined to have completed the transaction merely because of the time differences that occur when activities are serialized across different systems, each using its own clock.  This I can easily see in the test data I generated.  I have not knowingly run into an issue with differences in the actual clocks in this current situation, but we have had previous problems with clocks being out of sync.

The "cost" in this situation is significantly higher than desirable if users are determined to be "incomplete" when they actually are "complete".  On that same note, for other reasons it is in the organizations best interest to be as accurate as reasonable. 

[edit 2015/10/03]
Sourcing the time stamps only from AD isn't possible - initially I thought maybe it could be. The final solution isn't too hard to implement.  First, I had to determine how close my times had to be to meet business needs.  In this case, I determined that the one affected use case would be fine with 10 seconds of accuracy.  The way I implemented that was to take the transaction time and the activity time and subtract them; taking the absolute value of that difference and comparing it against 10 seconds tells me whether the user met the timing requirement.  Problem solved.  With the 10-second value externally configurable, I can easily update the behavior as business requirements change.
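Here is a minimal sketch of that comparison (the class and method names are mine; both timestamps are assumed to already be normalized to AD's 100-ns units):

public final class ToleranceCheck
{
  /**
   * Returns true when the activity time is within the configured tolerance
   * of the transaction time; both timestamps are in AD 100-ns units, and
   * toleranceSeconds would come from external configuration.
   */
  public static boolean withinTolerance(long txAd100ns, long activityAd100ns,
                                        long toleranceSeconds)
  {
    long diff = Math.abs(activityAd100ns - txAd100ns);
    return diff <= toleranceSeconds * 10_000_000L; // 10^7 100-ns units per second
  }
}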