Thursday, May 17, 2012

MS FIM 2010 - the good and the bad

Identity management is an area that has been growing in importance for some time.  There are numerous commercial products and a few decent open source options.  This post is mainly to document the issues we encountered with Microsoft Forefront Identity Manager (FIM) 2010.
  • Support
    • There is very little FIM knowledge available.  You should expect to use MS consultants, and because of the small number who know FIM you may encounter delays.
    • There is some online information available, and I think there is now a course for FIM, but there are substantial training needs just to understand the basics.
    • Premier support and MS consulting do try to do the best they can.
  • Architecture
    • It requires SQL Server
      • If you are not a SQL Server shop and are not prepared to support it, this can be a substantial issue.
    • It is complex 
      • Our implementation has various items coded in C# (by an MS consultant), configuration in the FIM portal, and setup in the FIM sync service.  I would not consider our identity needs very complex, but we do have lots of information to manage.
      • Do you need high availability?  Are you prepared to implement/manage/support SQL Server clustering?
    • Certain processes cannot run concurrently and generate various problems if they do collide.  This turns into a juggling act to handle peak times while providing a fast turnaround on additions/changes.  In the end, large implementations will likely have to accept trade-offs whether they want to or not.
    • It is a somewhat painful fit into our disaster recovery infrastructure.  The root issue may be how the licensing code works in the overall product.  In the end, we are not able to do a real DR test and can only document the steps we think are required.
  • Performance/scalability
    • If you have *lots* of users then performance can be an issue
      • Initial loading of data for hundreds of thousands to around a million users is quite slow
      • If you experience a high rate of provisioning or updating users, you may have some unpredictably long jobs
    • More server cores do not equal improved performance.  From what I can tell, processing is basically single-threaded in certain areas of the application.
    • We have an ERP as a source, which has thousands of tables in its system catalog.  FIM doesn't handle this well at all, likely because it tries to populate drop-down lists and such.  We ended up routing the ERP data through another DB system (providing views of the target tables over database links) so the size of the DB catalog didn't cause problems.
    • The normal response to certain issues is a "full sync".  This is roughly the equivalent of rebooting to fix OS problems.  With a large implementation, this will likely result in processing delays.
  • Stability/Quality
    • We encountered some errors early on which required hacks to work around, and it took substantial time for MS to provide hotfixes.
    • We continue to have some issues in production that have no obvious source and have not been reproduced in non-production.  It is possible that currently unapplied updates and hotfixes may remedy them, but we are in effect a guinea pig for some updates/fixes.

  • Ease of use
    • A GUI is supposed to improve ease of use, but there are items that, if selected (or not selected) at the wrong time, can result in massive delays while FIM processes the mistaken request.
    • The reporting capabilities are horrendous, and MS knows it.  Maybe they will improve in 2010 R2, but I will wait and see.  If you are trying to research a large number of errors, you will find it beyond frustrating that you can't export a list of errors.  In the best case, you have to perform a good number of mouse clicks per entry to cut/paste what you need into Excel, etc.  In the worst cases, there are places where all we could do was take a screenshot or manually transcribe data because it couldn't be selected for cut/paste.  Error messages are poor and, in some cases, misleading.
    • Validation of data is painful.  You are not given (SQL Server) schema information for FIM, so if your primary identity source is a database you cannot simply write utilities to compare sources.  We are implementing custom utilities to try to compare one of our data sources to some of the data that ends up in Active Directory.  So we can compare the beginning and the end, but cannot see intermediate results when they don't match.
  • Organization/Corporate culture
    • If scope is increased beyond what was initially planned and you have hard deadlines, you will likely regret choices that end up rushed and lack enough forethought.
    • If you don't have some (mostly) dedicated staff, then expect a lot of unhappiness.
    • If customers (users of the data) are not fully involved early on to help determine the data needs and validate a data dictionary, expect thrashing later.
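
The custom comparison utilities mentioned under "Ease of use" can be sketched roughly as follows.  This is a minimal illustration, not our actual code: it assumes each system (the ERP source, Active Directory) can be exported to a dict keyed by a common identifier, and the attribute names are hypothetical.

```python
# Minimal sketch of a source-vs-target identity comparison utility.
# Assumes each system has been exported to a dict mapping a common key
# (e.g. an employee ID) to a dict of attribute values.  All names here
# are illustrative, not FIM schema names.

def compare_identities(source, target, attributes):
    """Return (missing, extra, mismatched) between two identity exports."""
    missing = sorted(set(source) - set(target))   # in source but not target
    extra = sorted(set(target) - set(source))     # in target but not source
    mismatched = {}
    for key in set(source) & set(target):
        diffs = {a: (source[key].get(a), target[key].get(a))
                 for a in attributes
                 if source[key].get(a) != target[key].get(a)}
        if diffs:
            mismatched[key] = diffs
    return missing, extra, mismatched

if __name__ == "__main__":
    erp = {"1001": {"mail": "jdoe@example.edu", "dept": "IT"},
           "1002": {"mail": "asmith@example.edu", "dept": "HR"}}
    ad = {"1001": {"mail": "jdoe@example.edu", "dept": "Finance"},
          "1003": {"mail": "bjones@example.edu", "dept": "IT"}}
    print(compare_identities(erp, ad, ["mail", "dept"]))
```

This only compares the beginning and the end of the pipeline; as noted above, the intermediate FIM state remains opaque when the two sides disagree.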

Related commercial tools:
  • http://www.oracle.com/us/products/middleware/identity-management/overview/index.html
  • http://www.quest.com/identity-management/
  • http://www.bmc.com/products/offering/Identity-Management.html
  • http://www-01.ibm.com/software/tivoli/products/identity-mgr/
  • https://www.pingidentity.com/products/pingone/
  • http://www.openiam.com
Related open source tools:
  • http://www.forgerock.org/
  • http://developers.sun.com/identity/
  • http://shibboleth.net/
  • http://www.jasig.org/cas
  • http://www.josso.org
  • http://shiro.apache.org/
  • http://static.springsource.org/spring-security/site/
  • http://openid.net/
  • http://www.sourceid.org/
As a note to self, I should keep an eye on the items at forgerock.org and maybe consider a proof of concept in case FIM doesn't work out.



Monday, May 14, 2012

Gardening - The Hoss

I like to garden, and my family certainly likes the fresh vegetables.  Our garden has been steadily growing in size and is around 25 x 50 feet now.  That may sound like a good size, but it never seems large enough as we find new things to plant.  A downside to the garden is that there is usually a point in the summer where we can't weed for lack of time, and we typically don't keep up with the weeds as well as we would like.  The result is usually an Amazon-like environment covered in morning glory and many other annoying plants.  To help combat the weeds and also make better use of our existing garden space, I purchased a Hoss double wheel cultivator/hoe.

http://easydigging.com/Garden_Cultivator/wheel_hoe_push_plow.html

I have had it for about a week and am quite pleased.  It is lighter than I expected, but with care it should last a long time.  I have tried out the cultivator teeth and oscillating hoe so far and will try the sweeps another day.  The cultivator teeth work reasonably well in our clay soil (at least where we have removed most of the rocks over the years).  The hoe worked well when I tried it (a day or so after rain).  It seems that with regular use they should keep the weeds down in the main aisles, and they are MUCH faster than a plain hand-held hoe.  My comment about improved use of space is due to the fact that we can now put an extra row between existing rows (resulting in a row spacing of about 18 inches).  Previously we spaced rows about 36 inches apart to allow the tiller to run between them.

Another benefit of the Hoss is some good physical exercise - something sorely lacking in my day job.

All in all this was a very good purchase, and I wish I had made it sooner.  I may consider the seeder attachment and the furrowing/hilling plow next summer.  The plow will depend on how our first attempt at potatoes turns out this summer.  The seeder is not really a "need" but may prevent some unneeded back pain; we will have to see what other bills crop up next summer.




Tuesday, May 8, 2012

Jenkins CI SQLTool Integration idea

I find that Jenkins works well for simple batch tasks, but it would certainly be nice if there were an integration with the HSQLDB SQLTool so that a "SQLTool" job type could be defined, perhaps getting connections from JNDI.  Instead of writing Groovy code to execute queries, we could then define the job as a SQLTool-compatible script (either defined directly in Jenkins or specified as a file).  This would cover a good number of our utility batch jobs.
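
Pending a real plugin, the shape of such a job can be sketched outside Jenkins.  This is only an illustration of the idea, not SQLTool itself: sqlite3 stands in for a JNDI-provided HSQLDB connection, run_sql_script is a hypothetical helper, and the statement splitting is naive (no handling of semicolons inside string literals, SQLTool directives, etc.).

```python
# Rough sketch of what a "SQLTool"-style Jenkins job would do: take a SQL
# script (inline or from a file) plus a connection handed to it by the
# container, and run each statement in order.  sqlite3 is only a stand-in
# for an HSQLDB/JNDI connection.
import sqlite3

def run_sql_script(conn, script):
    """Execute each non-empty statement; return rows from the last SELECT."""
    rows = None
    cur = conn.cursor()
    for stmt in script.split(";"):      # naive split; fine for simple scripts
        stmt = stmt.strip()
        if not stmt:
            continue
        cur.execute(stmt)
        if stmt.upper().startswith("SELECT"):
            rows = cur.fetchall()
    conn.commit()
    return rows

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    script = """
        CREATE TABLE jobs (name TEXT, status TEXT);
        INSERT INTO jobs VALUES ('nightly-load', 'OK');
        SELECT name, status FROM jobs;
    """
    print(run_sql_script(conn, script))
```

The point is that the job definition reduces to the script text plus a named connection, which is exactly what a Jenkins job type could expose.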

No time at the moment though.  Someday.