I recently spoke with someone who works for an energy company in the Lansing, MI area. This person had a very interesting perspective on the software the company uses, and it boils down to the seemingly never-ending "buy versus build" debate. My take on the whole debate is, "If it already exists and is not going to break the piggy bank, buy." I will give my reasons in a moment.
The energy company I just mentioned has one of the most head-scratching policies I have ever heard of. It took me almost five hours of pondering and a two-hour nap to figure out what logical thinking there was behind it. Basically, the energy company likes to build its own software when it has the resources to do so. However, when it buys other software packages, it requests only portions of those packages. This really frustrates the employees in the field who are trying to use the partial package and having it crash constantly.
On the surface, my first thought was, "You have got to be kidding me." However, this is no joke, and it's nowhere near April. After scratching my head over the chain of events that could lead to such a policy being put in place, I believe I have come up with something other than incompetent management. Most managers are generally bright people - they don't get put there without a reason. I'm sure there are the occasional dim bulbs, but policies always have a reason for existing, even if only a legacy one. This is why every company should have a policy of allowing all employees to question why a policy exists and get a sound answer without beating around the bush - even if it means digging through archives of data to find it. Companies with that policy end up more streamlined (i.e., bottom-line friendly) than competitors without it.
My best guess is that the company has a policy of making code modular when building in-house applications. A few years back, when writing modular code was a big deal, the idea was that if code was written modularly, it was reusable. In theory, it seems logical to take that one step further and reuse the code to create new applications. In practice, any code more than about a year old should be considered non-modular. The ideal falls apart when maintenance teams come through the code and add quick fixes (patches). The end result of most projects after 5-7 years (before they finally get scrapped) is the biggest spaghetti code base this side of Xylandricon. The idea of buying a partial build of a complete package is also flawed. Imagine for a moment that you receive a purchase order from a company for 25% of your main software package, with the company specifying which pieces it wants. You will, of course, take the money, but you will most likely offer a no- to limited-support contract and, since time is money, you will not invest in a QA cycle. You will simply throw a developer at the project, tell them which components to put in, have a build two or three hours later, test it quickly to make sure nothing seems broken, and ship it off. The customer integrates the now-broken application into its processes, and the result is a lot of employees who are unhappy with the people who made the purchase decision because they are stuck using a nearly worthless piece of software.
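To make the spaghetti-code point concrete, here is a minimal, purely hypothetical Python sketch. The class and field names are mine for illustration, not anything from the energy company's systems. The original module depends only on its own inputs; the patched version reaches directly into another subsystem's internals, and a few years of fixes like that leave nothing reusable on its own.

# Hypothetical illustration only - not from any real code base.

class Billing:
    """Original, modular version: depends only on its own inputs."""
    def monthly_charge(self, kwh_used, rate_per_kwh):
        return kwh_used * rate_per_kwh


class OutageTracker:
    """A separate subsystem the billing module never needed to know about."""
    def __init__(self):
        self.credits = {}  # customer_id -> credit owed


class BillingAfterQuickFix:
    """The same module after a "quick fix": it now pokes directly at
    OutageTracker's internals, so neither piece can be reused alone."""
    def __init__(self, outage_tracker):
        self.outage_tracker = outage_tracker

    def monthly_charge(self, customer_id, kwh_used, rate_per_kwh):
        charge = kwh_used * rate_per_kwh
        # Patch added under deadline pressure: reach into another module's
        # dictionary instead of going through an interface.
        charge -= self.outage_tracker.credits.get(customer_id, 0.0)
        return charge


if __name__ == "__main__":
    tracker = OutageTracker()
    tracker.credits["cust-42"] = 15.0
    print(BillingAfterQuickFix(tracker).monthly_charge("cust-42", 500, 0.12))  # 45.0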
The problem here is that the company also maintains its own set of programmers for its systems. It assumes that what applies to its programmers applies to programmers at other companies. That assumption is false. I have discussed the importance of code ownership before, and by requesting only pieces of an application, the buyer tells some programmers, indirectly through the purchase order, that their code is worthless. The vendor will, of course, quickly fill the purchase order, but won't be happy about doing it because some developers will get that "left out" feeling. So the order will be filled professionally and sloppily at the same time. A company that frequently builds its own software is often not paying much attention to future bottom-line costs. When building a project, many managers only see the short-term costs. So a project that appears to cost $20,000 to build actually costs more like $50,000 over its lifetime; the extra $30,000 is maintenance. Roughly 60% of a project's lifetime cost is in intangibles, consisting mostly of maintenance. It is no wonder a company like Microsoft does not like maintaining Windows - it costs a small fortune to produce patches and updates for the mass market.
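As a back-of-the-envelope check on those numbers, here is a tiny Python sketch. The 60% maintenance share is simply the figure from the paragraph above, treated as an assumption rather than a verified industry constant.

# Rough lifetime-cost estimate using the figures from the paragraph above.
# The 60% maintenance share is an assumption, not an industry-verified number.

def lifetime_cost(build_cost, maintenance_share=0.60):
    """Total cost if maintenance_share of the lifetime cost is maintenance
    and the remainder is the initial build."""
    return build_cost / (1.0 - maintenance_share)

build = 20_000
total = lifetime_cost(build)   # 50,000
maintenance = total - build    # 30,000
print(f"build ${build:,.0f} -> lifetime ${total:,.0f}, maintenance ${maintenance:,.0f}")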
As I have already said, my take on the whole buy vs. build debate is, "If it already exists and is not going to break the piggy bank, buy." Given this story, I need to modify it a bit: "If it already exists and is not going to break the piggy bank, buy the whole package." According to the person I spoke to, policy seems to be changing, sluggishly, toward that position. Those people should be reading this if they aren't already.