Last update May 9, 2005

FIELD NOTES

Computers

Business computing becomes more effective and less costly every year... if you let it. For most needs there is an expensive approach and an inexpensive one that will get you the same result. Quite often the inexpensive approach is also simpler, and can be completed in a fraction of the time the costly route takes. Most projects can be implemented with resources already on hand, and in steps (saving money as you go) rather than all at once in a big conversion. You will find a number of examples of this on this page. Consider Bill Fleming's implementation of a complete field and home office scanning solution, in a few months, using existing computers and adding only a few thousand dollars' worth of scanners. You could accomplish the same thing with a DB2 conversion and a million dollars of process control software. Not quick or cheap, but a fine system. The bias on this page is obviously toward the less expensive and quicker approach.

Another example is the supposed benefit of moving from mainframe to client/server architecture for operating or policy maintenance systems or other applications. There are pros and cons, and if you were forming a new company today, starting from scratch, you might choose that route. The thing is, you aren't starting from scratch. You already have a mainframe doing your work and a number of servers running your LAN and your web site, and those servers are probably already talking to your mainframe. The odds are you can accomplish everything you need with your current setup.


"Legacy systems" is a term that often has a negative connotation. It literally means systems inherited from the prior generations, including all the old style programs that run on the mainframe, written in assembler, COBOL, and other mainframe languages. The implication is that something new has come along that is better, faster, cheaper, or just cool. Replacing existing mainframe systems can be a very bad idea. That usually means changing to a system of networked PCs and servers organized as client/server. That can prove, at best, risky, time consuming, expensive and unnecessary, and at worst, leave you in a perpetual parallel, running the legacy system while you try to get the cool replacement system to do something right.

Most companies have a very capable mainframe computer and a set of "legacy" programs that work. "Legacy" doesn't necessarily mean old. The MF can be the latest model and the programs written yesterday. It just means "the way we used to do it". People talk about changing to client/server systems that run on PCs and networks and the internet, and a few companies have actually replaced their old systems in this manner. It can be made to work, and I don't know of any company that will admit it wishes it hadn't made such a drastic move. However, the truth is that all of the advantages of the PC revolution can be obtained more quickly, and less expensively, by building on what is already in place and creating connections between PC front-end applications and the mainframe and its programs. PCs handle web serving better than mainframes, but they don't handle the massive input/output and heavy data processing of the day-to-day work as well as mainframes do. You need both.

Not everyone agrees that mainframe systems will be around for the foreseeable future. Even if a PC based system is the future, it might be wise to put off the conversion until the users all agree the existing system is running perfectly.

Corwin K. Zass, ASA MAAA MCA, Vice President, Actuarial Risk Consultants, Inc.:
I agree that a mainframe administration system is likely adequate in today's environment (assuming it is set up correctly) but not the system of the future. Today's PCs will only get faster and faster. Moving to a PC server based system would allow the user community to access data at will and not rely on IT folks; setting up new plans and the like will be easier, and logic changes will be done akin to Visual Basic programming or with even simpler Excel functions (in other words, no longer requiring an IT programmer). As an example, I know a company on a PC system that can set up a plan, test it, and launch it in a week, as opposed to mainframe companies which take 3 to 4 times that because it is harder to load rates and alter code to incorporate non-standard administration functionality. They run their IT staff/networking staff at a per policy cost substantially less than $10, which is likely much lower than a mainframe shop's.

The most important aspect of any upgrade, conversion, enhancement, etc. is to ensure that you eliminate the "garbage in, garbage out" scenario. If you fix the foundation of the house you need fewer and fewer people to worry about leaks and the like. A PC based system is not a panacea, but it provides a much more suitable environment for improving things.

In my opinion, the mainframe is the biggest reason the insurance industry (in general) is so inefficient, and with the programming languages of these platforms being taught less and less in school, at some point it will become an even bigger problem.

Replacing your mainframe computer can often be a very good idea, even if you have plenty of capacity in the old one. The cost of your mainframe software far exceeds the cost of your hardware. The pricing of most of what you use follows the IBM model, charging you license fees based upon the MIPS (horsepower) of your mainframe. So if you replace an old 50 MIP machine with a not-so-old 60 MIP machine, your fees should go up, right? Wrong. IBM counts the MIPS according to whatever it wants to sell at the minute. Your new-to-you machine (it seldom pays to buy new) may well be classed in the next lower category for software costs. If your machine is 4 or 5 years old, you may find you can replace it, even at the same MIPS, and save enough on the lower maintenance costs to get a positive payout over staying as you are. The same is true of your DASD. The equipment continues to improve and the maintenance continues to decrease. If you haven't solicited proposals in the last year or two, your current costs are almost certainly too high.

During a discussion of an uninterruptible power supply with Bill Fleming, he commented that the subject mainframe was old, with high power consumption.

[It would] probably be cheaper to buy a new mainframe and smaller UPS. You could probably get a break on maintenance and software charges, and see a big improvement in disk speed. Remember we had that big dual processor 50 MIP mainframe with RAID disk that took 208 volts and a gazillion disk drives. We changed it out for the H30, which is 60 MIPs with 280 gig of disk all in one little box. It runs on 110 and I was able to run a small UPS for $800.
Our software charges dropped by about a third and our maintenance by half, which saved us a lot. I think it was $330,000 over the next 3 years or something like that.

Another thing that accumulates in the computer room is equipment that is no longer in use, or is used for minor purposes that can easily be done some other way. That equipment may still be on maintenance, still using power, still using space. Walk through your computer room and ask about every piece of equipment. If the explanation doesn't sound right, ask for a proposal to get rid of equipment, or to replace it with a better way.

A relational database does not require expensive MF additions like DB2 or conversion to PC based systems. One can be created by loading formatted data from the mainframe to the file server or to individual PCs, and manipulated by off-the-shelf database programs such as Access. If you already have SQL Server, it is more powerful, but it is too expensive if you are just starting to use databases on a server. Great places to start include the policy master file and the agency database. With the size and power of current desktops, the individual user can download whatever data is needed.

A relational database can be extremely valuable for analysis, marketing, and policyholder service, but it is not a reason to spend a lot of money or do a conversion. Think of it as data arranged in tables of columns (fields) and rows (records). Data in different tables relate to one another by having fields in common. That's all there is to it. If you have used Access you have seen it work. While some programming is necessary to turn the mainframe's flat file into a form usable in a database program (e.g. comma delimited), once that download is created the PC data can be refreshed daily by the user, which is more than adequate for most purposes. Check with your power users to see if they have the data they need to run queries.
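Here is a rough sketch of the idea in Python: load a nightly comma-delimited policy extract into a local database and run the kind of query a power user wants. The file name and field layout are invented for illustration; your extract will differ.

import csv
import sqlite3

conn = sqlite3.connect("policies.db")
conn.executescript("""
    CREATE TABLE IF NOT EXISTS policy
        (policy_no TEXT PRIMARY KEY, plan_code TEXT,
         agent_no TEXT, annual_premium REAL);
    CREATE TABLE IF NOT EXISTS agent
        (agent_no TEXT PRIMARY KEY, name TEXT, region TEXT);
""")
# The agency database would be refreshed the same way; two sample rows here.
conn.executemany("INSERT OR REPLACE INTO agent VALUES (?, ?, ?)",
                 [("A100", "Smith", "EAST"), ("A200", "Jones", "WEST")])

# Refresh the policy table from last night's download (hypothetical layout).
with open("policy_extract.csv", newline="") as f:
    rows = [(r["POLICY_NO"], r["PLAN_CODE"], r["AGENT_NO"],
             float(r["ANNUAL_PREM"])) for r in csv.DictReader(f)]
conn.execute("DELETE FROM policy")
conn.executemany("INSERT INTO policy VALUES (?, ?, ?, ?)", rows)
conn.commit()

# The kind of ad hoc query this makes possible: premium in force by region.
for region, total in conn.execute("""
        SELECT a.region, SUM(p.annual_premium)
        FROM policy p JOIN agent a ON a.agent_no = p.agent_no
        GROUP BY a.region"""):
    print(region, round(total, 2))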

Here is a good background article on relational database management systems, and MS SQL Server in particular.

There are a number of ways a PC can talk to a mainframe. Probably the simplest is a screen scraper. A scraper uses the 3270 screen to populate the PC's GUI screen. In effect the 3270 screen exists behind the GUI screen, and the GUI screen puts the operator's input back through the 3270 screen. This allows all the action to occur on the PC using the standard 3270 transaction. A more sophisticated approach involves interpreting the data stream and presenting it in HTML for a browser display, and converting input back into the data stream to the mainframe. If it is not possible to use a preexisting 3270 transaction, then a program will have to be written for the mainframe to execute to provide the data, and a CGI will be required. For data that does not have to be up-to-the-minute accurate, a nightly download to the server may be more efficient than accessing the mainframe with a CGI during working hours.
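To make the screen scraper idea concrete, here is a rough Python sketch. A 3270 screen is a fixed 24 x 80 grid of characters, so a scraper can pull named fields out of the buffer by position and hand them to the GUI. The field positions are invented; real ones come from the layout of your 3270 transaction.

SCREEN_COLS = 80

# (row, starting column, length) for each field -- hypothetical layout.
FIELD_MAP = {
    "policy_no":  (2, 10, 10),
    "insured":    (4, 10, 30),
    "status":     (4, 55, 12),
    "cash_value": (8, 10, 12),
}

def scrape(screen_buffer: str) -> dict:
    """Extract named fields from a flat 24 x 80 screen buffer."""
    fields = {}
    for name, (row, col, length) in FIELD_MAP.items():
        start = row * SCREEN_COLS + col
        fields[name] = screen_buffer[start:start + length].strip()
    return fields

# Demonstrate with a fake screen buffer containing one field.
buf = list(" " * 24 * SCREEN_COLS)
buf[2 * SCREEN_COLS + 10:2 * SCREEN_COLS + 20] = "L001234567"
print(scrape("".join(buf)))
# Operator input travels the other way: the GUI writes it back into the
# input fields of the same 3270 transaction.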

Generally a CGI (common gateway interface) is thought of as a program or script that executes on a server to create output for a web page. The executable program can reside on a mainframe just as well. A web server program can also reside on the mainframe, although that is still unusual. Whether the choice is a screen scraper, data stream capture, a CGI to the mainframe, or a transfer of a database to a file server, the end result is a PC screen containing a GUI or web page which is identical to what the user would see with a pure client/server system. Once data is available from a web server in HTML, it can be read with a browser anywhere, so remote access by policyholders and agents over the internet works just like local access through the LAN.
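For the flavor of it, here is a minimal CGI program in Python. The web server passes the query string in an environment variable and sends whatever the program writes to stdout back to the browser. The lookup function is a stand-in for the mainframe transaction or the nightly download described above.

#!/usr/bin/env python
import os
from urllib.parse import parse_qs

def lookup_policy(policy_no):
    # Stand-in: in practice this would call the mainframe program
    # or query the nightly download on the file server.
    return {"policy_no": policy_no, "status": "In force"}

params = parse_qs(os.environ.get("QUERY_STRING", ""))
policy_no = params.get("policy", ["unknown"])[0]
data = lookup_policy(policy_no)

# Everything written to stdout after the headers is the web page.
print("Content-Type: text/html")
print()
print("<html><body><h1>Policy %s</h1>" % data["policy_no"])
print("<p>Status: %s</p></body></html>" % data["status"])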

Mergers and acquisitions are bad for computer operations. They lead to the dreaded C word, conversions. Worse, if the company doesn't accomplish conversions as fast as it accomplishes acquisitions, it will find itself running multiple inherited policy maintenance systems, or even multiple operating systems on multiple mainframes in multiple locations. If you are more than one conversion behind, the chore feels insurmountable and the computer folk will start recommending new multimillion dollar solutions, sometimes hardware, more often a new administration system. The question you have to ask is "why is it easier to convert all of our inherited admin systems to this new system than it would be to convert them all (minus one) to a system we already own?".

In concept a conversion is pretty simple. You take the data from a field in the old system and put it in the corresponding field in the new. The old system and the new system have to accomplish the same things, such as billing premiums and tracking values, and they do it pretty much the same way. Put the data for a whole life policy into a different system and it will handle the new whole life just like it handles its own whole life policies. And therein lies the rub. That may not be the way the old system did it.

Let's ignore the sort of problems created by management, such as demands that reports look just like those that came out of the old system. Those are artificial problems there is no point in discussing. There are plenty of real problems to deal with. The old system may have processing rules for some aspect of a particular policy that are different from those of the new system. If you know about it, you can choose whichever rule is preferable, or define a different block, keeping the two rules. You might catch most of these if someone who knows the old system intimately is working with someone who knows the new intimately, and the user looks at a lot of test runs. However, it is impossible to get them all. Generally these problems pop up later, and it is this that gives conversions such a bad name. The solution of course is not to test forever, but to make sure everyone understands, and expects, and is watching for, processing rule problems after implementation.
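A rough sketch of the mechanical half of a conversion: mapping the old system's fields into the new layout is the easy part, and the sketch also flags records where a known processing rule difference needs a human decision. All field names and the rule check are invented for illustration.

# Map old-system fields into the new system's layout, and flag records
# whose processing rules differ so a person decides which rule wins.

FIELD_MAP = {           # old-system field -> new-system field (hypothetical)
    "POL-NUM": "policy_no",
    "PLAN-CD": "plan_code",
    "MODE-PREM": "modal_premium",
    "BILL-MODE": "billing_mode",
}

def convert(old_record: dict) -> dict:
    """The easy part: move each field into its new home."""
    return {new: old_record[old] for old, new in FIELD_MAP.items()}

def rule_conflicts(old_record: dict) -> list:
    """Flag known processing-rule differences for human review."""
    conflicts = []
    # Invented example: the old system billed semiannual policies on a
    # 182-day cycle while the new one bills on calendar half-years.
    if old_record["BILL-MODE"] == "S":
        conflicts.append("semiannual billing cycle differs between systems")
    return conflicts

old = {"POL-NUM": "L0012345", "PLAN-CD": "WL65",
       "MODE-PREM": "42.50", "BILL-MODE": "S"}
print(convert(old))
print(rule_conflicts(old))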


Ideally, you certainly don't want to be running multiple policy administration systems. But suppose you are. This is often the result of acquisitions, and sometimes of perceived inadequacies of the existing system. Back in the 70s most legacy systems couldn't handle universal life expeditiously. More recently there have been problems with variable annuities and their kin. So if you have your policies spread over several systems, do you have to face the next few years doing not much else besides conversions? Maybe not.

The major cost of multiple systems results from users having to shift between systems, follow different input methods, and understand different output presentations. If you can standardize the user interface, you have solved 90% of the problem.

Sure, every time you make a change, your programmers have to do it in several programs, but you may find you can use combined databases to accomplish much of what you need. Mapping data into the proper fields is the easy part of a conversion. It is the differing processing rules that kill you. If you let each system process its own set of policies and combine the output, you have restricted the problem to the IT department. A reserve or cash value may be the result of different rules or calculations, but if the result pops into the appropriate field on the screen used by your policyholder service person, you really don't care which system it came from. Every accounting transaction ends up a debit or a credit to a particular account, a field in your database. It is some work to get the names the same, but it can be a lot easier to deal with the varied charts of accounts at the system level than trying to manually combine different output reports.

If you take this approach, you will probably find it much easier to download multiple databases from the MF to your servers, and do the combined presentations with queries in your database programs, such as Access or SQL Server. These are designed to handle relational databases, and the MF isn't.
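Here is a rough sketch of that combining, done in a PC database rather than on the MF. Each system keeps its own extract table, a mapping table reconciles the differing charts of accounts, and one query presents the combined view. Table layouts and account codes are invented for illustration.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sys_a_gl (acct TEXT, amount REAL);
    CREATE TABLE sys_b_gl (acct TEXT, amount REAL);
    CREATE TABLE acct_map (system TEXT, acct TEXT, std_acct TEXT);

    INSERT INTO sys_a_gl VALUES ('4010', 125000.0);   -- premium income, system A
    INSERT INTO sys_b_gl VALUES ('PRM-IN', 89000.0);  -- same thing, system B
    INSERT INTO acct_map VALUES ('A', '4010',   'PREMIUM_INCOME');
    INSERT INTO acct_map VALUES ('B', 'PRM-IN', 'PREMIUM_INCOME');
""")

-- combined presentation: each system's accounts roll up to one standard name
combined = conn.execute("""
    SELECT m.std_acct, SUM(g.amount)
    FROM (SELECT 'A' AS system, acct, amount FROM sys_a_gl
          UNION ALL
          SELECT 'B', acct, amount FROM sys_b_gl) g
    JOIN acct_map m ON m.system = g.system AND m.acct = g.acct
    GROUP BY m.std_acct
""").fetchall()
print(combined)   # [('PREMIUM_INCOME', 214000.0)]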

To have a painless conversion, you need very detailed communication between the people handling the old system and those handling the new. The most straightforward way is to get the programmers and users who handle the old into the same room with the programmers and users who handle the new.

Ironically, it is usually the companies that have done the worst job keeping up with conversions that are the easiest to fix, because they usually are running multiple systems in the same location. They have on hand people that are familiar with the old and the new, and who are used to talking to each other and working out problems. Each team has usually gained some knowledge of how the other works.

Watch out for turf problems, or the perception among the "old" people that they have no place in the new once the conversion is completed.

There are basically two approaches to a conversion, and selecting the best one can be difficult. One is to prepare a file from the old system which matches the data arrangement of the new, while adding as many of the old system's processing rules as possible to the new. Then at the appointed hour the conversion is done all at once, and the programmers work all night for a number of days just getting the merged system to run. After that is successful, they start chasing the errors in output.

The other approach is to break up the conversion into logical classes of policies and do them one class at a time. You don't move to the next class until the users are satisfied with the previous step. By breaking the process into more manageable steps, there is less chance of a disaster, and more time for learning to occur in the conversion team. The drawback to this approach is the necessity of creating bridges where the output of two different systems is needed to handle a transaction. A simple example is when a policyholder has a policy on each of the systems. Billing can be a problem, particularly if it is on paper, such as monthly direct or list billing. You don't feel that you can send two bills for one month, so you have to create a bridge to bring the data from two systems together in one bill, and when it is paid, to get the paid data split back into two systems.
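A rough sketch of such a bridge: pull the premiums due from each system's extract, group them by policyholder, and emit one combined bill. The same policy list tells you how to split the payment back. Record layouts are invented for illustration.

from collections import defaultdict

# (policyholder, policy number, premium due) from each system's extract.
system_a_due = [("SMITH, J", "A-L001", 42.50)]
system_b_due = [("SMITH, J", "B-U777", 18.00), ("JONES, M", "B-U778", 25.00)]

bills = defaultdict(list)
for holder, policy, amount in system_a_due + system_b_due:
    bills[holder].append((policy, amount))

for holder, items in bills.items():
    total = sum(amount for _, amount in items)
    print(holder, items, "total due:", total)
# When the payment arrives, the same policy list says how to split it
# back into each system's cash-apply transaction.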

If you have previously created a combined user interface, you may not have any bridges to create. You just adjust your queries for each step.

Over the years, PCs have replaced most dumb terminals, even for uses where the PC functions only as a dumb terminal. If you are still running a mix of dumb terminals and PCs, it will save you money to replace the last of the dumb terminals.

There are different views on this. Maintenance on a PC eats up time, while a terminal either works or it doesn't, at which point you throw it away.

On a pure cost basis, the dumb terminal unit has historically cost less than a new PC, but there are control unit issues, as well as the comparison of the license costs. The used and refurbished terminal still costs about half what the basic PC does, but the cost of both is so low ($400 vs $200) that the difference is irrelevant. Waiting to replace terminals until they burn out is probably a false economy. Check the numbers.

There is another factor. You probably don't really have any employees who don't need the extra facilities the PC offers. What about access to your intranet, to email, the internet?

Make sure your power users all have the fastest PCs available, replacing them at least every 18 months. People are expensive, and PCs are cheap.

This includes accountants, programmers, and persons using large databases, such as portions of the master file. Bus speed and HD speed are as important as the processor. While a 20 second wait while a spreadsheet calculates or 1 minute wait for a query may seem cost justified, it breaks concentration and discourages test runs, and so has significant hidden human cost.

There was once a time when it made sense to be stingy with hardware, but that time is long gone. And yet you still see ridiculous situations. In one company programmers making $80,000 were stuck with 5 year old PCs and significant waits on most of what they did. New equipment "wasn't in the budget".

Along those lines, anyone who prints several times a day should have a printer for those short quick prints. A good ink jet costs about $100 today, and it is easy to waste more than that in time trying to share with someone else. This works even if there is a fast laser printer fairly close by. The short jobs for the ink jet, the long ones, or ones requiring somewhat better quality, to the laser. For making a lot of copies, the supply cost will favor the laser printer for the individual, but you should first find out why so many copies are being made.

Dual monitors are mandatory for anyone who views scanned documents or looks at a mainframe screen while entering data in another screen. This will become evident if you watch the operator for a few minutes. Most people working with one screen will take notes from screen 1 before switching to the input screen, or switch back and forth for every field. This is visibly inefficient and can easily waste 5 to 10% of a person's time.

When CRTs first came into common use they were so expensive that anyone who didn't need one full time shared. That was obviously a time waster, but if the wage was $5,000 a year and the terminal cost $6,000, you couldn't supply one to save 10%, or even 20%, of a clerk's time. By the time Windows 98 came along with dual monitor support, the arithmetic had reversed, but few people had room for two monitors on their desk, and the thin screen monitors were prohibitively expensive. Today wages most places run $30,000 or more for a service clerk, and you can get two thin screen 19" monitors and the extra card for less than $1,000. Don't miss this one. It is a no-brainer that is too often missed.

Power users, and probably every other user that does anything but rote work, need computer books. Some need the "Excel made simple" type, and some "Mastering Access". Your web person needs a small library. In my experience the users often don't have even one book, but are expected to learn from the help section, or from whatever someone happens to show them.

Don't leave it to the IT guys to acquire the books for the users. It never occurs to them. You have to create a central place where people can pick up what they need, and keep the book. The first step is to go to the book store and buy 20 or 30 basic books, about 2/3 of the "learn X in a weekend" variety. Then you have to push the books some. Take an Access book to someone you know is using Access on a regular basis. Make sure the web person has solid resources on Dreamweaver or Frontpage, whichever you use, and on PHP or ASPs respectively, and some good stuff on Adobe Photoshop. If you drop by and find these tomes already there, you have an unusual company.

In the absence of a very clear need, the hand-me-down computer released when the power user gets the newest and fastest should go directly to the bottom of the pecking order, usually to a user who either doesn't have a PC, or has one so old it will be retired or used only as a terminal. It should NOT go to the next senior person.

The purpose here is to avoid the chain reaction of moving computers that is costly in both technician and user time. If allowed, the newer computer is a status symbol that will cause a domino effect down the pecking order. Only active intervention by authority can overcome this basic rule of the bureaucracy, and then only sometimes. The desire will always be to improve the speed or capacity of the next slower user's machine, passing that machine to the next, and so on. By the time you are through, you will have spent more on moving than on PCs.

Computer equipment is a great place to waste money. There is the cool factor. It may appear to be cool to have a portable PC instead of a desktop, even if it never leaves the desk (it must be cool; just look at the pictures in an annual report). A portable costs about 3 times what a desktop does, has a smaller everything, and isn't as convenient to use. Even if a person really travels, are they really going to lug a portable just to do email when a BlackBerry would be cheaper and go in a vest pocket?

Computer expense requests tend to buffalo management, as the workings can be mysterious to the non-IT manager. When I joined one company, I found the network people had subwoofers on their PCs, a $5,000 fireproof safe, a $5,000 command center desk on order, and a dandy red box that was supposed to be the "firewall". The status symbol was a $500 UPS, uninterruptible power supply, for the PCs of the more senior or favored workers. Substantial sums had been spent on a software package that purported to make a PC year 2000 compliant, even though the worst that could happen to any PC of decent vintage would require someone to reset the clock. Every computer magazine published was neatly arranged on tables in the network supervisor's office, making it look like a dentist's office. Well, maybe it isn't so hard to see what is going on.

There are pros and cons to allowing everyone on the LAN access to the internet, and reasonable personal use of email. The internet is the universal source for information, e.g. zip codes, spelling, dictionaries, value calculations and so on. Personal email is a lot quicker than personal phone calls, and leaves a trail so it is easier to prevent abuse. But like everything else, someone has to be watching the traffic.

It is pretty hard for someone to misuse the internet if all screens are visible to a passerby or from the door to the office. But what you can't see can eat up your bandwidth: downloading music, listening to streaming audio, and belonging to email lists, to name a few. The network logs downloads, and someone, usually the network supervisor, has to spot check the logs and block sites such as eBay which will create undue volume and wasted time. Employees should be told that reasonable use of email is fine, but that the company is constantly reviewing emails for reasonableness. In one instance, a clerk who had subscribed to a number of lists received 699 non-business emails in a two-week period.
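The spot check doesn't have to be elaborate. A rough sketch, assuming a proxy log with user, site, and bytes on each line (your log format will differ):

from collections import Counter

# Tally download volume by user and list the heaviest users.
usage = Counter()
with open("proxy.log") as log:
    for line in log:
        user, site, bytes_sent = line.split()[:3]   # hypothetical format
        usage[user] += int(bytes_sent)

for user, total in usage.most_common(10):
    print(user, f"{total / 1_048_576:.1f} MB")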

I am a BlackBerry convert, and frankly do not know how anyone who uses email in business gets along without one. The BB goes with you everywhere and is always on. Even when you are in your building you may be away from your desk PC. And it is the best pager in the world. Someone wants to see you or is waiting in your office, you are immediately notified by email. If you don't want to do email 24/7, just ignore it. That is what is great about email. But it never hurts to know what you are ignoring.

A BB is cheaper than a portable PC. Cost for the R957 is $400 if you hunt around on Google. The wireless service is $40 a month. I happen to use Earthlink, but I understand the other services function the same. You can get email even where your mobile phone service is weak. If you are above the first floor in your building you almost certainly will get reception. The only place in the United States where I have not been able to receive email is open road in New Mexico and west Texas. For most of that the cell phone doesn't work either.

Don't buy your BB from RIM, get it from your service provider, or even Amazon, who uses Aether. RIM seems to have given away most of their sales activity and support to the various network providers.

Until you are ready for Linux, you will be using Windows as the operating system on your PCs. Upgrading is expensive, so it is important to order new PCs with the latest release installed. If you stay with Win 98 for the sake of uniformity, you will buy again when you are forced to upgrade. The same is true of MS Office software. Whether you buy licenses or buy the PCs preloaded, choose the latest release available. This is not to say that you ought to upgrade the OS on the machines you already have. There is no bigger waste of money than updating old PCs every time Microsoft brings out a new version.

This is a matter of opinion in most shops, because there is some convenience to have everyone on the same operating system. I think this is highly overrated. Most users will never know the difference, so at most it is a convenience for the PC maintenance person, and a small one at that. If you just buy every new PC with the current OS, it is likely the PC will be outmoded before the original OS it came with is outmoded. Only your power users and the IT types care about the OS, and those are the ones that get the new computers anyway, the power users because they need it, the IT types because they care.

Most companies that update PCs regularly are just about out of Windows 95 machines. Rule: never upgrade all your PCs at once, and never upgrade all your operating systems at once. The same should really apply to the entire Office suite. Most people don't use Access, and the rest of the suite just hasn't changed enough to matter for most purposes. The people who actually exchange files are usually the power users anyway.

Regarding Linux, here is an email conversation, with my question first and Bill Fleming's response following. Bill is a friend and a guru on just about any computer topic.

Q. I was thinking about the PCs that get handed down until they are essentially used only as terminals. Would it work to use Linux on those machines, or for anyone not heavily in need of MS Office? If feasible, it might save having to have licenses for all those machines' operating systems.

Regarding Bill's comment below on the licenses, I reminded him that most companies cannot find proof of all the licenses they have purchased, and are thus vulnerable on that point. The real problem Bill points to is the horsepower needed to run Linux in graphical mode to support 3270 sessions. So if the goal is to run the oldest machines in that mode, the Win95 or Win98 they already have on them is the better bet.

The operating system/server distinctions are blurred by the distribution of packages that contain both, for example Red Hat. Even Win XP pro includes a limited web server on the disk.

Bill Fleming:
I don't understand the license issue. If you have a machine with W98 on it and you already have the license, then you are licensed. Were you thinking of buying a new machine without a license? You can buy a W95 license for $20 or a W98 license for $35. Yes, Linux would be free.
Linux in character mode does not require much horsepower. A Pentium 400 with 64 meg will run nicely. But like Windows, when using graphical X windows (Gnome or KDE) it requires just as much horsepower to be effective. For 3270 sessions you can use x3270, but it is graphical.
There is an Office look-alike called Ability Office which can read and write most MS Word, Excel, and Access files. It is $69 for the full package or $29 per piece. A Linux port is in test. You can download and try it for free. They used to have a promo for $54. There is also "Open Office", which is available for Windows and Linux. Problem is, it is not really MS compatible. It is free.

But now let's look at the whole picture. Put Linux on all client machines. Use Open Office, and Mozilla for browser and email. Run Linux servers with NFS or Samba for file serving, Apache for the web server, qmail or Sendmail for the mail server, LDAP for the address book, and MySQL for the SQL server. No operating system cost, no server licenses, and no client access licenses.


While the intranet is for all, access to the Internet can be a problem if not restricted to certain users or regulated by some method. Work time is the obvious problem, but bandwidth capacity is another.

If it is undesirable to restrict access, blocking certain popular sites can be effective. Someone will have to monitor usage to find the targets. Regarding capacity, employees who would never abuse access intentionally will think nothing of listening to music on the internet. A few of those and you have no bandwidth left.

Off site storage of backup media for the mainframe and the servers is standard practice. It is a good idea to go to the storage site, unannounced, to see if last night's backup, and the night before's, is where it is supposed to be. If the storage site is so far away that you keep putting off sparing the time to go there, you can be pretty sure your IT people are doing the same thing.

You may find the off site storage is not secure, can be destroyed by the same event that gets you, or is more expensive than it should be. Backup media is a lot smaller than it used to be, so you may be paying for more space than you need. The backups might not be there. It is very inconvenient to haul backups to another location after the shift. I have seen backups kept at the primary site and only taken off site once a week. In one instance the EDP manager was keeping them in the trunk of his car. You can avoid most problems by using a fairly close site. A site that is actually used is better than one so far away it would survive a hydrogen bomb on your building.

Computers finally replaced the last of the word processing machines, which had replaced the electric typewriters. But did they? A walk around will usually reveal a number of typewriters. Make sure you are not still paying maintenance on them, along with the ones in storage. The primary function of a typewriter today is to tip you off to the presence of a bad system. Ask why.

Typewriters are just one of many tips to bad systems. Stacks of green bar paper and Rolodexes do the same, as do employees waiting in line for anything but the ice machine. If you haven't already fought the Rolodex battle (most have already passed the "printing reports on two sides of the paper" battle), here is a tip. In one agency department the Rolodexes were still being updated long after all the data was more readily available on the computer. After much argument, a small band of nerds snuck in at night and hid them all. The furor subsided, but we learned that those things are territorial, even though they don't look it.

There are a number of approaches available for printing computer output. The mainframe printer is the fastest, lowest click charge, most versatile printer. But even with the offset facility to make it easy to subdivide stacks for different recipients, output still has to be separated and delivered to the users. Network printers located at the user, even though slower, can be effective in spreading the work load, and they eliminate delivery delays, as the user can take finished copy while the job is still in process.

Watch for artificial requirements that run up the bill. A favorite is preprinted letterhead. Sure, it looks somewhat better, but today's printers can do a very good job printing the letterhead along with the content on blank paper. The recipient never knows that it is not your fanciest embossed letterhead with watermarks and so on. One company that supported three company names and had users on four floors needed three printers per floor, just to load preprinted letterhead. Add the print control boxes to manage the printers, and just maintaining the machines is a problem. All that was eliminated by programming that allowed the network printer on each floor to print the appropriate letterhead on blank stock.
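The programming does not amount to much. A rough sketch: pick the heading block by company code and prepend it to the letter body before the job goes to the floor's network printer (company codes and headings invented for illustration):

# Select the right letterhead by company code and compose it with the letter.
LETTERHEADS = {
    "CO1": "FIRST LIFE INSURANCE COMPANY\n100 Main Street\n",
    "CO2": "SECOND LIFE & ANNUITY\n200 Oak Avenue\n",
}

def compose(company_code: str, body: str) -> str:
    return LETTERHEADS[company_code] + "\n" + body

print(compose("CO1", "Dear Policyholder, ..."))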

Printing material that requires MICR encoding used to be considered a special challenge. Checks are the main type of document which require MICR for the banking system. Prior to the introduction of high speed laser printers that could handle bar codes, MICR was also often used to enable machine reading of the return document on direct notices. As a result, companies often had separate printing hardware containing MICR ink, or even ran MICR ink in their main printer for printing jobs that did not require it. The use of bar codes and the check printing technique detailed below obviate those extra expenses.

To prepare mainframe printed checks containing the required MICR, Mike Dragoo describes the following technique:

The blank check stock is ordered with a preprinted number and a matching MICR line. The preprinted number on the check is reclassified as the "inventory control number". The check number is then the number assigned by the programs and printed on the check. We try to match the check number to the "inventory control number" so that the number we print is the same as the one preprinted on the check and the MICR line. If we get a jam (or the numbers don't match for any reason), we build a cross reference record that tells the computer what the real check number is for a given "inventory control number". It is easy to spot whether the numbers got out of sequence by looking at the last check in the run. If they don't match, you thumb through the stack to find where the mismatch began. When you use this approach, you have to make sure that everyone uses the preprinted number when talking to the bank.
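A rough sketch of the cross reference Mike describes, with invented numbers. A jam spoils one preprinted check, so from that point the program-assigned check numbers and the preprinted inventory control numbers diverge by one:

def build_xref(assigned, preprinted):
    """Return {inventory control number: real check number} where they diverge."""
    return {inv: chk for chk, inv in zip(assigned, preprinted) if chk != inv}

# The jam spoiled preprinted check 1002, so check 1002 landed on stock 1003.
assigned   = [1001, 1002, 1003, 1004]   # numbers the programs printed
preprinted = [1001, 1003, 1004, 1005]   # inventory control numbers / MICR line
print(build_xref(assigned, preprinted))
# {1003: 1002, 1004: 1003, 1005: 1004}
# Everyone quotes the preprinted number when talking to the bank,
# because that is the number in the MICR line.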

Quoting Mike above reminds me that your CIO ought to be programming. If Mike wasn't helping one of his programmers, he was programming himself. If your CIO is spending all his time in meetings, or reporting how other programmers are doing, or estimating how long something is going to take, it is hopefully your fault. I say "hopefully" because it is easy to change your approach. Ask yourself if all these meetings, reports, and estimates you are requesting are really furthering the work, or whether you might just be going through the motions of managing.

If all of this make-work is being created by your CIO, he has probably "grown" beyond day to day programming and needs to spend all his time "managing". If you think that is a good idea, you better check to see if your CFO has outgrown accounting, and your CMO, selling.

How many mainframe programmers do you need? Certainly the multiplicity of the programs you have to maintain is a factor, as is the speed at which you are introducing new products, installing new systems and acquiring new companies. One factor that doesn't matter is the size of the company. In stable operations I have had the best luck with 1 manager, the CIO, and fewer than 10 programmers. Most mainframe operations should be fairly stable today, as most new applications should have a front end that is web based. On the server and web side, even if you are emerging from the middle ages it is hard to see how you can keep more than 6 or 7 busy, and that includes your web designer. If you are so fortunate as to have some of those geniuses that go both ways, mainframe and server, don't count them. Keep as many of that kind as you can. Your CIO needs to be one of the "both ways" types. If all he knows is MF, or vice versa, you are not going to get an effective operation.

Most companies have many more programmers than I am suggesting. I can't guess at what is appropriate in your situation, but I can tell you that you are not going to get twice as much done with a total programming staff of 40 as you would with 20. Unfortunately, if you are so complicated you need 40, you probably get less done. I, of course, believe that there is some magic number of programmers where you can get absolutely nothing done.

In my view the size of an IT department, and particularly the number of programmers, should be a function of the number and variety of the sales distribution channels you are supporting. Everything else a life company does is, or should be, highly standardized, any variations also being a function of the distribution channels.


Mainframes and MF printers are the major lease-or-buy decisions a company makes. It almost never makes sense for a life insurance company to lease anything. Other types of companies can be short of cash, but a life company has plenty of cash. Surplus, not cash, is the issue. A purchase goes on the books as an asset, and if depreciated over the same period as a proposed lease, will generally match or beat the lease's effect on surplus. Extend the depreciation period and the effect is even better. Purchased equipment generally has a longer functional life than the usual lease period.

Even conceding that a rare special deal from a vendor may make a lease viable, there are still many more lease deals than make economic sense. Why is this? It has to be that monthly lease charges are easier to understand, easier to match up with monthly maintenance charges for calculations, and easier to sell to top management. A payment of $5,000 a month for 5 years sounds OK, but a cash outlay of $200,000 sounds like a lot, even though at 6% a purchase will save you about $60,000.
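The arithmetic is worth doing explicitly. Discounting the lease payments at 6% a year (0.5% a month) as an ordinary annuity:

# $5,000 a month for 5 years versus a $200,000 purchase, at 6% a year.
monthly, months, i = 5_000.0, 60, 0.06 / 12

# Present value of the stream of lease payments (ordinary annuity formula).
pv_lease = monthly * (1 - (1 + i) ** -months) / i
print(f"PV of lease payments: ${pv_lease:,.0f}")            # about $258,600
print(f"Saving from buying:   ${pv_lease - 200_000:,.0f}")  # about $59,000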

Sometimes there is also the misconception that you can terminate the lease if the equipment needs replacement. No, you are on for the whole 5 years. Another factor: if you buy you replace when you choose. Lease and you face a specific decision time. It is safe to say that if you have much of your equipment on lease, you have someone in the chain of management that doesn't understand this.