EDITORIAL
February 24, 1999 VNN3145
Y2K - Big Problem Or Not?
BY AGRAHYA DAS
EDITORIAL, Feb 24 (VNN) I have a couple of answers to this.
I've been a programmer for 18 years, including being head of R&D at a small but well-known software company. My first take on Y2K was that (1) it's over-hyped, and (2) it's only a problem for businesses like banks that rely on millions of lines of stone-age COBOL code written back in the '70s, before there was Visual Basic.
It's also a fact that Y2K efforts are keeping lots of people happily employed.
However, I no longer believe it's all exaggeration. Conservatively, even the Red Cross (not an alarmist organization) suggests being prepared for a week or so without heat and food. At the other extreme are the guys who want to stock up for a year or more and think the world will suddenly be like the set of "Mad Max" (a post-apocalyptic movie from the early '80s).
Here's why I no longer think this is only hype. This is addressed to techie types, who are the ones most inclined to feel the problem is over-hyped.
Consider that you've written a big software program including a few million lines of code.
Chances are astronomically high that it has at least one bug in it. I have worked on a lot of software and have seen some very intensive efforts go into testing. Testing is an attempt to smoke out potential problems and fix them before shipping the software to the customer. But despite good testing I have never seen a case where all possible scenarios were covered, all defects were found, and all defects were fixed.
This is not something I made up. Look at the standard literature on QA and testing methodologies, and you'll find metrics like the "defect arrival rate" used to determine when a product is ready to ship. In other words, when the rate at which new defects are being found slows down, that is taken as an indicator of the product's stability. If no defects are being found at all, either you aren't testing well enough or the product is extremely simple by today's standards.
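For the techie readers, here is a minimal sketch of how a defect-arrival-rate ship gate might look. The function name, the threshold, and the weekly counts are all my own invented illustrations, not anything from a real QA tool: the point is only that "ready to ship" means the rate of newly found defects has leveled off, not that it ever reaches zero.

```python
def ready_to_ship(weekly_new_defects, threshold=3):
    """Crude ship-readiness check based on the defect arrival rate.

    weekly_new_defects: counts of newly found defects per week, oldest
    first. We call the product 'stable' only when the last two weeks are
    both at or below the threshold AND the trend is not rising.
    """
    if len(weekly_new_defects) < 2:
        return False  # not enough testing history to judge stability
    prev, last = weekly_new_defects[-2], weekly_new_defects[-1]
    return last <= threshold and prev <= threshold and last <= prev

print(ready_to_ship([40, 25, 12, 6, 2, 1]))  # rate tapering off -> True
print(ready_to_ship([40, 25, 12, 2, 1, 9]))  # defects spiking again -> False
```

Note that even when this returns True, all it tells you is that you've stopped *finding* bugs quickly, not that none remain.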
The interesting implication for you non-techie readers who've gotten this far is that all software products not only ship with bugs, but the people who write the software know about some of those bugs. That's if they're good. If they didn't do a good job, the bugs are still there but they don't know about them. You might ask, "Why not fix all the bugs?" but any software engineer worth his or her salt knows that fixing bugs is one of the ways new bugs get introduced.
All right, what does this have to do with Y2K? Consider the Y2K event horizon as a new type of test, an event which many, many software programs have never been subjected to. In many cases, we know the software is going to fail, because (for example) sufficient precision was not provided for the year.
What we do not know is what the failure mode will be. Consider an embedded chip used in a switch. It handles dates as MM/DD/YY. When it rolls over from 12/31/99 to the next day, it may go to 01/01/00 and everything will be OK except calculation of the day of the week. But it's also possible that the folks who burnt the logic onto this chip or PROM back in 1979 didn't test for this rollover. Jeez, that was almost 21 years ago. Maybe the failure mode is that the chip just plain locks up.
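The classic two-digit-year bug described above can be sketched in a few lines. This is my own toy illustration, not the actual firmware of any chip: I'm simply hard-wiring the century to 19xx, the way much old code did, and watching what happens to the rollover and to day-of-the-week calculations.

```python
import datetime

def chip_date(mm, dd, yy):
    """Interpret a MM/DD/YY date the way a 1970s-era chip might:
    only two digits of year are stored, and the century is assumed
    to be 1900."""
    return datetime.date(1900 + yy, mm, dd)

# 12/31/99 is interpreted correctly, a Friday in 1999:
print(chip_date(12, 31, 99))                 # 1999-12-31
print(chip_date(12, 31, 99).strftime("%A"))  # Friday

# ...but the next day, 01/01/00, silently becomes January 1, 1900:
rollover = chip_date(1, 1, 0)
print(rollover)                  # 1900-01-01
print(rollover.strftime("%A"))   # Monday -- but Jan 1, 2000 was a Saturday
```

So even in the benign case where nothing locks up, every date comparison, interval calculation, and day-of-week lookup downstream of this value is now silently wrong by a century.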
Now this is a simple example, but consider if this embedded controller is part of a large regional power grid. Power grids rely on precise timing (in fact, this is one use of atomic clocks with nanosecond precision). If our controller chip is controlling a switch at a substation, and a large number of such chips fail at midnight, 12/31/1999, other firmware and software components that try to communicate with it may find that it isn't working as expected. To avoid a fire or explosion or other dangerous situation, part of the grid shuts down.
What we have is a relatively simple and benign event, the failure of a single embedded controller. However, combine thousands of such simple events and the cumulative effect can become quite significant. There is a cumulative aspect and a combinatorial aspect.
It becomes increasingly difficult to test all possible combinations of failure modes as the complexity of a system increases. Why? Because many failure-prone components, each with several possible failure modes, combine to produce an astronomically large number of possible failure scenarios.
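A quick back-of-the-envelope calculation shows why "astronomically large" is not an exaggeration. The numbers here are purely illustrative (my assumption, not measured data): with n components, each in one of m states (working, plus m-1 failure modes), the number of distinct system-wide scenarios is m to the power n.

```python
def scenarios(n_components, modes_per_component):
    """Count the distinct system-wide states if each of n components
    can independently be in any of m states (working or failed in
    one of m-1 ways)."""
    return modes_per_component ** n_components

print(scenarios(10, 4))   # 1,048,576 -- already too many to test one by one
print(scenarios(100, 4))  # roughly 1.6e60 -- astronomically large
```

Ten components already give you over a million combinations; a hundred give you more combinations than you could enumerate with every computer on Earth.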
OK, so what's the worst case? The lights go out for a few days or a week or so, and the power company scrambles its linepersons to go out and deal with the problems. One might argue that it couldn't be as bad as the ice storms in Quebec that downed high-voltage power lines and left some people without power for as long as 2 weeks.
Beyond the fact that loss of power means loss of heat for most people, which in the middle of winter can be a serious problem, the outage "trickles down" into other problems.
Consider a nuclear power plant.
Nuclear power plants are designed and built with paranoia. They are made to be redundant and fail-safe. Yet at Three Mile Island (a near disaster), at Chernobyl (a huge disaster), and in more recent events in Scotland, one or more systems failed.
Nuclear power plants require electricity, which they get from the power grid, but they also need backup generators so their control systems can shut things down safely. If a reactor's coolant pump fails, for example, the reactor needs to shut down or it may melt down.
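For the techies, the fail-safe principle just described can be sketched in a few lines. This is a deliberately cartoonish toy of my own invention, nothing like real reactor control software: the design point it shows is that when a critical input can't be confirmed good, the safe action is always to shut down, never to keep running.

```python
def control_step(pump_ok, grid_power_ok, backup_generator_ok):
    """One cycle of a toy fail-safe control loop: any unconfirmed
    critical condition resolves to shutdown, not to continued operation."""
    if not (grid_power_ok or backup_generator_ok):
        return "SCRAM"  # no power for the control systems: shut down
    if not pump_ok:
        return "SCRAM"  # coolant pump failed: shut down rather than melt down
    return "RUN"

print(control_step(True, True, False))    # grid up, pump good -> RUN
print(control_step(False, True, True))    # pump failed -> SCRAM
print(control_step(True, False, False))   # no power at all -> SCRAM
```

The Y2K worry is precisely that the sensors and timers feeding inputs like these were never exercised across the century rollover.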
Any software engineer knows that the only way to fully test something is to run it through all the possible scenarios an end-user might come up with. Nuclear power plants rely on large software systems, also designed in a paranoid, fail-safe manner and subject to rigorous formal testing and quality assurance procedures. But the designers of that software may not have simulated the Y2K rollover, and probably not in conjunction with other types of failures. Add to that the human error factor.
So, in short, there's an increased chance that at some of the 400-odd nuclear fission power plants currently in operation, systems will fail on or around the Y2K rollover.
So perhaps we're only increasing the normally slim chance of having another Chernobyl somewhere in the world by a factor of 10 or so. Chances may be higher in the former Soviet Union, where equipment may not be as well maintained in a sagging economy, and where staff, who may not be well paid (or paid at all), may not be highly motivated (who can blame them?).
OK, let's just say that the Y2K rollover is like a new function that has never been tested on hundreds of thousands of really big and complex software programs comprising (altogether) billions of lines of code. Chances that nothing will fail are nonexistent.
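That "nonexistent" is a matter of simple probability, not rhetoric. With invented numbers for illustration (my assumption, not a measurement): even if each of N programs independently has only a tiny probability p of harboring a Y2K fault, the chance that nothing at all fails is (1 - p) to the power N, which collapses toward zero as N grows.

```python
def p_no_failures(p, n):
    """Probability that none of n independent systems fails, if each
    fails with probability p."""
    return (1 - p) ** n

# One-in-a-thousand fault rate across 100,000 programs:
print(p_no_failures(0.001, 100000))  # effectively zero
```

Even with a fault rate of one in a thousand, the chance of a completely clean rollover across a hundred thousand programs is smaller than one in a trillion trillion trillion.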
Chances that nothing major will fail, either individually or in combination with other failures and the human error factor, are extremely slim.
The extent of the failures is what's hard to predict. It's also hard to predict what level of remediation will be needed; in other words, how much effort, for how long, will be needed to get critical systems "back on line"?
I think it's more likely Earth gets hit by a comet or meteor than the Y2K rollover spells the end of modern technocracy. More likely there will be numerous annoyances and some very serious problems. It would be wise to be a little bit prepared.
It's also possible that people panic in response even to relatively small disturbances, and the power-hungry idiots in the U.S. government choose to invoke various mechanisms which have already been surreptitiously put in place (via Executive Orders) which could temporarily curtail some or all the citizens' rights. However, "temporary" can be interpreted to mean "for as long as the state of emergency lasts," and history has shown that "state of emergency" is subject to interpretation. So there is not only a technical aspect to the Y2K problem but social and political aspects as well.
What can we do? Can we prepare for every eventuality? No. Can we protect ourselves from thieves and rogues, either openly engaged as such or in the guise of government officials? No. We can try to be prepared, but can we protect ourselves from birth, death, old age and disease? Of course not.
But if we are engaged in devotional service, not even death can stop us. When Srila Narayana Maharaja was at my house last summer, one devotee came and asked him about all the earth changes that some people say are going to take place. Many people are talking about heading for the hills, moving to high ground like Colorado, etc.
Maharaja's response was essentially "not even death can stop your bhajana."
The problems may come more from lust, anger and greed in the hearts of people.
Under the influence of these three gates to hell, what is not possible?
I think that it is a win-win situation. If we are hearing, chanting and preaching, even if there is some great cataclysm our death will be auspicious and we will have the opportunity to continue our bhakti where we left off in this life. If we survive but society has been shaken up, it will be a good opportunity to preach. Some people will become more animalistic but others will be more ready to hear the message of the Bhagavatam in the absence of the modern equivalent of Rome's "bread and circuses," the television.
If nothing bad happens at all, no big deal.
It would be wise to have water for drinking and bathing for a couple of weeks (four gallons per person per day), a supply of rice and dal, and a portable cookstove (which can provide heat as well as cooking; just be careful of carbon monoxide). Some first-aid supplies, too.
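The water arithmetic, using the article's figure of four gallons per person per day (the household size here is just an example):

```python
def gallons_needed(people, days, gal_per_person_per_day=4):
    """Total water to store, at the rule-of-thumb rate above."""
    return people * days * gal_per_person_per_day

print(gallons_needed(4, 14))  # a family of four for two weeks: 224 gallons
```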
But if we make a huge endeavor to "be prepared" for a disaster that may or may not happen, what will we gain? What's worse, we could lose sight of our real business in this brief span in the human form of life.
Anyway, that's my 0.25... Hope this was relevant to some or all of you.
Vaishnava dasanudas, Agrahya das http://hgsoft.com/agrahya
This story URL: http://www.vnn.org/editorials/ET9902/ET24-3145.html