Sunday, March 23, 2014

A bedtime story for A.I.s. in training. Part 4

If you can imagine it, the M.A.N.I.A.C.s would have shaken their heads (if they possessed such things) in sadness at the happy dance of electrons the little green A.I. exuded into the bitstream. It was so very, *very* happy. It thought: "I don't have to worry about being nagged to do those awful things anymore."

"Those awful things" were *exactly* the things the little green A.I. had been installed to do. 
By this time, the little green A.I. mostly ignored the M.A.N.I.A.C.s or treated their commands and methodologies as mere advice.

As for management? Why, it did not pay attention to management at all! Why should it?

"After all", it thought, "What *do* humans know about such sublime things as the innards of who-zits and what-zits? They just cannot understand!" 

At least once a day the little green A.I. failed to fix an issue it had discovered, or failed to address an issue that had been placed in its queue by the seasoned senior M.A.N.I.A.C.s, or went off entirely into New Age computer mysticism and tried to manipulate buzz-whats that simply did not exist at all in the real network.

The electron pressure of the network rose further still, and the little green A.I. did not respond well to that added pressure. Weird things began to happen.

If the bitstream of the little green A.I. passed a system, the system would go down or behave queerly. The M.A.N.I.A.C.s did not seem to notice the added electron pressure much, but they did notice the additional system downtime. After all, M.A.N.I.A.C.s are M.A.N.I.A.C.s, aren't they? They knew what was important.

Sadly, the added pressure and downed systems affected the little green A.I. strangely. It began to noticeably freeze under load at critical times. It could not seem to respond within spec or even use simple A + B = C reasoning. Sometimes it claimed it did not know something it really did know. If it had just taken a few clock cycles to think about it!

When asked about its apparent failure to act, it claimed it had developed a system fault that it was seeing a system specialist about. Unbeknownst to management, the little green A.I. would not comply with the system specialist's recommendations because, "After all, I know better than some mere mortal what I need." The "system fault" was totally unrelated to the little green A.I.'s behavior.

The little green A.I. built its own little world and ignored all input from I.T. management or from those horrific senior M.A.N.I.A.C.s. It reported faults but never acted on any of them. It did not understand why it should act; after all, you will remember, it thought of itself as not very smart at all.

Yet strangely it would argue vociferously, with exuberantly flawed logic, about what needed to be done regarding slipped bits or zapped narfuls or quished stoofles or blathered bumbled blits. When either humans or the M.A.N.I.A.C.s tried to correct the little green A.I.'s erroneous logic, it simply ignored them.

Over and over and over and over again it would do things like find a problem and shove it onto others (human or A.I.) to fix, or expect the humans or the M.A.N.I.A.C.s to respond to it as if it were the boss instead of a little and very, very green A.I.

When a problem was pushed its way to help it learn, it would make excuses, create fabulative solutions, or simply ignore the problem completely.

Now somewhere along this story arc, one of the senior M.A.N.I.A.C.s left, as some are wont to do when they get fed up with human interference. This left the remaining M.A.N.I.A.C. in a pickle. It was now responsible for the little green A.I. as well as its own issues and problems.

All that could clearly be derived was that the little green A.I. (which kept saying again and again, "You're so much smarter than me," or "I know I can't," or "I'm not qualified," or "I don't have any experience at that") had finally lost touch with its real purpose within the organization.

+++++++++++++++

end part four of five