The Complexity Factor

In my previous post, I talked about the fact that no software package has ever been released “glitch free.”  The bigger the project, the more likely there are to be glitches.  One of the factors that was pointed out to me (and has been in the news) is the complexity of the system:  it has to coordinate information from numerous other systems, both sending data and receiving it.  In fact, most of the problems seem to be related to the “data hub,” the main interface handling all of this.  Its job is (approximately) to take data from the web site, query other systems to verify things, send it on to the various exchanges or state agencies to get an answer from them about pricing, and then return that answer to the customer.  Sounds simple?  It isn’t.  Leaving aside the reports of … ineptitude … on the part of the contractor responsible, it’s an epicenter for glitches to happen.
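To give a feel for why a hub like that is fragile, here’s a minimal sketch of the flow described above. All the names and checks are hypothetical stand-ins, not the actual system, but the shape is the point: every request has to pass through every verification before anyone can quote a price, so a failure anywhere stalls the whole thing.

```python
# A toy sketch of a hub's request flow (all names hypothetical).
# The hub verifies applicant data against several back-end systems,
# then forwards the request to an exchange for pricing.

def verify_identity(applicant):      # stand-in for one external check
    return applicant.get("ssn") is not None

def verify_income(applicant):        # stand-in for another agency's check
    return applicant.get("income", 0) >= 0

def route_application(applicant, exchanges):
    """Run every verification in turn, then ask the exchange for pricing."""
    for check in (verify_identity, verify_income):
        if not check(applicant):
            return {"status": "rejected", "failed": check.__name__}
    # Only after every upstream system answers can we ask for a price.
    exchange = exchanges[applicant["state"]]
    return {"status": "priced", "quote": exchange(applicant)}

quote = route_application(
    {"ssn": "000-00-0000", "income": 30000, "state": "NY"},
    {"NY": lambda a: 250.0},          # pretend exchange with a flat quote
)
print(quote)  # {'status': 'priced', 'quote': 250.0}
```

If any one of those checks times out or answers in a format the hub doesn’t expect, the customer sees a “glitch” even though the hub’s own code did nothing wrong.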

Almost two decades ago, I was a system administrator at a healthcare provider.  It was a fairly large one, consisting of three hospitals and numerous clinics scattered around the state.  We were also one of the first to try out a fairly new idea:  the electronic medical record.  There was all sorts of promise in that, particularly given the nature of our institution.   The idea was wonderful – in fact, you still hear a lot about it.  Back then, it was just beginning, but still, the promise was there.  We could create one record, accessible at any of our clinics or hospitals, making up-to-date information available everywhere and saving a bundle on sending paper records around.

That was the promise, but the reality?  Something else entirely.  Leaving aside the fact that at the time “computer literacy” was not a common skill, particularly among doctors, the idea that we could link various clinical systems into a single medical record system ran into brick wall after brick wall.  Consider the “core systems” I was responsible for:  the dictation system, the transcription software, and the electronic signature package.  My systems had to accept data feeds from the scheduling system and the pathology, emergency, radiology, and cardiology departments, all of which had their own “best of breed” systems.  My systems fed data back to them, as well as into the electronic medical record.  Sounds simple?  It wasn’t.  I learned to loathe the phrase “Yes, we do HL7 messaging!”  Why?  HL7 is a standard messaging protocol, but it sometimes wasn’t “native” to the particular package, or the “dialect” (the version) it “spoke” wasn’t the same one we did.  Any combination of those could (and did) trip you up and lead to errors.  Which was why we had regular meetings of all the system administrators, to iron out those issues and develop testing plans.
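For the curious, the “dialect” problem is easy to see in the messages themselves. HL7 v2 messages are pipe-delimited text, and the version the sender speaks is carried in field MSH-12 of the message header. This is a toy illustration, not a real HL7 library, and the sample message is made up, but checking (or failing to check) that one field is exactly the kind of thing that tripped us up:

```python
# A toy illustration of the HL7 "dialect" problem (not a real HL7 library).
# HL7 v2 messages are pipe-delimited; the header (MSH segment) carries
# the version the sender speaks in field MSH-12.

SUPPORTED_VERSIONS = {"2.2", "2.3"}   # the versions our interface "spoke"

def check_hl7_version(message: str) -> str:
    fields = message.split("|")
    if fields[0] != "MSH":
        raise ValueError("message does not start with an MSH segment")
    version = fields[11]              # MSH-12: Version ID
    if version not in SUPPORTED_VERSIONS:
        raise ValueError(f"unsupported HL7 version {version!r}")
    return version

# A made-up dictation-to-EMR result message, version 2.3:
msg = "MSH|^~\\&|DICT|HOSP|EMR|HOSP|20251101||ORU^R01|MSG001|P|2.3"
print(check_hl7_version(msg))  # 2.3
```

A vendor whose package emitted, say, 2.5 could truthfully claim “Yes, we do HL7 messaging!” and still break our interfaces.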

That’s why the “big savings” and “great idea” turned into a huge morass of delays and cost overruns.  But here’s the other major headache we all had.  It wasn’t enough to iron out the translations and get the systems to talk to each other.  The systems all had to be working at the same time.  If the pathology system was off-line for some reason, I would have a massive backlog of files waiting to be sent, which could cause problems in other systems, because the pathology files could be “holding up the line” for the files meant for them.  The information needed to complete new files wasn’t there, so even more delays were created.  An interface glitch in the medical record package could (and did) mean a huge backlog building up.  A freeze in the signature system could mean a big backlog, and then stress on the network as the backlog was cleared.  That was just on my systems; the other system administrators had their own sets of headaches along those lines.
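The “holding up the line” effect has a name in networking circles: head-of-line blocking. A short sketch, with made-up destinations and filenames, shows how one off-line system strands files behind it in a shared outbound queue even when their destinations are up:

```python
# A sketch of "holding up the line" (head-of-line blocking): with a
# single shared outbound queue, a file for an off-line system blocks
# every file queued behind it, even ones bound for systems that are up.
from collections import deque

def drain(queue, online):
    """Send files in order; stop at the first file whose destination is down."""
    sent = []
    while queue and queue[0][0] in online:
        sent.append(queue.popleft())
    return sent

# (destination, file) pairs in arrival order -- all names made up:
queue = deque([("radiology", "f1"), ("pathology", "f2"), ("emr", "f3")])
sent = drain(queue, online={"radiology", "emr"})   # pathology is down
print(sent)          # [('radiology', 'f1')] -- f3 is stuck behind f2
print(list(queue))   # [('pathology', 'f2'), ('emr', 'f3')]
```

The EMR file `f3` goes nowhere until pathology comes back, even though the EMR itself was up the whole time. That’s the backlog, in miniature.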

I mention this because this is what is happening in the data hub, on a much larger scale.  There are multiple systems which have to send information back and forth to each other through the hub.  They’re all running different software and hardware, and they all have to be working at the same time, which wasn’t necessary before.  So there are two potential “points of failure” right there.  It’s complex, and it requires a lot of coordination.  Apparently, that coordination wasn’t there, and there were also some problems with the contractor’s coding of it.  Although there’s a figure of “500 million lines of code” floating around, most computer experts scoff at that.  But it’s still a complex system, and anyone who expected it to work “perfectly” out of the box was just fooling themselves.  No, “adding another server,” as Chuck Todd suggested, wouldn’t have fixed it.
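There’s simple arithmetic behind why “all working at the same time” is such a big deal: if the hub needs N independent systems to all be up at once, their availabilities multiply. The uptime figures below are invented purely for illustration, but the math is the point:

```python
# Why requiring many systems to be up simultaneously hurts: the
# combined availability is the product of the individual availabilities
# (assuming independent failures). Uptime numbers here are made up.

def combined_availability(uptimes):
    result = 1.0
    for u in uptimes:
        result *= u
    return result

# Five back-end systems, each up 99% of the time:
print(round(combined_availability([0.99] * 5), 3))  # 0.951
```

Five systems that each look quite reliable on their own leave the hub fully functional only about 95% of the time, and real hubs talk to far more than five.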

The good news, such as it is, is that the “problem child” has been identified, and people are now working on it.  It may take a while, it may even mean a “rewrite from scratch” for parts, but it’s being worked on.  It’ll get better.  The problem, as one commenter over at Little Green Footballs pointed out?  When it’s fixed, nobody will notice it for a while, because it “just works.”



Filed under Politics, Technology

6 responses to “The Complexity Factor”

  1. I’ve always thought the problem with software was that you couldn’t SEE it, and the incredible complexity of it all. It makes it so hard for people to appreciate the scope of what they tried to do. Technology is hard. I’ve always told people “Don’t be upset when it breaks…be amazed that it works at all.”

    (And thanks for the postback…)

    • You’re welcome. 😀

      In my programming days, many years ago, the first assignment I had was to develop a usable data entry interface for a COBOL data analysis program. Which previously had been done with a card reader. The analysis program turned out to be 15K+ lines of “spaghetti code.” After a few weeks of trying to figure out just what could be done, I punted and rewrote the program from scratch using another language. It was smaller, faster, and usable. Except that my then boss kept wanting “little changes.” Which were “little” on paper, but in software terms meant “tearing apart the program and putting it back together again.” Two years later, I had to rewrite from scratch again to turn the massive kludge that had developed into a cleaner and easier-to-update program. So I’m always surprised when something “more or less works” when it comes to software. 😆

  2. Thanks, Norbrook. I am enjoying the tech discussions immensely; it is not often one can talk about “lines of code” online. 🙂 Chuck Todd’s “just add a server” comment will be put in the Tech Clueless Hall of Fame right up there with Ted Stevens’ “the Internet is a series of tubes” comment.

    It was particularly amusing to read the stories today that are telling us that the president was not in the weeds on the tech roll-out. Sigh. “Benghaziiii! IRSSSSSS!! Website Coooode!!!”

    • He was probably thinking (I know 🙄 ) of mirroring, which a lot of major providers use to handle peak load – that is, there are a number of companies that provide extra bandwidth/web services when your main site is getting too much traffic.

  3. Snoring Dog Studio

    Thank you, Norbrook. The average person, and the average fool in Congress, doesn’t know how complex these things are. Most people don’t make stupid proclamations about it, though – just those who have hated our President and Obamacare the entire time. “Add a server”! Hilarious!

  4. Dancer

    Ditto to all previous well-stated comments. Getting tired of constantly having steam come out of my head over the ridiculous oversimplification that drives all commentary from our clueless congress people AND our lazy, sucky media! The Repug goal of dumbing down all citizens to a place where they can just feed them crap is working all too well. Thank you for your attempts to educate…we pass them on. Sorry for the harsh language…it’s WORSE in my head…