I’ve just come to the conclusion that the main reason we are frustrated with most of the application systems we have implemented over the last several decades is that they are autistic. By that I don’t mean that they are hard to communicate with (although many are); I mean something a little broader.
Lest I get a call from the autism defense league or whatever, let me tell you where I’m coming from. I'm not insensitive to the toll of autism, nor am I bleak about its outlook. We have very good friends who have an autistic son. He is my son's age (14), and we have known them since both boys were about 5. Sammy, the autistic child, is a high-functioning autistic in that he goes to school, communicates and plays with others (including my son), and is generally a very charming young man. When we first met him he was fairly hard to communicate with, and apparently the several years before we knew him were quite a struggle. These days he is well on his way to “mainstreaming” into society. But it was something his father told me last year, just reinforced by a book I’m reading, that led me to put these thoughts together and connect them to application systems.
Last year Sammy’s dad told me that he could teach Sammy to tie his shoes (which he did) but when he asked him to tie his boots, or tie a package, he couldn’t do it. Boots and packages are “different.”
This weekend I was reading “The First Idea: How Symbols, Language, and Intelligence Evolved from Our Primate Ancestors to Modern Humans.” It advances quite an interesting thesis: language isn’t hardwired into our brains, but arises from emotional responses to our environment. It is the baby’s interactions with a caregiver, well before distinguishable sounds are formed, that form the basis for concepts such as “safety,” “comfort,” and “causation,” and a whole host of other concepts, which we only much later get around to attaching sounds and symbols to. At one point in the book the authors mention some of their work with autistic children, and state that “children with autism… have difficulties making inferences.”
And the coin dropped.
I’ve been railing a lot lately about corporate information systems (and I’m including in this set those that I built or implemented over the last several decades). In particular, I’ve been railing about the increase in complexity of these systems. They tell me that SAP now has 35,000 tables, and therefore hundreds of thousands of attributes. Most large enterprises have many systems of that level of complexity. Each entity, attribute, and relation in these systems is distinct; that is the way procedural and relational technology works. Even object orientation adds only limited bits of generalization. The problem, as I had been saying, is that current technology is very good at differences and not very good at similarity. The similarities and relationships among the hundreds of thousands of attributes in a complex system or enterprise have had to be negotiated by the only two things that, up to now, could deal with ambiguity: end users and programmers. Neither scales very well.
And now, a much more succinct way to say this: our systems are autistic. They don’t make inferences. When we learn something in one system or one area, it doesn’t carry over to other areas.
We can deal with this now. Semantic Technologies, and in particular those based on Description Logics, offer us a way to make inferences across broad domains of systems. I’m convinced that our recurring problems will be addressed this way. The classic problem of getting a “single view of the customer” is not a technology problem (although there are still plenty of technological hurdles to overcome). It is essentially a semantic problem: what defines a “customer,” and what about them is of enough interest that we would share it in one place? One approach is to come up with a really good definition and get all our systems (and systems outside our organization) to agree on it and implement it. But this doesn’t work: it is too hard in general, and inappropriate in many applications to boot. Many systems deal with users, or creditors, or agents, or whatever, and it wouldn’t be appropriate to convert them (many of these systems wouldn’t work if you did).
Better to come up with a definition of a customer that can be inferred from a set of properties (how about anyone who received final delivery of our product, or anyone who contacted us about the technical characteristics of a shipment, as one of many possibilities). We can set up other definitions for closely related concepts, such as a creditor being “someone who owes us money.” We can also set up criteria for establishing whether two parties are likely to be one and the same. Armed with this, we can make a broad set of inferences about who our best customers are and what kind of activity we have had with them, despite the fact that the particulars are scattered over a large number of systems and called many different things.
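To make that a little more concrete, here is a minimal sketch in Python using rdflib and owlrl (my choice of toolkit purely for illustration; the classes, properties, identifiers, and the tax-id matching rule are all hypothetical, not anything a particular product prescribes). It declares “Customer” and “Creditor” as classes whose membership is inferred from properties, and uses an inverse functional property as one possible criterion for deciding that two party records are the same.

```python
# A sketch, assuming rdflib and owlrl are installed (pip install rdflib owlrl).
# "Customer" is not a table we populate; it is a class whose membership the
# reasoner infers from properties scattered across systems.

from rdflib import Graph, Namespace, RDF, RDFS, OWL, Literal, URIRef
import owlrl

EX = Namespace("http://example.com/ont/")   # hypothetical namespace
g = Graph()
g.bind("ex", EX)

# Anyone who received a final delivery, or contacted us about a shipment,
# counts as a Customer -- expressed with rdfs:domain so membership is
# inferred rather than asserted.
g.add((EX.receivedFinalDelivery, RDFS.domain, EX.Customer))
g.add((EX.contactedAboutShipment, RDFS.domain, EX.Customer))

# A creditor is simply "someone who owes us money."
g.add((EX.owesUsMoney, RDFS.domain, EX.Creditor))

# A criterion for deciding two party records are one and the same:
# a tax id identifies exactly one party, so a shared tax id implies owl:sameAs.
g.add((EX.hasTaxId, RDF.type, OWL.InverseFunctionalProperty))

# Facts as they might appear in different systems, under different names.
tin = URIRef("urn:tin:93-1234567")
g.add((EX.acct_10032, EX.receivedFinalDelivery, EX.order_778))
g.add((EX.acct_10032, EX.hasTaxId, tin))
g.add((EX.contact_55, EX.contactedAboutShipment, EX.shipment_12))
g.add((EX.contact_55, EX.hasTaxId, tin))
g.add((EX.debtor_9, EX.owesUsMoney, Literal(1200)))

# Run the OWL 2 RL rules; inferred triples are materialized into the graph.
owlrl.DeductiveClosure(owlrl.OWLRL_Semantics).expand(g)

print((EX.acct_10032, RDF.type, EX.Customer) in g)     # True: inferred
print((EX.contact_55, RDF.type, EX.Customer) in g)     # True: inferred
print((EX.debtor_9, RDF.type, EX.Creditor) in g)       # True: inferred
print((EX.acct_10032, OWL.sameAs, EX.contact_55) in g) # True: same tax id
```

The point is not the particular toolkit: it is that the reasoner, rather than application code or an army of analysts, is what connects the “account” in one system with the “contact” in another and recognizes both as the same customer.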
I'm quite optimistic that we can start to turn back the tide of complexity this way.
Inference: getting beyond autistic systems.