Trusting a black box

In Everything Bad Is Good for You, Steven Johnson wrote about how video games force us to figure out the rules of a built world. We are not just exploring a virtual space; we are building a mental map of cause and effect.

This mental map is why the memes are funny: video games prepared us for the moment we find something random in the real world that looks like a glitch.

Anything built without our controlling the rules works this way. Say my car estimates its remaining range. It says I have 11 miles before it runs out of gas, but the fans are on full, so I see the miles dropping faster than they should. I come to doubt I really have 11 miles, and the gas station where I can get $0.40 off is 7 miles away. I might make it there and I might not. So I put a gallon in the tank. The range doesn't budge.

Do I still have 11 miles? Surely I have more, but how do I know that I do? Can I trust it?

Opaque rules impair our sense of causation. The whole point of the tool is to let me predict when to take action. More gasoline SHOULD cause more actual range, which should cause the gauge to show more range. Filling the tank soon after did show the maximum range, as it should. But that earlier event eroded my trust, and now I worry about whether I can believe the gauge even when it shows plenty of gas.

P.S. The gas gauge did not move either.

Black Box Magic


With a black box system, a person working with it sees what goes in and what comes out, but the machine's decision-making process is obscured. Theories about its behavior are built on incomplete evidence. Gathering more data points across more situations that confirm the behavior is how I grow more comfortable that a theory is correct. Sometimes we lack the time, the conscientiousness, or even the access to verify the theory. This leads to magical thinking, like describing the software in human terms: insane, stupid, or out for revenge.

With a white box system, a person working with it can see the logic the machine uses to make decisions. Theories can be built on far more complete evidence, because you can investigate the code to see what it is intended to do. That evidence is far more direct than running more tests.
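The contrast can be sketched in a few lines of Python. Everything here is hypothetical: `estimate_range` and its numbers are a stand-in for the car's hidden logic, not anything real.

```python
import inspect

def estimate_range(gallons, fan_on):
    """Hypothetical range estimator -- a stand-in for the car's hidden logic."""
    miles_per_gallon = 30
    penalty = 0.8 if fan_on else 1.0  # running the fans cuts effective range
    return gallons * miles_per_gallon * penalty

# Black box: all we can do is probe inputs and compare outputs.
print(estimate_range(1, fan_on=False))  # 30.0
print(estimate_range(1, fan_on=True))   # 24.0 -- theory: the fan costs ~20%

# White box: read the logic directly instead of testing more.
print(inspect.getsource(estimate_range))
```

With only the two probes, the 20% penalty is a theory that more tests might still overturn; reading the source settles it in one step.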

Systems today are complex enough that they tend to have many parts interacting with each other. Some parts will be of each type.

Then there are Application Programming Interfaces (APIs), which expose vendor-supported methods for interacting with a black box by disclosing how those methods work.
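In code terms, an API draws that line explicitly. A minimal sketch, assuming a made-up vendor class (`FuelSystem` and its attributes are all hypothetical): the documented methods are the supported surface, while the underscore-prefixed internals stay the vendor's black box.

```python
class FuelSystem:
    """Hypothetical vendor library: internals are private, the API is documented."""

    def __init__(self):
        self._tank_gallons = 5.0        # hidden state, not part of the API
        self._miles_per_gallon = 30.0   # hidden calibration, not part of the API

    # Vendor-supported methods: behavior is documented without revealing internals.
    def add_fuel(self, gallons):
        """Add fuel to the tank."""
        self._tank_gallons += gallons

    def estimated_range(self):
        """Return the estimated miles remaining."""
        return self._tank_gallons * self._miles_per_gallon

car = FuelSystem()
car.add_fuel(1.0)
print(car.estimated_range())  # 180.0
```

The client can rely on what `add_fuel` and `estimated_range` promise without ever seeing how the estimate is computed.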

Proprietary systems tend toward a black box model from the perspective of clients. This black box philosophy depends on experts, the company's employees, designing the system so it works well and resolving any issues with it, so there is no need for clients to know what it is doing. Where the idea breaks down is that the clients who run the systems need to understand how they work to solve problems themselves. Sure, the company helps. However, the client will want to develop enough expertise to manage minor and moderate issues on their own, involving the vendor as little as reasonably possible. Communities arise because peers have already solved the client's issues, while an answer from the vendor is either formulaic, an inaccurate company line, or otherwise suspect. Peers become the best way to get answers.

Open source systems tend toward a white box model from the perspective of clients. This white box philosophy depends on clients taking the initiative to figure out issues and the solutions that resolve them. Clients become the experts who shape the system so it works well. Where the idea breaks down is that some clients just want something that works and do not want to solve the problems themselves. Sure, the open source community helps. Companies have also arisen to play the role the vendor plays for proprietary systems, giving CIOs "someone to yell at about the product." Someone else is better to blame than myself.

Cases of both the black box and the white box will be present in either model. That is actually okay; anyone can manage both. Really, it comes down to personal preference.

I prefer open source, but only because I love researching how things work, engaging experts, and the hit of dopamine when I get close to solving an issue. My personality is geared toward it. My career is built around running web services in higher education, so running something is going to be my preference. (Bosses should take note: when I say not to run something, it means the thing is so bad I would rather risk becoming obsolete than run it.)

This post came out of a discussion about how to help our analysts better understand how to work with our systems. It is hard to figure out how to fix something when you cannot look at the problem, examine the data about the problem, or do anything to fix it. So one thought was to give our analysts more access to test systems so they can build up these problem-solving experiences.

Photo credit: black boxes ttv from Adam Graham at Flickr.