Rants, Raves, and Rhetoric v4

AI black box magic

I was reading an email about C3 AI being developed to locate extremists. This nugget right at the start put me off.

Siebel: This is the first time we’ve done things in computer science that weren’t a mathematical certainty, other than random number generators. Everything else we’ve done in computer science, up until this day, has been deterministic. Every time you run it, it’s going to happen that you get the same answer. Now, with generative AI, you’d never quite know what the answer is going to be. And that creates some very interesting issues.

From "Washington wanted C3 AI to find 'extremists'," Tom Krazit interviewing Tom Siebel

The IT world isn’t deterministic. We have Black Box Magic: all too often we control what goes in and can see what comes out, but we aren’t privy to the inner workings (the code) of the thing deciding what comes out. As a solutions architect, I am privy to lots of things, but much of what I work on is delivered as compiled code. Much of my day is spent figuring out why the output wasn’t what was expected. Much of my clients’ day is spent growing more superstitious about which shamanic rituals and magical thinking will produce the correct output. Only when those fail do they reach out, which is how things end up in front of me.
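To make the contrast concrete, here is a toy Python sketch (my own illustration, not anyone's real model; names like `generative_toy` are made up): a plain function gives the same answer every run, while anything that samples from a probability distribution only repeats itself if you pin the random seed.

```python
import random

def deterministic(x):
    # Classic software: same input, same output, every single run.
    return x * 2

def generative_toy(prompt, temperature=1.0, seed=None):
    # Toy stand-in for a generative model: it picks an answer by
    # sampling from a weighted distribution instead of computing
    # one fixed result for the prompt.
    rng = random.Random(seed)
    vocab = ["yes", "no", "maybe", "it depends"]
    weights = [4, 3, 2, 1]
    # Higher temperature flattens the distribution, so less likely
    # answers show up more often.
    adjusted = [w ** (1.0 / temperature) for w in weights]
    return rng.choices(vocab, weights=adjusted, k=1)[0]

print(deterministic(21), deterministic(21))                # always 42 42
print(generative_toy("is it safe?"), generative_toy("is it safe?"))  # may differ run to run
print(generative_toy("is it safe?", seed=7),
      generative_toy("is it safe?", seed=7))               # pinned seed, repeatable
```

That last line is the catch: real generative systems don't hand you the seed, the weights, or the code, so you get the non-determinism without the knob that would make it repeatable.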

an ancient and weathered magical black box, generated by Copilot

I wonder if deterministic thinking affects AI adoption. If you have a strong need to control exactly how something behaves, then AI may not be for you, because no one has that control. The AI companies struggle to put guardrails on it because even they don’t have control. Highly regulated industries are required to have strong controls, so they might shy away from AI just to avoid huge fines. See banking, where humans giving BS answers is bad; the same BS answers from AI aren’t suddenly okay.
