Behavioral economics fascinates me. Humans have an amazing ability to miscalculate risk while remaining extremely confident they have assessed it accurately. We rely on rules of thumb that work well in certain situations but do not transfer to others, yet most people apply them anyway.
Part of the problem with gauging risk, I think, comes from a lack of consequences in low-risk situations. Switching from writing a script to answering an email and back while sitting at my desk carries extremely low physical risk. Switching back and forth between driving and answering a text message can seem like no big deal when even a 23x increase in accident likelihood still leaves the odds at one in thousands. The absence of an accident or close call is then taken as evidence of the ability to text and drive without a problem. (After all, how risky can operating a car of several thousand pounds be?)
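The arithmetic behind that false comfort can be sketched quickly. This is a back-of-the-envelope illustration: the per-trip base rate below is a made-up assumption for demonstration; only the 23x multiplier comes from the discussion above.

```python
# Hypothetical illustration of why texting-and-driving risk feels invisible.
# base_rate is an ASSUMED per-trip crash probability, not a real statistic.
base_rate = 1 / 100_000      # assumed chance of a crash on any single trip
texting_multiplier = 23      # relative risk increase cited above

texting_rate = base_rate * texting_multiplier
print(texting_rate)          # 0.00023, i.e. roughly 1 in 4,300 trips
```

Even multiplied by 23, the per-trip odds stay so small that a driver can text for years without the feedback of a crash, which is exactly why the habit feels safe.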
Following the causal chain of events presents us with problems. We sometimes pick the wrong cause, and once we have, we are more likely to pick that same wrong cause over and over. Logic and science are tools invented to combat these problems. Testing an idea with large samples eliminates random variation as a confound. Replication by others, using the same or slightly different experimental designs, clarifies the scope in which the effect actually holds.
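The large-sample point can be made concrete with a small simulation. This is a minimal sketch under made-up numbers: a fixed "true effect" measured with noise, estimated from small and large samples. The function name and parameters here are invented for illustration.

```python
import random
import statistics

random.seed(42)

def estimate_effect(n, true_effect=0.5, noise=2.0):
    """Average n noisy measurements of an assumed true underlying effect."""
    return sum(true_effect + random.gauss(0, noise) for _ in range(n)) / n

# Repeat each "experiment" many times to see how much estimates vary.
small_samples = [estimate_effect(10) for _ in range(500)]
large_samples = [estimate_effect(1000) for _ in range(500)]

# The spread of estimates shrinks roughly as 1/sqrt(n): large-sample
# results cluster tightly around the true effect, small ones scatter.
print(statistics.stdev(small_samples), statistics.stdev(large_samples))
```

A single small study can land far from the truth by chance alone; the larger the sample, the less room random variation has to masquerade as a cause.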
“Garbage in; garbage out” can also trip us up. Because of the illusions I discussed earlier, we poorly assess the reliability of our inputs, so calculations based on garbage were never going to be good anyway.
Strangely enough, slowing the process down and thinking about it from many different angles can even exacerbate the problem, as we get mired in so much data or so many processes that we cannot make a decision at all.
Technology helps us do the same calculations, just faster. Some of it helps us validate the outputs. I look forward to technologies that help us identify the correct inputs. My big beef with predictive analytics is that I doubt the correct inputs are being identified, so the outputs may contain lots of garbage.