In my previous article for the eDJ Group I left you, my intrepid readers, with a challenge:
. . . please do me a favor: look around at the technology you use in your work and in your everyday life (computers and toasters alike) and ask yourself, “Could I explain to someone else exactly how this works?”
Had time to think about it?
OK, now see if you can explain how the devices you use in your everyday life work: your computer . . . your car . . . your toaster. Seriously, can you explain how your toaster works? Can you do it without resorting to woolly mammoths? I know I can’t.
My point? Our modern lives are filled with black boxes, things that we understand in terms of the inputs they require (click the mouse, turn the wheel, insert a slice of bread) and the outputs we receive (your computer beeps, your car turns, you get toast!). Yet between the input and the output there are a whole bunch of things happening that we can’t see, can’t explain, and – most importantly – don’t actually need to explain to accomplish our desired task. As long as the inputs are understandable and the outputs are what we expect, what lies in between can be completely opaque. I don’t need to know how my toaster works, as long as I get my toast.
So why is the fact that machine learning (a/k/a “predictive coding”) is a black box such a problem?
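For readers who like something concrete, here is a minimal sketch of what the predictive coding black box looks like from the outside. It assumes the scikit-learn library and uses a made-up handful of documents and reviewer calls; the point is only the shape of the interface: labeled documents go in, relevance scores come out, and everything in between can stay closed.

```python
# A minimal sketch (assuming scikit-learn) of the "black box" pattern in
# predictive coding: documents and reviewer decisions go in, relevance
# predictions come out, and the internals stay opaque to the user.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training set: a few reviewed documents and their
# responsive (1) / non-responsive (0) calls.
documents = [
    "quarterly earnings report attached",
    "lunch plans for friday",
    "contract amendment for the acme deal",
    "fantasy football league standings",
]
labels = [1, 0, 1, 0]

# Input: reviewed documents and their labels.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(documents, labels)

# Output: a relevance score for an unreviewed document. Everything between
# fit() and predict_proba() is the part most users never need to open.
new_doc = ["draft amendment to the acme contract"]
print(model.predict_proba(new_doc)[0][1])  # probability of responsiveness
```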