2017-02-23

An AI will eat my job?

One more brief interlude before I complete my series on Regionalized Proportionality as it might apply provincially to British Columbia.

This paper, as mentioned in a current Slashdot post and savaged in various ways in the comments, would seem to suggest that an AI will eat my job, as a dessert following the assembly-line jobs of bygone days (whose former holders are inclined, wrongly, to blame globalization) and the main course under current consideration, the transport jobs that Google and others are working at destroying even as we speak. Of course, the news reports aggregated with the original paper's link at Slashdot, MSPowerUser and New Scientist, while interest- and attention-grabbing, aren't really known for sober consideration.

Of course there's the autonomic adrenaline rush when one sees anything that might threaten one's livelihood, but some of the trains of thought in Slashdot's comments pointed in one comforting direction straightaway. I didn't stick around long enough to see what other comfort might emerge from the cacophony, but a few other things quickly came to mind:
  1. As several on Slashdot pointed out, now you'll just have to find people who can write specifications that the AI can understand and turn into programs. Hmm... That sounds an awful lot like programming, or rather meta-programming, I suppose, just not the kind of metaprogramming that Abrahams and Gurtovoy had in mind. The paper itself gives examples of what this meta-code might look like. There's a domain-specific language (DSL) there that reminds me, in some ways, of any highly abstracted programming language.
  2. How about user interfaces? My first thought after I turned away from the cacophony was that the space between a computer and its human users is going to be harder to specify well, and therefore not going to go away very soon, either.
  3. And then there are computer-to-machine interfaces. You're going to have to get so specific about the specifications at that end that it's not clear to me this, at least, is ever going to go away, no matter how good the AI becomes.
The examples of what "DeepCoder", the program developed for the paper, can do may be deep for a computer to have performed, but they amount to a very primitive moving around of some simple data. I expect the DSL would have to get prohibitively expressive to be particularly useful to the larger problem at hand, so until you develop the AI that can generate a true and clear expression of what it takes to get the problem solved, I see a lot of work that will still require a human for a long time. (I hope the reader can see a DeepJoker pattern in "until you develop...") Time will tell where this will all lead, but in its current state it rather reminds me of the gap between the hundreds of pages it takes Principia Mathematica to prove that one plus one equals two and the relevance that proof has for the elementary pupil cutting their baby teeth on the first pages of arithmetic. In one sense, it was important work for somebody to have done once, and we are grateful for the work Russell and Whitehead put in to achieve it. But in the sense most commonly employed by the vast majority of humanity, it has no relevance whatsoever.
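To make "moving around of some simple data" concrete, here is a rough sketch in ordinary Python, not the paper's actual DSL syntax, of the flavour of program under discussion: a short pipeline of list primitives composed to solve a toy task. The primitive names and the task are my own illustrative choices, not taken from the paper.

    # A rough paraphrase (not the paper's exact DSL) of the kind of program
    # DeepCoder composes: a short pipeline of list primitives.
    # Illustrative task: sum of the three largest even values in a list.

    def dsl_filter(pred, xs):
        # FILTER: keep only the elements satisfying a predicate
        return [x for x in xs if pred(x)]

    def dsl_sort_desc(xs):
        # SORT: descending order
        return sorted(xs, reverse=True)

    def dsl_take(n, xs):
        # TAKE: the first n elements
        return xs[:n]

    def dsl_sum(xs):
        # SUM: reduce the list to a single integer
        return sum(xs)

    def synthesized_program(xs):
        # The whole "program" is just a composition of the primitives above.
        evens = dsl_filter(lambda x: x % 2 == 0, xs)
        return dsl_sum(dsl_take(3, dsl_sort_desc(evens)))

    print(synthesized_program([5, 2, 8, 1, 6, 9, 4]))  # prints 18

Short as it is, that composition is the entire output: the surrounding machinery of a real program is nowhere in sight.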

But there's a cavalier attitude in the tone of the coverage of this paper (which is itself actually quite interesting), implying that software developers are simple coders who take specifications and crank out code without further thought. If that were always the case, I wouldn't still be keen to continue the craft. The kind of problem being solved by DeepCoder in the paper forms only the smallest piece of the kinds of things I've done in my career. At the very least, there's the problem of making the inputs from one run of an algorithm endure until the next (persistence/serialization), and then retrieving them for that next run (deserialization, data validation), sometimes heavily constrained by choices already made by surrounding technology (adaptation). And when people (as mediated through UI devices or signals from other programs) start to affect what the program should do, that introduces another set of problems (interaction, more data validation). Then, when you're solving a problem for which there aren't a lot of code snippets out there already, you'll still need to write some of it yourself, like the first code that calculates colours under PDF's Transparency model, to mention one instance -- and chances are, the first code won't be good enough for all cases (non-trivial algorithms). I could go on, but the themes repeat (as one already has) even as new ones appear. There's a lot of work that still needs to be done before DeepCoder writes any simple app that a human might find useful.
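As a minimal sketch of that point, assuming a toy program with a hypothetical file name of my own invention, here is roughly what surrounds even a trivial computation: the one line that resembles DeepCoder's output sits inside serialization, deserialization, data validation and a scrap of user interaction, none of which the paper's DSL addresses.

    import json
    from pathlib import Path

    # Hypothetical persistence location; purely illustrative.
    STATE_FILE = Path("inputs.json")

    def load_previous_inputs():
        # Deserialization plus data validation: the file may be missing,
        # corrupt, or hold the wrong shape of data.
        if not STATE_FILE.exists():
            return []
        try:
            data = json.loads(STATE_FILE.read_text())
        except json.JSONDecodeError:
            return []
        if not isinstance(data, list):
            return []
        return [x for x in data if isinstance(x, int)]

    def save_inputs(xs):
        # Serialization: make this run's inputs endure for the next run.
        STATE_FILE.write_text(json.dumps(xs))

    def core_algorithm(xs):
        # The only line resembling what DeepCoder produces.
        return sum(sorted(xs, reverse=True)[:3])

    def main():
        xs = load_previous_inputs()
        # Interaction: accept new values from a user, validating as we go.
        raw = input("Integers separated by spaces (blank to reuse old data): ")
        for token in raw.split():
            try:
                xs.append(int(token))
            except ValueError:
                print("Ignoring non-integer input:", token)
        save_inputs(xs)
        print("Sum of three largest:", core_algorithm(xs))

    if __name__ == "__main__":
        main()

Even in this toy, the scaffolding swamps the core computation, and every piece of it involved a choice somebody had to make.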

I don't think I'll starve any time soon, whatever my adrenaline thinks.
