The end (of programming) is near!! Or is it?
We really *don’t want* to program computers using meetings and decks of slides.
I've identified a new branch of code-denial.
When I published “Why we believe everybody must learn programming”, the article drew some very interesting reactions against teaching code to kids and adults.
We all know the argument that goes: “Kids used to grow up just fine without knowing programming, thank you. Drawing and running is great”. It generally comes from people who are fed up with the techno-utopian blind faith in technology (most probably rightfully so) and who mean well for the kids. Since our common goal is for kids to grow up empowered and happy, once that common ground is established, we can actually have meaningful conversations about creativity, activities and code with this group of code-doubters.
(Rarely, there’s also purely reactionary opposition: “The good old education was, well, good. Don’t spoil it with those machines.” There’s not much to say here, apart from noting that the Good Old Education never was that good for the majority of kids.)
But there is another, more intriguing argument, coming from tech believers. It’s puzzling: they believe computer scientists are great and computers are mighty, and for that very reason they posit that kids don’t need to learn computer programming at all. Huh? How… what?
So, their rationale is well summarized in these tweets:
We believe every computer must learn the mother tongue of its users. Now, programs and computers are illiterate, not us!
Let computers spend years learning to write and talk! That’s what artificial intelligence research is trying to do.
That’s not the first time since we launched Coding Goûter that I’ve stumbled upon this paradoxical idea. It is summarized like this: “Artificial Intelligence will get so good that very soon we won’t need to program anything. Computers will program themselves! And we won’t need to learn those strange computer languages.”
Behind this view lie two common misconceptions about computer languages: first, that programming languages are for computers; and second, that structuring ideas to make them real is something new that came with computing.
Let’s see how we can answer these two misconceptions:
1. Programming languages are for humans
It’s a misconception that programming languages are the “native languages of computers”. Programming languages are very much human creations.
Programming languages are as human as the mathematical conventions for writing formulas, or the conventions of geometry. They are probably more human than some awful, incomprehensible, manually designed visualizations.
So why do programmers write programs using programming languages, really?
A big part of the answer is: for humans to read them. Sometimes this human is the programmer herself, some weeks or months later. Sometimes it’s someone pair-programming with the programmer.
Programming languages are conventional ways of expressing thoughts, intent and expected results. They are used to articulate human intentions in a structured way.
They exist in many shapes and forms, to fit many problems. And new programming languages are created every year.
And this leads us to the fact that humans have created many structured conventions, long before the invention of computers.
2. Structured communication predates computers
Other humans understand you perfectly well, and yet you still need to learn to write *many* varied forms of conventionally structured documents: invoices, specifications, scientific papers, answers to calls for projects, etc.
Architects have plans and measurements, accountants have had book-keeping formats for centuries, manufacturers have bills of materials.
As humans, we create all kinds of complex, structured documents for other humans when simple descriptive text doesn’t seem enough:
Notice how even when someone creates a seemingly free-form diagram, they use protocols and conventions. Here, arrows mean that things are connected by a causal relationship. We understand that because arrows are a convention.
Of course this kind of inter-human communication can turn bad very quickly, especially when there is too much information to fit on one single slide:
In terms of complexity, these really are on the same level as simple programming. So we don’t even need to point to mathematical writing conventions to find examples of structured ideas laid out for others to understand and act on. We just have to look around at bad (and good?) PowerPoint decks.
Ultimately, programming is laying out your ideas using a convention between you and the other humans around you. It has meaning (if not purpose) even without a computer executing it.
That makes programming extremely important as a convention for expressing ideas. It’s generic and powerful. It’s a convention to express ideas that can then be run and work by themselves. And it’s a convention finding its way into all aspects of our lives.
Star Wars meets PowerPoint
It might very well be true that, in a distant future, an AI will be able, all by itself, to compose clear plans of action, to create texts, to structure ideas.
Let’s say an artificial intelligence understands us. Would it exempt us humans from clearly communicating our ideas in structured texts? Would it exempt us from designing and from solving problems? And more generally, would it exempt us from thinking?
If AI progress ultimately exempts us from clearly structuring our ideas (because it does the structuring for us), would it mean a text-less culture, as in the fictional Star Wars universe, where apparently most people are totally illiterate?
Then, it would probably mean having to talk and talk endlessly to the AIs so they can do what we want them to do…
… and that’s where the real nightmare behind the programming-via-AI dream starts.
In the future, you may very well have to schedule five meetings with your own AI computer assistant, spend hours creating a series of PowerPoint decks it needs to understand what you want, and have dozens of back-and-forth conversations to iron out the details, just to explain what it has to, well, program for you. And it will of course end in a communication mishap because, hey, you weren’t that clear on slide 67. (Says the AI. But of course you keep insisting it’s all its fault: classic AI-programmer deflection.)
Learning Ruby to directly express your ideas doesn’t look so bad now, does it?
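For instance, even a beginner’s few lines of Ruby lay out an idea in a form that both other humans and the computer can act on (the guest list and numbers here are just an invented illustration):

```ruby
# A small, invented party-planning idea, expressed directly as runnable Ruby.
guests = ["Ada", "Grace", "Linus"]
snacks_per_guest = 2

# The structure itself carries the intent: anyone reading this,
# human or machine, knows who to invite and how much to bring.
snacks_needed = guests.length * snacks_per_guest
puts "Invite #{guests.join(', ')}."
puts "Bring #{snacks_needed} snacks."
```

No meetings, no slide decks: the convention does the communicating.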