In the computer science profession there’s a bit of a truism. For four years your college indoctrinates you never to copy anybody else’s code. Do your own work! Then you hit the real world and the first thing you do is… copy everyone else’s code.
Working code is the gold standard in the software industry. And while there are signatures of code that you can often use to detect the author, the reality is that there are only so many ways to translate the same basic algorithm into working code. The truism above is a bit overblown. But it also recognizes a simple reality in our profession: working code proliferates.
Quite a lot of code is written using example code, a segment of a co-worker’s code, an online tutorial, or some other similar snippet as a working model. When we want to do something in our software, quite often one of the first things we do is seek out some example of actual working code… and then use it as a base. Is that plagiarism? Your college would say yes, and an undergraduate could find himself failing a course (or worse) for such an offense. Out in the real world it’s basically expected. Headaches often ensue if the example code has bugs in it. But often it’s the fastest way to get to something that works.
Don’t get me wrong – out-and-out plagiarism of source code is a copyright violation. It’s straight-up illegal, and very few developers actually do that. What I’m describing is a substantially lesser “sin.”
But is it even a sin?
Consider two scenarios: in scenario (A) I use example code from a hypothetical source to get my own code working. In scenario (B) I fight through the (often poor) documentation to get to working code on my own. Given my coding idiosyncrasies, would anybody be able to tell the difference between code I wrote in scenario (A) and the code I wrote in scenario (B)? Sometimes yes. But far more often the answer is no.
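To make the point concrete, here’s a toy sketch. The function, its name, and both “authors” are hypothetical – this isn’t drawn from any real codebase – but it shows how little room a simple algorithm leaves for individual style:

```python
# Two "independently written" versions of the same routine: clamping a
# value to a range. Version (A) was written with example code as a base;
# version (B) was written from the documentation alone.

def clamp_a(value, low, high):
    """Version (A): adapted from a snippet found online."""
    if value < low:
        return low
    if value > high:
        return high
    return value

def clamp_b(value, low, high):
    """Version (B): written from scratch against the spec."""
    # A different author, a slightly different idiom, the same algorithm.
    return max(low, min(value, high))

# Both behave identically on every input. Given only the compiled
# behavior -- or even the source -- could you say which one "copied"?
print(clamp_a(42, 0, 10), clamp_b(42, 0, 10))
```

Version (B) happens to use `min`/`max` instead of explicit branches, but that’s a coin flip of personal habit, not a signature of independent work. Either author could plausibly have written either version.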
Frankly, most technical fields operate this way. Finding the right answer is what matters, and in many cases there is only one correct answer. Engineering often works this way. Ditto architecture and drafting. If getting the answer right is the primary concern you can basically expect this kind of pattern.
The simple reality is this: outside of a few specialized fields, nobody actually much cares about plagiarism. Authors, musicians, filmmakers, and photographers care, of course. In creative fields, plagiarism is a life and death matter. Journalists care. Academics care. Notice anything that links these fields together? In every case it’s important for the original author to receive credit.
The Melania Trump case is a boring one. Journalists, academics, and creative types care. Evidently they care a lot, if my social media feeds are any indication. Nobody else cares much at all – and frankly, they shouldn’t. Our politicians should be plagiarizing people. I don’t care one bit how original the ideas of any of my elected leaders are. I care that they have good ideas. I want them to plagiarize – for the same reason that I want software engineers to look at working code. I want ideas that work. Original ideas seldom do.
At this point somebody is going to bring up the obligatory example of Joe Biden. Biden, for those not yet up to speed, withdrew from the 1988 presidential campaign over plagiarism. Fair point. Let’s discuss it. I have three responses.
Does this hurt the Trump campaign? Definitely. But only among those who mostly weren’t voting for him anyway. Nobody else cares.
What I’d like to hear more of — and have failed to see so far in any of these essays — is a coherent theory of why we bother to teach any writers at all. It seems to me that we need to know that before we can decide whether Shakespeare is one of the writers we ought to teach, or whether we ought to give up on the project entirely and just let the students spend their time watching YouTube videos, or reading Shakespeare, as they please.
It’s a fair question. Sadly, it’s also a question with a rather obvious answer. Even more sadly, the answer is so obvious that previous generations internalized it too well. As a result, they did a very poor job of passing it along. As usual, our generation has to relearn it from scratch, unable to profit from the mistakes of the past.
The answer, in a nutshell, is this: as Nassim Taleb points out repeatedly in his various works, an older book that is still widely read is more likely to contain real truth than a widely read modern one. The classics are valuable not because they are old but because they have withstood the test of time.
Of course, the list of “classics” is not immutable. It changes over time. But the longer that a work has been on that list, the longer that it’s continued to be read, the more likely it is that the truths contained therein are universal rather than specific.
There’s a reason that some of our older classics never go stale. Shakespeare is a slog, no doubt about it. I’m a voracious reader with an oversized IQ and a master’s-degree-level education. To put it bluntly, I still have to work at it to read Shakespeare. I don’t fill my entertainment hours with nothing but Shakespeare because it’s too much work. But I do continue to read the Bard, because the Bard speaks to the truth of the human condition – not the truth of the early twenty-first-century American condition.

No, we won’t absorb every bit of this truth on every reading – and certainly not on a single reading in our teenage years. But that was never the point. The point is to ensure that future generations are at least exposed to it, so that when the need for that truth arises they know where to find it. That’s why our children should still be reading Shakespeare.