Most of us are familiar with Edmond Rostand’s Cyrano de Bergerac, a poetic play in which one man’s true identity is kept hidden behind another’s. I’m paraphrasing quite a bit, but the 19th-century play has been performed and adapted almost continuously since its debut, finding great success on the silver screen in Steve Martin’s classic modernization, Roxanne, and in the gender-flipped version, The Truth About Cats & Dogs, and on the small screen, showing up in venues as diverse as The Brady Bunch and Bob’s Burgers.

The story follows this general line (with apologies to folks who want a better synopsis – and to my high school French teachers): A person who can’t express themselves well is aided by one who can, all to woo another, and hilarity ensues when the person providing the wooing words to the wooer has to come to terms with their own feelings for the woo-ee.

I bring this up because recent advances in technology have me concerned that we’re coming into an age of cybernetic Cyranos, and it has implications we’re not going to fully understand until long after these new tools have been used for some time and become embedded in our culture and lives. The tools I speak of (as you may guess from my title) are the chatbots powered by generative “artificial intelligence” (AI) algorithms, such as ChatGPT and Bing.

My career has been in higher education, and over the years I have witnessed technological attempts to improve the response to plagiarism and other forms of academic misconduct. Back in the day, it was easier than it is today to, say, get a term paper from someone geographically distant and pass it off as one’s own work locally. Cheaters could cheat, and could do so more easily, because, let’s face it: nothing was connected. Eventually, though, the internet came along and ruined everything. For starters, it supersized the market for, shall we say, papers on demand. But, at the same time, we started to see more and more work being submitted digitally, which allowed software tools to make some of the verification process easier and more automagic.

Today, plagiarism software is a big business, with enterprise-level tools deployed as a normal part of course management software. It’s not just for academia – businesses deploy these tools, and individuals can use services like Grammarly to do a self-check before submitting their work or school projects. Even lawmakers are turning to these tools.

In less than half a year, though, the new generation of AI tools has been making headlines for the “human-like” responses created when given a prompt to, say, write a poem or do a report on a specific subject. These new chatbots have been trained on data harvested from the internet, using heuristic algorithms to catalog patterns of word use against which they then compare their own outputs, creating a feedback loop of generative critique that reshapes each response until it resembles human writing.

Now, I’m not a computer scientist, but that’s how I understand it. Point is, we now have machines that are capable of developing iterative understandings of patterns at blazing speed, and of parroting those patterns back just as quickly. The results, I will admit, are impressive.

Sometimes scary. Not always in a good way, either… 

A few months ago, the trustees of a public university were listening to a presentation from an administrator about the ongoing process of developing a new strategic plan for the campus. He began by saying that someone in the office thought it would be fun to ask a chatbot to write a strategic plan for the university. “Actually,” he said, “it turned out pretty good.”

There is a great deal of faith that such documents will be written and vetted by the humans hired for those jobs…and yet, across the field of journalism, we have, in fact, seen some human reporting jobs trickle away as these systems begin to be put to work.

Which brings me back to my career. I’m what is referred to as “a creative” – that is, I create content in audio and video media…and I work in higher education. For that first distinction, think about the implications if a machine could just do what I do… In fact, this reminds me of a joke told by Woody Allen in his days as a standup comedian: 

My father worked for the same firm for 12 years. They fired him and replaced him with a tiny gadget that does everything my father does, only much better. The depressing thing is my mother ran out and bought one.

For that matter, what about the folks who write strategic plans? I’ve heard from a pretty good source that a chatbot already does a good job with that task…

Joking aside, my side of these ivy-covered walls is the least of it: the teaching and learning mission of every college is being challenged by these technologies. Task forces and committees are springing up on campuses around the world to explore a very basic question: what in hell is being unleashed upon us? The implications for educators and for students are immediate and transformative. We are now in a world where a person can create a term paper by robotic proxy.

Fortunately, many of these term papers are going to have flaws. My own experience dabbling with one product, ChatGPT, has shown that the machine has an easy relationship with lying, often creating what we might call “alternative facts” and presenting them as if they belong in our universe of truth. You can even get them with citations…to non-publications written by non-people.

Like I said earlier: the results are impressive. Sometimes scary.

If I were to write an essay and submit it at my job or for a class with fabricated “facts” and imagined sources, I would be called a lying liar. The eggheads who built these systems prefer not to say that these bots are lying in their responses, and I get it – lying is a harsh, very negative term. One could try to fancy it up with a longer word that most folks probably don’t know means the same thing, like “prevarication,” but apparently even that’s too harsh. Because it’s true. Instead, they insist their systems are “hallucinating.”

Great…the bots are tripping and these works are just their fever dreams…

Now, even with the caveat that I know it is not right and proper to paint all students with the same big brush, it is not unthinkable that a student on a deadline could use a tool like this to create a quick draft of an essay from a plain-language prompt…and not go much beyond the draft, since, hey, it is pretty good…and it’s already 2 a.m. and the essay is due at 8. It’s going to take some time for the anti-plagiarism systems to develop the heuristics needed to distinguish algorithmic output from human expression.

Quite frankly, it’s got a good chunk of the ivory tower types with their undies bunched up to their necks.

Let us be fair here and remind ourselves that we can’t point only at students when we worry about potential technology abuse – it can happen on the other side of the equation, too. It is eerily possible to imagine this scenario: an instructor uses a chatbot to write a lecture. You can even imagine using other AI tools to create a deepfake video for their online class to view.

I say this with such certainty because a faculty member at the Wharton School of Business who teaches innovation and entrepreneurship did just that. Not to cheat the system, but to demonstrate the breadth of opportunities these generative AI tools allow.

Which brings me back around to Rostand’s creation, you know, the guy with the big nose and a great way with words, Cyrano de Bergerac. He was the unknown source for the voice of another. With the penetration of (anti)social media into almost every aspect of our daily lives, who is to say we’re dealing with the people we think we are?

Should this matter? Well, we are still struggling as a nation to cope with the impact of social media on a wide range of cultural and political issues, up to and including elections for the highest offices in the land. Who – or what – is generating the words (and images and sounds) that go viral, influencing others with propaganda? Who – or what – is generating the body of knowledge used to teach, and adding to it in academic response?

It is very likely that the proverbial horse is not only long out of the barn, well beyond any hope of closing the doors behind it, but has met with others and is actively breeding.

Pay attention to this story, as it’s only just beginning.