Those Schools Banning Access To Generative AI ChatGPT Are Not Going To Move The Needle And Are Missing The Boat, Says AI Ethics And AI Law

Attempts to ban generative AI such as ChatGPT are not all they are cracked up to be.


To ban, or not to ban, that is the question.

I would guess that if Shakespeare were around nowadays, he might have said something like that about the recent efforts to ban the use of a type of AI known as Generative AI, which is especially exemplified and popularized due to an AI app called ChatGPT.

Here’s the deal.

Some high-profile entities have been attempting to ban the use of ChatGPT.

For example, the New York City (NYC) Department of Education recently announced that they were proceeding to block access to ChatGPT on its various networks and connected devices. The reported rationale for the ban consisted of indications that this AI app and the overall use of generative AI seemingly portend negative consequences for student learning. Students that opt to use ChatGPT are said to be undercutting the development of their crucial critical-thinking skills and undermining the growth of their problem-solving abilities.

On top of those rather stoutly worrisome qualms, there is the undisputed fact that such AI can produce inaccurate outputs that contain errors and other factual maladies. That’s bad. The dangerous icing on the cake is the possibility that the outputs could be used in an unsafe manner by students who unknowingly rely upon said falsehoods. No such documented harms have yet surfaced that I’ve seen, so we’ll need to just take at face value that this could potentially happen (I have discussed the range of possibilities in my postings; for example, some have posited that generative AI essays could tell someone to take medicines that they should not be taking or provide mental health advice that ought to be proffered by human mental health professionals, etc.).

In today’s column, I’ll examine the nature of the recently decreed bans and identify whether they make sense or not. There are a lot of questions to consider. Do such bans do any good? Are these bans enforceable? If more such bans arise, will we be aiding humankind or will we inadvertently shoot ourselves in the foot?

As you can likely guess, none of this is quite as cut and dried as it might seem on the surface.

Into all of this comes a slew of AI Ethics and AI Law considerations. Please be aware that there are ongoing efforts to imbue Ethical AI principles into the development and fielding of AI apps. A growing contingent of concerned AI ethicists is trying to ensure that efforts to devise and adopt AI take into account a view of doing AI For Good and averting AI For Bad. Likewise, there are proposed new AI laws that are being bandied around as potential solutions to keep AI endeavors from going amok on human rights and the like. For my ongoing and extensive coverage of AI Ethics and AI Law, see the link here and the link here, just to name a few.

The notion of banning some types of AI is not a new conception.

In one of my columns, I closely analyzed the proposed bans associated with the use of AI for autonomous weapons systems, see the link here. Various countries are doing weapons development that encompasses onboard AI. It is the proverbial fire-and-forget kind of weaponry. All you do is unleash the weapon and the AI takes over from that point forward. Hopefully, the AI guides the armament to the appropriate destination and detonates or delivers it suitably. There is often very little human-in-the-loop overriding since the process might happen faster than humans could react anyway, or the chance of an enemy hacking the system and preventing the weapon from doing its business is reduced or curtailed by preventing anything other than the AI from driving the ordnance.

Clearly, this is an instance where AI entails life-or-death consequences. You might convincingly argue that we should be mulling over the effects of such dire AI. It is outwardly wise to turn over every stone before we let AI get cast into concrete for autonomous weaponry. Lots of lives are at stake.

Does that same foreboding and solemnity apply to the use of generative-based AI including ChatGPT?

You would be somewhat hard-pressed to say that this type of AI is in the same league as the other type of AI that guides deadly missiles and other munitions. That being said, even if life or death is not on the line, this doesn’t mean that we cannot give due diligence to what adverse impacts generative-based AI can bring to the fore. The stakes might not be the same, nonetheless having genuine concerns about generative-based AI does have merits.

I tend to stratify various AI-related bans into the following spectrum:

  • Absolute Ban
  • Partial Ban
  • Weak Ban
  • No Ban

There is also the other side of the coin, namely seeking to enable or support AI, such as represented in this spectrum:

  • Acknowledgment
  • Mild Acceptance
  • Full Acceptance
  • Mandatory Requirement

The various ranges of bans, along with the spans of acceptance, will be handy to consider as we take a look at the recent efforts to forbid the use of ChatGPT.

First, let’s make sure we are all on the same page about what Generative AI consists of and also what ChatGPT is all about. Once we cover that foundational facet, we can perform a cogent assessment of whether bans on ChatGPT are going to be fruitful.

A Quick Primer About Generative AI And ChatGPT

ChatGPT is a general-purpose, interactive, conversationally oriented AI system, essentially a seemingly innocuous general chatbot. Nonetheless, it is actively and avidly being used by people in ways that are catching many entirely off-guard, as I’ll elaborate shortly. This AI app leverages a technique and technology in the AI realm that is often referred to as Generative AI. The AI generates outputs such as text, which is what ChatGPT does. Other generative-based AI apps produce images such as pictures or artwork, while others generate audio files or videos.

I’ll focus on the text-based generative AI apps in this discussion since that’s what ChatGPT does.

Generative AI apps are exceedingly easy to use.

All you need to do is enter a prompt and the AI app will generate for you an essay that attempts to respond to your prompt. The composed text will seem as though the essay was written by the human hand and mind. If you were to enter a prompt that said “Tell me about Abraham Lincoln” the generative AI will provide you with an essay about Lincoln. This is commonly classified as generative AI that performs text-to-text or some prefer to call it text-to-essay output. As mentioned, there are other modes of generative AI, such as text-to-art and text-to-video.

Your first thought might be that this generative capability does not seem like such a big deal in terms of producing essays. You can easily do an online search of the Internet and readily find tons and tons of essays about President Lincoln. The kicker in the case of generative AI is that the generated essay is relatively unique and provides an original composition rather than a copycat. If you were to try and find the AI-produced essay online someplace, you would be unlikely to discover it.

Generative AI is pre-trained and makes use of a complex mathematical and computational formulation that has been set up by examining patterns in written words and stories across the web. As a result of examining many millions of written passages, the AI can spew out new essays and stories that are a mishmash of what was found. By adding in various probabilistic functionality, the resulting text is pretty much unique in comparison to what has been used in the training set.
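The probabilistic functionality just mentioned can be illustrated with a toy sketch. To be clear, this is a drastically simplified illustration with invented word probabilities, not how any actual generative AI app is built; real systems use learned weights over enormous vocabularies. The core idea survives the simplification: at each step the system assigns probabilities to candidate next words and samples one, which is why repeated runs produce differing, seemingly original text.

```python
import random

# Toy next-word model. The probabilities here are invented purely for
# illustration; a real generative AI derives them from patterns found in
# millions of training passages.
next_word_probs = {
    "Lincoln": [("was", 0.5), ("led", 0.3), ("spoke", 0.2)],
    "was": [("president", 0.6), ("born", 0.4)],
    "led": [("the", 1.0)],
    "spoke": [("eloquently", 1.0)],
}

def generate(start, steps, rng):
    """Build a phrase by repeatedly sampling a next word from the
    probability distribution attached to the most recent word."""
    words = [start]
    for _ in range(steps):
        choices = next_word_probs.get(words[-1])
        if not choices:
            break  # no continuation known for this word
        tokens, weights = zip(*choices)
        words.append(rng.choices(tokens, weights=weights)[0])
    return " ".join(words)

print(generate("Lincoln", 2, random.Random()))
```

Because the continuation is sampled rather than looked up, two students issuing the identical prompt will generally receive differing outputs, which is precisely why a teacher cannot find the "source" essay online.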

That’s why there has been an uproar about students being able to cheat when writing essays outside of the classroom. A teacher cannot merely take the essay that deceitful students assert is their own writing and seek to find out whether it was copied from some other online source. Overall, there won’t be any definitive preexisting essay online that fits the AI-generated essay. All told, the teacher will have to begrudgingly accept that the student wrote the essay as an original piece of work.

There are additional concerns about generative AI.

One crucial downside is that the essays produced by a generative-based AI app can have various falsehoods embedded, including patently untrue facts, facts that are misleadingly portrayed, and apparent facts that are entirely fabricated. Those fabricated aspects are often referred to as a form of AI hallucinations, a catchphrase that I disfavor but lamentably seems to be gaining popular traction anyway (for my detailed explanation about why this is lousy and unsuitable terminology, see my coverage at the link here).

I’d like to clarify one important aspect before we get into the thick of things on this topic.

There have been some zany outsized claims on social media about Generative AI asserting that this latest version of AI is in fact sentient AI (nope, they are wrong!). Those in AI Ethics and AI Law are notably worried about this burgeoning trend of overblown claims. You might politely say that some people are overstating what today’s AI can actually do. They assume that AI has capabilities that we haven’t yet been able to achieve. That’s unfortunate. Worse still, they can allow themselves and others to get into dire situations because of an assumption that the AI will be sentient or human-like in being able to take action.

Do not anthropomorphize AI.

Doing so will get you caught in a sticky and dour reliance trap of expecting the AI to do things it is unable to perform. With that being said, the latest in generative AI is relatively impressive for what it can do. Be aware though that there are significant limitations that you ought to continually keep in mind when using any generative AI app.

If you are interested in the rapidly expanding commotion about ChatGPT and Generative AI all told, I’ve been doing a focused series in my column that you might find informative. Here’s a glance in case any of these topics catch your fancy:

  • 1) Predictions Of Generative AI Advances Coming. If you want to know what is likely to unfold about AI throughout 2023 and beyond, including upcoming advances in generative AI and ChatGPT, you’ll want to read my comprehensive list of 2023 predictions at the link here.
  • 2) Generative AI and Mental Health Advice. I opted to review how generative AI and ChatGPT are being used for mental health advice, a troublesome trend, per my focused analysis at the link here.
  • 3) Context And Generative AI Use. I also did a seasonally flavored tongue-in-cheek examination about a Santa-related context involving ChatGPT and generative AI at the link here.
  • 4) Scammers Using Generative AI. On an ominous note, some scammers have figured out how to use generative AI and ChatGPT to do wrongdoing, including generating scam emails and even producing programming code for malware, see my analysis at the link here.
  • 5) Rookie Mistakes Using Generative AI. Many people are both overshooting and surprisingly undershooting what generative AI and ChatGPT can do, so I looked especially at the undershooting that AI rookies tend to make, see the discussion at the link here.
  • 6) Coping With Generative AI Prompts And AI Hallucinations. I describe a leading-edge approach to using AI add-ons to deal with the various issues associated with trying to enter suitable prompts into generative AI, plus there are additional AI add-ons for detecting so-called AI hallucinated outputs and falsehoods, as covered at the link here.
  • 7) Debunking Bonehead Claims About Detecting Generative AI-Produced Essays. There is a misguided gold rush of AI apps that proclaim to be able to ascertain whether any given essay was human-produced versus AI-generated. Overall, this is misleading and in some cases, a boneheaded and untenable claim, see my coverage at the link here.
  • 8) Role-Playing Via Generative AI Might Portend Mental Health Drawbacks. Some are using generative AI such as ChatGPT to do role-playing, whereby the AI app responds to a human as though existing in a fantasy world or other made-up setting. This could have mental health repercussions, see the link here.
  • 9) Exposing The Range Of Outputted Errors and Falsehoods. Various collected lists are being put together to try and showcase the nature of ChatGPT-produced errors and falsehoods. Some believe this is essential, while others say that the exercise is futile, see my analysis at the link here.

You might find of interest that ChatGPT is based on a version of a predecessor AI app known as GPT-3. ChatGPT is considered to be an incremental next step, referred to as GPT-3.5. It is anticipated that GPT-4 will likely be released in the Spring of 2023. Presumably, GPT-4 is going to be an impressive step forward in terms of being able to produce seemingly even more fluent essays, going deeper, and being an awe-inspiring marvel as to the compositions that it can produce.

You can expect to see a new round of expressed wonderment when springtime comes along and the latest in generative AI is released.

I bring this up because there is another angle to keep in mind, consisting of a potential Achilles heel to these better and bigger generative AI apps. If any AI vendor makes available a generative AI app that frothily spews out foulness, this could dash the hopes of those AI makers. A societal spillover can cause all generative AI to get a serious black eye. People will undoubtedly get quite upset at foul outputs, which have happened many times already and led to boisterous societal condemnation backlashes toward AI.

One final forewarning for now.

Whatever you see or read in a generative AI response that seems to be conveyed as purely factual (dates, places, people, etc.), make sure to remain skeptical and be willing to double-check what you see.

Yes, dates can be concocted, places can be made up, and elements that we usually expect to be above reproach are all subject to suspicions. Do not believe what you read and keep a skeptical eye when examining any generative AI essays or outputs. If a generative AI app tells you that Abraham Lincoln flew around the country in his own private jet, you would undoubtedly know that this is malarky. Unfortunately, some people might not realize that jets weren’t around in his day, or they might know but fail to notice that the essay makes this brazen and outrageously false claim.

A strong dose of healthy skepticism and a persistent mindset of disbelief will be your best asset when using generative AI.

We are ready to move into the next stage of this elucidation.

When A Ban Becomes Not Much Of A Ban

Now that we’ve got the fundamentals established, we can dive into the question of putting bans on ChatGPT. We will start with practical realities that come to play.

In the case of the NYC Department of Education, they apparently have blocked access to ChatGPT on their internal networks and their connected devices.

An obvious loophole is that a student could presumably use a different Wi-Fi network via their smartphone or other online provider and readily skirt around the blockage taking place on the campus electronic network. Envision a student sitting in a classroom who for whatever reason decides they want to use ChatGPT. They can go to the settings on their smartphone and choose a Wi-Fi network other than the campus-provided instance. Voila, the student can be using ChatGPT while seated at their desk and presumably performing school-related work.

Ban eclipsed.

Another concern that some have mentioned is that students can obviously use ChatGPT as much as they wish while at home, since they are not using the campus network there. All that this ban seems to do is attempt to curtail usage while on-campus or otherwise when directly using the campus-provided network (a possibility via remote access too).

Worse still, some lament, those students who cannot afford Internet access at home are being denied (in a sense) something that other more affluent students can make use of. Whereas those affected students would have been able to use ChatGPT at school, they aren’t being allowed to do so. Perhaps this is dividing the students into the haves and the have-nots, unfairly so.

A policy with inadvertent adverse consequences, one might suggest.

We can pile more onto the thinly supported back of this attempted ban.

ChatGPT is not the only game in town. There are numerous other generative AI apps. If the ban is based solely on scanning for the ChatGPT app, all of those other generative AI apps are apparently free to roam. A student could use the campus network and opt to select a different generative AI app. By and large, the other such AI apps are comparable and will pretty much do the same things as ChatGPT.

I’ll add a few more straws to see if this camel is going to cave in. The AI maker of ChatGPT has indicated that soon an API (Application Programming Interface) will be made available for the AI app. In short, an API is a means of allowing other programs to make use of a given application through a programmatic portal into it. This means that just about any other program on this planet can potentially leverage the use of ChatGPT (well, as licensed and upon approval by the AI maker of ChatGPT).

Suppose that a company makes an educational app that helps students to do time management. Great, probably heralded by most school districts. The maker of the educational app decides to use the ChatGPT API and ergo provide a generative essay capability inside of their app. You see, their educational app is what the student sees, meanwhile in the background, the app invokes ChatGPT and passes prompts to it, collects the essays generated, and displays those to the student.
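The backend-wrapper pattern described above can be sketched in a few lines of code. Everything specific in this sketch is an assumption: the endpoint URL, the payload shape, and the authorization scheme are invented placeholders, since the actual ChatGPT API had not yet been published at the time of writing. The point is the architecture, not the particulars: the student interacts only with the educational app, while the app quietly relays prompts to a generative AI service and displays the returned essay.

```python
import json
import urllib.request

# Hypothetical placeholder endpoint; NOT a real ChatGPT API address.
API_URL = "https://api.example.com/v1/generate"

def generate_essay(prompt, api_key, fetch=None):
    """Relay the student's prompt to the backend generative AI service
    and return the generated essay text.

    `fetch` can be swapped out (e.g., for testing); by default it
    performs a plain HTTP POST to the placeholder endpoint.
    """
    payload = json.dumps({"prompt": prompt}).encode("utf-8")
    headers = {
        "Authorization": f"Bearer {api_key}",  # assumed auth scheme
        "Content-Type": "application/json",
    }
    if fetch is None:
        def fetch(url, data, hdrs):
            req = urllib.request.Request(url, data=data, headers=hdrs)
            with urllib.request.urlopen(req) as resp:
                return resp.read().decode("utf-8")
    raw = fetch(API_URL, payload, headers)
    # Assumed response shape: {"text": "...the generated essay..."}
    return json.loads(raw)["text"]
```

From the school network's vantage point, the traffic goes to the educational app's own backend, not to ChatGPT, which is why a ban that merely scans for the ChatGPT app would never see it.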

A school district that is merely scanning for the ChatGPT app would be highly unlikely to know or discover that on the backend of the educational app is the use of ChatGPT. You could say that ChatGPT is hidden from view. A student who knows that the educational app is calling out to ChatGPT could easily launch the educational app and subvert the ban. Easy-peasy.

I believe that’s probably enough right now on why this particular ban is somewhat shaky.

Wait a second, the retort goes, if ChatGPT is bad for students, the effort to ban its usage is laudable and we should be applauding these policies. Students who choose to subvert the ban, whether on-campus or off-campus, are only hurting themselves by utilizing something that has negative impacts on student learning. They are going to subvert their own education.

A weak ban is at least an attempt to right this untoward situation, they exhort. Sure, the ban might have gaping holes, but you have to give the administrators credit for trying. Maybe they can tighten up the ban. Perhaps they will figure out additional provisions to make the ban stronger.

Furthermore, the ban has vital symbolic value. The entity is telling everyone that ChatGPT is anathema to the education of today’s students. Parents will potentially be alerted. AI makers that provide similar apps will be put on notice. Namely, do not try peddling this ugly stuff to our beloved pupils.

Perhaps strict policies could be drafted and implemented that abundantly declare that no generative AI is allowed for use at any time by any student, regardless of whether on-campus or off-campus. Any student caught using such a generative-based AI app would be subject to harsh penalties, possibly including being expelled from school. Be tough on violators of the policy. Show them you mean business.

Things might take an even further step. If a student uses generative-based AI and tries to surreptitiously get away with doing so, they will forever have a looming shadow over them. At some later point, if it is discovered that a student did use generative AI and failed to disclose it, they would possibly have their degree revoked or have dour marks placed on their academic record. Slam the lid on those that are contemplating using generative AI. They should be so scared and nervous about breaking the rules that it will prevent them from putting one iota of effort toward doing so. They will be frozen in abject fear.

The counterargument to these retorts is that the whole matter seems to be blown out of proportion. Draconian penalties are not the way to go. You are making a mountain out of a molehill. And you are missing the boat on the advantages and benefits of using generative AI.

Allow me to explain what the various mentioned benefits are.

Some believe that generative AI can aid students in devising better essays. A student might use an app such as ChatGPT to prepare an essay that they do not intend to turn in. Instead, they are aiming to study the generated essay. Since the essays are usually well-written, a student can closely inspect the wording, the structure, and other salient aspects. Thus, you could assert that this is a helpful learning tool.

Another advantage to using generative AI is that a student can submit their essay to the AI app and ask for a review of the essay. Apps such as ChatGPT will typically do a surprisingly decent job of dissecting a provided essay. It might not be as insightful as a review by a teacher, but the ease of use and being able to repeatedly use an AI app as much as you like makes this a useful approach (presumably, not in lieu of the teacher, instead augmenting the teacher and their limited availability).

We can keep going.

Often a student might be unsure or ostensibly puzzled when trying to come up with how to proceed on an assigned essay project. They stare at a blank sheet of paper. What are they to do? A sense of desperation and despair overtakes their spirit. Maybe they abandon the effort and resolve that they will take a flunking grade. Sadness ensues.

The student could ask generative AI such as ChatGPT to produce a proposed outline or at least some point-me-there suggestions for the essay. Based on the outputted ideas, the student reworks the structure and then writes the essay. On their own. Whether this use of the AI as a starter or engager is “cheating” depends upon your perspective. Admittedly, the AI app got the student underway, though you could contend that as long as the student wrote the essay, the AI having merely given clues on how to proceed is a small price to pay.

Slightly shifting gears, for the qualms about generative AI producing outputs that contain falsehoods or errors, the typical rejoinder is that students already need to realize that whatever they read, whether found on the Internet or elsewhere, can contain misinformation and disinformation. We have to make sure that students develop the appropriate skills needed to discern what is valid versus what is questionable in terms of what they read.

We’ll add generative AI to that list of sources to be scrutinized.

The gist is that students should be shown how to eye any generative AI outputs with a bit of cautionary interpretation and openly question what they read. You can take this a helpful leap forward. Give the students assignments involving using generative AI to intentionally prod the AI app into producing falsehoods. You are getting a twofer. One is that you are showing the students how this AI can generate erroneous outputs, and you are improving their skills at detecting and dealing with misinformation and disinformation. You might claim that we can turn a downside into a kind of upside, using the problems of generative AI as a learning tool on a broader basis for coping with the modern-day world and the deluge of sour and dour information.

I could go on with additional ways to use generative AI for bona fide educational pursuits. For additional coverage, see the link here.

All told, the camp that says we ought to embrace generative AI is bound to point out that you are not going to turn back the clock anyway. Apps like ChatGPT are going to be coming out of the woodwork. You are not feasibly going to find ways to stop the bandwagon. You might as well hop on board.

That being the case, you don’t have to let chaos prevail. This camp urges that schools need to figure out policies that seek to balance the badness of essay generation with the goodness that these AI apps can provide. Show the students how to use these generative AI apps, in the right ways and how to avoid the wrong ways.

Whatever you do, certainly do not blindly unleash the use of generative AI. Work with teachers on setting policies. Make sure teachers are comfortable with using these AI apps. Introduce the generative AI apps to the students and explain what is allowed and not allowed.

The generative AI ship has already sailed.

The genie is out of the bottle.

Knee-jerk reactions to banning these AI apps are ultimately going to be futile. The other concern is that you are somewhat egging students on. By telling them they can’t use this technology, there will potentially be a tidal wave of interest in using it. You risk turning otherwise honest and fair-minded students into bandits, mainly because the schools made a big to-do about invoking bans.

We all know that sometimes a forbidden fruit becomes all the more alluring. It could be that these weak bans will spur student use, far beyond what otherwise might have taken place.

Returning to my earlier indication about stratifying bans, here’s where things seem to land in the case of these recent efforts to ban ChatGPT (which I’ll broadly refer to as Generative AI):

  • Absolute Ban of Generative AI: Not feasible per se and not yet especially tried
  • Partial Ban of Generative AI: Almost what has been tried, but has lots of gaping holes
  • Weak Ban of Generative AI: What seemingly is being tried, rampantly weak and likely ineffectual
  • No Ban on Generative AI: Everyone else that is waiting to see what happens and what to do

There is also the other side of the coin, namely seeking to enable or support Generative AI, such as represented in this spectrum:

  • Acknowledgment of Generative AI: Insists that schools need to at least acknowledge the existence of generative AI
  • Mild Acceptance of Generative AI: Schools should allow the usage of generative AI in limited ways
  • Full Acceptance of Generative AI: Schools ought to embrace generative AI in a comprehensive way
  • Mandatory Requirement of Generative AI: Schools should overtly require the use of generative AI and make it a part of their curriculum and pedagogical methods

Conclusion

The famous editor of the Whole Earth Catalog, Stewart Brand, said this notable line: “Once a new technology rolls over you, if you’re not part of the steamroller, you are part of the road.”

Some fervently believe that schools trying to ban generative-based AI are misguided. They are confused about what this AI can do and how to harness the good along with the bad. They have to wake up and be part of the steamroller, or else they will find themselves appallingly outdated and become a pot-holed forsaken rolled over part of the road (as will their students).

Others contend that it is too early to jump onto the generative-based AI craze.

Either take no action right now or take some mild action. Wait and see what arises. If there is a valid use and need for these AI apps, okay, let’s study this and systematically and cautiously figure out the best course of action entailing them in an educational milieu.

The retort by the advocates is that waiting like this, which could take years, will leave the entire matter in a state of turmoil. All manner of untoward issues will indubitably arise when there isn’t any explicit guidance. Students will be caught off-guard when suddenly they discover to their surprise that they weren’t supposed to be using these AI apps. Some will be accused of using the generative-AI apps when they weren’t doing so, a quite possible false-positive accusation that is bound to be made. On and on the morass will widen and deepen.

What do you think should be done?

Give this some sobering and mindful thought. Yes, be mindful. We are talking about the students of today and their future, and our future too. A fitting remark often attributed to Abraham Lincoln might be instructive: “The best way to predict your future is to create it.”

Let’s do so.
