College Student Made App That Exposes AI-Written Essays

  • The problem with AI is that if it is writing an essay, it has no clue what it is actually doing.

    In my opinion, everything it has spit out is basically reiterating on the prompt over and over again, basically saying the same thing two or three times and then making an awkward jump into adjacent subject and repeating that. Any human should be able to detect a large body of text being generated by AI, if they actually read the text.

    The problem is teachers don’t read essays, they could just put it in a

    • The problem with AI is that if it is writing an essay, it has no clue what it is actually doing.

      To be fair, “s/AI/most students/” would probably also be accurate.

    • That sounds a lot like the kinds of writing I handed in for assignments in junior high and high school, if I’m going to be perfectly honest.

  • I wrote a ChatGPT detector the day that ChatGPT was released to the public! It works really good, too.

    I was about to release it to the world, but then my AI-detecting AI program advised me that if I don’t release the source code, and don’t show proof of my algorithm’s effectiveness by running it against several thousands of samples, I can just claim to the world that I’ve achieved success without needing to actually prove it at all!

    • I wrote a ChatGPT detector the day that ChatGPT was released to the public! It works really good, too.

      His tweet listed in TFA notes:

      “I spent New Years building GPTZero…”

      meaning he spent either one day, one weekend, or one holiday week developing this, when most people were drinking and partying, so it *must* be good and accurate./sarcasm

      Going forward, can’t wait for someone who actually wrote their essay to get erroneously flagged by this (or another app) as having the essay written by AI and then marked down/failed and see how that all falls out…

      • Wouldn’t it be ironic if he made use of GitHub’s Copilot to write his app?

  • … and how many students will be falsely flagged as AIgiarists by this tool (or others like it).

    • Putting new meaning into monotone and robotic writing.

    • Just to be safe they’ll want to run the tool against the essay that they totally wrote all by themselves. If it flags they’ll want to tweak it up a bit till it correctly identifies as being totally honest-to-goodness written by them.

      • That will be simple. Just put in a few spelling and grammar errors because the AI is programmed not to do such things.

        • Nor do students, if they use the regular spelling/grammar-checking functionality of the application they write the essay in. And if it were that easy to fool the detector, you could just take the AI-written essay and change some grammar/spelling.

  • Teachers should assume any take-home writing assignment will be influenced by friends, Wikipedia, paid writing centers, etc. This is how writing happens in the real world. Chat Bots are just another tool.

    If you really want to know what a student can do on their own, then sit them with a pencil and a piece of paper in a silent Faraday cage for an hour and see what they produce. But that would be pointless, because that’s not how anyone in the real world writes.

    • Far easier: Have them give a talk about the text they wrote and make them answer questions. I never, ever had to just hand in a writing assignment after finishing school. I always had to present and defend it. Of course that was in CS studies (MA and PhD), not some field where writing is a core skill…

    • Not by the few students who actually do the assignments themselves. I’ve even heard that a few of them like it; to them this is a boon, as it filters out the cheaters, i.e. the ones that just apply a bit of electricity to the problem. Hold on, teachers don’t read essays? And this is well known by the students? In that case we have a serious issue: writing stuff is boring, and if you know it’s not going to be read by anyone, not even the person that gave you the task in the first place, the incentive not to

  • Student finds himself no longer invited to parties.

  • The key here is that ChatGPT is closed-source, and that OpenAI has no interest in hiding ChatGPT’s authorship.

    That said, even with the above handicaps (i.e. one side isn’t even playing the cat-and-mouse game), I suspect the detector could be fooled by, for example, manually rewriting a phrase within the text and re-submitting it to ChatGPT for continuation from that point. Or something comparably simple. No way to test it at this point.

    Long-term, there will not be a reliable detector. Any such detector would require detecting statistical regularities that could then be automatically avoided. Without even searching I’m sure somebody has already used generative adversarial networks for this.

    • Long-term, there will not be a reliable detector. Any such detector would require detecting statistical regularities that could then be automatically avoided.

      1) A text generator is created.

      2) A tool to check for generated text is created.

      3) A new version of generator is released. It includes a test from step 2), and only outputs text that passes the test as clean.

      4) Finally, nothing, not even a body of human experts, can distinguish AI- and human-generated text.
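      The escalation in step 3 amounts to rejection sampling: the generator keeps drawing candidates and only emits one the public detector passes. A minimal sketch, with both the generator and the detector as invented stand-ins (a real detector would score perplexity or similar, not word repetition):

```python
import random

def detector_flags(text: str) -> bool:
    """Stand-in for a public AI-text detector: flags overly repetitive text."""
    words = text.split()
    return len(set(words)) < 4  # crude repetitiveness heuristic

def generate_candidate(rng: random.Random) -> str:
    """Stand-in for the text generator (random filler for the sketch)."""
    vocab = ["essay", "topic", "argues", "however", "therefore", "evidence"]
    return " ".join(rng.choice(vocab) for _ in range(12))

def generate_clean(max_tries: int = 100, seed: int = 0) -> str:
    """Step 3: only output text that passes the detector as clean."""
    rng = random.Random(seed)
    for _ in range(max_tries):
        text = generate_candidate(rng)
        if not detector_flags(text):
            return text
    raise RuntimeError("could not evade detector")
```

      Once the detector is folded into generation like this, publishing the detector is exactly what makes it obsolete.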

      We have had a similar fight between viruses and anti-viruses for a long time. Right now antiviruses are winning because they look at the behaviour of the virus (dynamic analysis) rather than its code (static analysis). But advertisements have blurred the line between “good” and “bad” code so that nobody can really say what is good or bad. The world seems to happily accept such compromise. My browser renders both real Slashdot content and ads all the same, and I don’t care.

      I guess the same will happen with text.

      We will just accept that some of it is not real human speech.

  • On a bad or unlucky day, a human might write just like ChatGPT would, especially for a short thing of a paragraph or two.

    So I’m not sure what the value of this is. Maybe a prof/teacher would use it on a student’s alleged work, then confront them in an interrogation and hope that they break and confess. Doesn’t seem that reasonable of a process.

    • Maybe a prof/teacher would use it on a student’s alleged work, then confront them in an interrogation and hope that they break and confess.

      This is what happens when they fail the Turing test.

    • This is exactly what I was thinking as well… or maybe not necessarily so bad or unlucky, to be honest. I actually think it’s quite probable.

      What concerns me is the chance that the cheaters are going to take additional precautions to avoid detection by using this tool themselves to ensure that what they hand in has a low chance of being seen as automatically generated, while people who don’t cheat in the first place and don’t cross check their submissions with such tools may actually be more likely to

    • Naa, just use it as an indicator of “low quality, insightless reasoning”. That way you can fail them for the quality, not because of plagiarism. ChatGPT produces a lot of bullshit when asked real questions and has a bad tendency to make things up.

  • If you use a publicly available AI-paper detecting tool, then when you have ChatGPT, you only need to run this tool against it to see if it flags it. If it flags it, regenerate, or change the output enough that it’s no longer flagged. Remember, this is a cat/mouse/cat/mouse thing. Make a better forgery detector and you’ll just end up with better forgeries.
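    The mouse’s side of that loop can be sketched as iterative laundering: keep lightly perturbing a flagged text until the detector stops flagging it. Everything here is an invented stand-in (the synonym table, the detector), purely to illustrate the loop:

```python
import random

# Hypothetical table of "AI-sounding" words and plainer replacements.
SYNONYMS = {"utilize": "use", "therefore": "so", "additionally": "also"}

def detector_flags(text: str) -> bool:
    """Stand-in detector: flags text containing words from the table."""
    return any(word in text.split() for word in SYNONYMS)

def perturb(text: str, rng: random.Random) -> str:
    """Swap one flagged word for its plainer synonym."""
    words = text.split()
    hits = [i for i, w in enumerate(words) if w in SYNONYMS]
    if hits:
        i = rng.choice(hits)
        words[i] = SYNONYMS[words[i]]
    return " ".join(words)

def launder(text: str, max_rounds: int = 20, seed: int = 0) -> str:
    """Perturb until the detector no longer flags the text."""
    rng = random.Random(seed)
    for _ in range(max_rounds):
        if not detector_flags(text):
            return text
        text = perturb(text, rng)
    return text
```

    Each round of this game forces the detector to learn subtler statistics, which the next round of perturbation then targets, which is why the forgeries keep getting better.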

  • It’s already here. For advertising clicks, this type of nonsense is already being generated:

    https://www.systranbox.com/unlock-your-potential-exploring-the-many-ways-to-utilize-your-linux-experience/

    I’ve seen this template all over the place when searching for technical info on configuring mail servers, and similar. The search engines, as garbage as they are, are going to get much worse. They should filter this crap out, but that would require

    …. intelligence?

    or “the dangers of eating rocks” yields this gem

  • AI-written essays may be detectable now, but once the AI incorporates the results of AI detectors into its cost function, it’ll be a mess. And more AI will come up with better measures to distinguish AI writing from genuine human writing.

    But would it matter? For the next obvious big thing to do is to have AI readers. Who has time to read all that stuff? Have a robot summarize it.

    Once we have both ends of the literature pipeline totally AI-ized, no human will ever write anything, or ever read anything. It’s all bi


msmash