Opinion | Artificial intelligence

No, you can’t tell when something was written by AI

Context matters as much as content in determining whether text is machine generated or not

It’s easy to spot when something was written by artificial intelligence, isn’t it? The text is so generically bland. Even if it seems superficially impressive, it lacks edge. Plus there are the obvious tells — the em dashes, the “rule of three” examples and the constant use of words like “delve” and “underscore”. The writing, as one machine learning researcher put it, is “mid”.

Yet every single one of these apparently obvious giveaways can be applied to human writing. Three consecutive examples are a common formulation in storytelling. Words like “underscore” are used in professional settings to add emphasis. Journalists really love em dashes. None of it is unique to AI.

Read the “how to spot undisclosed AI” guides from the likes of Wikipedia and you’ll receive a lot of contradictory advice. Both repetition and variation are supposed to be indicators. Even AI detection tool providers acknowledge that, because AI models are evolving and “human writing varies widely”, they cannot guarantee accuracy. Not that this has stopped a cottage industry of online “experts” declaring that they can just tell when something apparently written by a person was really generated by AI.
