ChatGPT: an artificial intelligence at the heart of concerns


In late 2022, the arrival of ChatGPT caused a lot of buzz. This conversational robot, created by the artificial intelligence company OpenAI, aims to respond to all of our queries the way a human would. This is both its strength and its weakness. While ChatGPT makes many tasks easier for us, the results it provides are not flawless, and its errors can lead to real misinformation. Furthermore, while this new technology may seem simple and very useful at first glance, its similarity to human language can be concerning, and many ethical questions arise.

Limited information:

ChatGPT uses a language model trained on vast collections of digital text to produce passages in human language.
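The training-then-generation idea behind a language model can be illustrated with a toy sketch. The bigram model below is purely illustrative and nothing like the large neural network behind ChatGPT, but it rests on the same statistical principle: predict the next word from patterns seen in training text. All names and the sample corpus are invented for the example.

```python
import random
from collections import defaultdict

def train_bigram(corpus: str) -> dict:
    """Record, for each word, the words that follow it in the corpus."""
    words = corpus.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model: dict, start: str, length: int, seed: int = 0) -> str:
    """Sample a continuation by repeatedly picking an observed successor."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = model.get(out[-1])
        if not successors:
            break  # dead end: this word was never followed by anything
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the model predicts the next word and the next word follows the model"
model = train_bigram(corpus)
print(generate(model, "the", 6))
```

Because successors are sampled at random, repeated runs with different seeds yield different continuations, a (very) crude analogue of ChatGPT producing different texts for the same question.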

This system is quite impressive: for the same question it can produce different texts, rephrase them, and improve them. The information it provides is generally fairly objective: for example, when asked to explain controversial topics (such as wokism, feminism, or political parties), it presents the current situation and the different opinions on the subject without taking sides. And the information it provides is generally accurate.

However, this model is not without limitations. On the contrary, several points need to be raised:

Firstly, regarding the accuracy of its statements: in many cases ChatGPT can be pushed into providing false information, or it simply answers a question incorrectly (for example, when asked about data theft, a topic debated in legal scholarship and case law, it gives a definitive answer supposedly based on the penal code). When its error is pointed out, it readily acknowledges it and revises its response. Additionally, its knowledge is limited to information available up to 2021: it cannot answer questions about the result of a sporting event or a political election after that date. GPT-4, currently in development at OpenAI, may well solve this problem, but for now it remains a limitation. It is worth mentioning here that Bard, Google's equivalent of ChatGPT, made an error during a public demonstration: it claimed that the James Webb Space Telescope was the first to photograph a planet outside the Solar System, when in reality this had already been done in 2004…

Next, it should be noted that these factual errors are not the only concern regarding the accuracy of GPT-3's statements: biases are another flaw of the model. Indeed, the documents GPT-3 was trained on sometimes contain biases, such as racism or sexism, which it reproduces in its responses. The chatbot's objectivity is therefore not absolute, even though it endlessly repeats, whenever asked for its opinion, that "As an artificial intelligence, [it is] not able to have personal opinions. However, [it can] provide you with a definition of the term and give you a neutral perspective." Unfortunately, reusing the biases present in the training texts perpetuates them, and it is impossible to know where a given view on a specific subject comes from, since ChatGPT does not cite its sources. This last point also raises questions about the intellectual property of the work produced by the AI.

Finally, there is a risk of circulating false information produced by the artificial intelligence, for two reasons. On the one hand, ChatGPT delivers its responses with a disconcerting confidence, which leaves no room for doubt and encourages us to trust it rather than verify its answers, at the risk of then spreading the gross errors it is capable of producing. On the other hand, we must not overlook the possibility of a wave of fake news created through human intervention: ChatGPT can write texts around whatever information it is given, including false information, which could then be disseminated and presented as true. But that's not all: since OpenAI's language model feeds on information found on the internet, it could later rely on that same false information to answer users' questions.

Ethical questions:

ChatGPT is capable of writing high-quality texts similar to those produced by humans, which raises the question of its use in education. A controversy arose when it became clear that ChatGPT could facilitate cheating by some students: how can one determine whether a text was written by the student or by ChatGPT? The texts it creates are of very good quality, and it never proposes the same one twice, so its use for writing assignments is very difficult to detect.

This controversy prompted the development of tools to detect texts written by ChatGPT, such as GPT Zero (created not by OpenAI but by a student, Edward Tian). Indeed, certain features, such as sentence structure or punctuation, are used quite regularly by the AI tool, which can allow its output to be recognized. Once the disputed text is entered into GPT Zero, the tool evaluates, on a scale of 1 to 5, whether the text could have been written by ChatGPT. It is therefore not a certain answer, and the tool may let some texts slip through. In addition, in its current state, GPT Zero works mainly on texts of over 1,000 words written in English, which significantly limits its effectiveness. This is why companies separate from OpenAI have also taken up the subject: on January 16, a site was launched to determine whether or not a text was written by ChatGPT. And plagiarism-detection software, which is currently ineffective when the conversational robot enters the equation, will need to improve quickly.
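Detectors like GPT Zero rely on statistics such as perplexity: how "surprising" each word of a text is under a language model, with machine-generated text tending to be less surprising. The sketch below is a toy illustration of that idea only, using a simple unigram frequency model with Laplace smoothing; real detectors use large neural models, and the function name and corpora here are invented for the example.

```python
import math
from collections import Counter

def unigram_perplexity(reference: str, text: str) -> float:
    """Perplexity of `text` under a smoothed unigram model of `reference`.

    Lower values mean the text is more predictable given the reference;
    higher values mean it is more surprising.
    """
    counts = Counter(reference.split())
    total = sum(counts.values())
    vocab = len(counts) + 1  # +1 slot for unseen words
    log_prob = 0.0
    words = text.split()
    for w in words:
        # Laplace smoothing: unseen words get a small nonzero probability.
        p = (counts[w] + 1) / (total + vocab)
        log_prob += math.log(p)
    return math.exp(-log_prob / len(words))

reference = "the cat sat on the mat and the cat ran"
print(unigram_perplexity(reference, "the cat sat"))        # familiar words
print(unigram_perplexity(reference, "quantum zebra flotilla"))  # unseen words
```

A detector would compare such a score against a threshold, which is also why short texts are hard to classify: with few words, one score cannot be estimated reliably, consistent with GPT Zero working best on longer passages.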

But ultimately, since several institutions (including Sciences Po) have decided to ban its use, the question that really needs to be asked is that of its integration into education, as has been necessary with the emergence of many other new technologies. Especially since the quality of this AI will likely make it increasingly difficult to distinguish texts written by humans from those written by robots: it is therefore best to learn to master these tools as soon as possible.
