What are the risks of ChatGPT? Artificial intelligence answers

11 minutes
12/21/2022
Image: "Cyberpunk robot typing in a typewriter", interpreted by DreamStudio

The future began last week. While it was anticipated that sooner or later an artificial intelligence tool would be able to write coherent texts, answer questions and put complex ideas into simple words, it came as a surprise to many when ChatGPT, a tool that delivers on these promises, was opened to the public in early December.

Since then, social networks have been filled with experiments and jokes: users have asked the chatbot to compose songs, invent fictional scenes or tell jokes, while others have tested the veracity and coherence of its answers, which remain vulnerable to errors and "hallucinations" in which the chatbot mixes up facts eloquently but inaccurately.

Beyond the novelty of the tool, which has already been used by a million people, the risks that this artificial intelligence, like the models developed to generate images, may pose in amplifying misinformation or deepening discrimination are now on the table. It is a huge challenge for OpenAI, the company that developed ChatGPT, whose stated mission is to ensure that artificial intelligence benefits humanity.

Among its virtues, ChatGPT is also remarkably forthcoming. For that reason, we "interviewed" the bot about the main concerns that have been raised about how it works. Here are its answers:

What are the main risks of ChatGPT related to misinformation?

Since ChatGPT is a language model trained by OpenAI, it is not designed to detect or combat misinformation. Instead, its goal is to generate text that is coherent and natural in response to a given input. This means that ChatGPT can repeat false information if it is provided as input, which can lead to the spread of misinformation. Therefore, it is important to be careful when using ChatGPT and to check the veracity of the generated information before sharing it.

Is there any way to prevent ChatGPT from reproducing racial or gender-based stereotypes, for example?

Since ChatGPT is a language model trained on large sets of existing text, it can reproduce any type of language or stereotype found in those data sets. That said, it is possible to influence the type of language that ChatGPT generates by carefully selecting the inputs and properly configuring its training. For example, data sets that are free of inappropriate language or stereotypes can be used, and supervised learning techniques can be applied to teach ChatGPT to avoid certain types of language. However, careful monitoring and analysis would be required to accomplish this effectively.
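As an editorial aside, here is a minimal, hypothetical sketch of what "carefully selecting the inputs" could look like in practice: screening candidate training examples against a blocklist before they are used for fine-tuning. The blocklist terms and example sentences are placeholders for illustration, not anything OpenAI actually uses.

```python
# Hypothetical sketch: screen candidate training examples against a blocklist
# before fine-tuning. The terms and examples below are placeholders only.

BLOCKLIST = {"stereotype_term_a", "stereotype_term_b"}  # placeholder terms

def is_clean(example: str) -> bool:
    """Return True if the example contains no blocklisted term."""
    lowered = example.lower()
    return not any(term in lowered for term in BLOCKLIST)

candidate_examples = [
    "A neutral sentence about the weather.",
    "A sentence containing stereotype_term_a.",
]

clean_examples = [ex for ex in candidate_examples if is_clean(ex)]
print(f"Kept {len(clean_examples)} of {len(candidate_examples)} examples.")
```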

Are there any content moderation rules in ChatGPT?

As ChatGPT is only a language model, it has no built-in content moderation rules. Instead, it is the responsibility of the user or developer using ChatGPT to implement and enforce any content moderation rules they deem necessary. This may include filters for inappropriate words, acceptable use policies, and procedures for reporting and addressing inappropriate content. As always, it is important to note that ChatGPT only generates text in response to a given prompt, so the responsibility for any inappropriate content or misinformation rests with the user who provided that prompt.
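To make that idea concrete, here is a minimal, hypothetical sketch of the kind of word filter and review step a developer could place between the model's output and end users. The blocklist and the review logic are illustrative assumptions, not a feature of ChatGPT itself.

```python
# Hypothetical sketch: filter model output against a blocklist and route
# flagged responses to human review. Terms and logic are illustrative only.

BLOCKLIST = {"inappropriate_term_a", "inappropriate_term_b"}  # placeholders

def needs_review(text: str) -> bool:
    """Return True if the text should be withheld for human review."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

model_output = "A model-generated answer mentioning inappropriate_term_a."

if needs_review(model_output):
    print("Response withheld and reported for human review.")
else:
    print(model_output)
```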

Is there a way to know what ChatGPT's sources are to give an answer?

ChatGPT is a language model trained by OpenAI. As such, it does not have a single source of information, but has been trained on large sets of existing text. This includes a wide variety of texts, from news and scientific articles to online conversations and fictional dialogues. Because of this, it is not possible to know the exact sources ChatGPT uses for each answer given. However, it is possible to get a general idea of the type of information ChatGPT has been trained to generate.

Screenshot of the interview with ChatGPT

Will ChatGPT replace search engines like Google?

ChatGPT is not likely to replace search engines. While it may provide useful information in certain situations, it is not designed to search the web or index large amounts of information as search engines do. In addition, search engines use a completely different set of technologies and algorithms to find and rank relevant information on the web. Therefore, it is unlikely that ChatGPT can replace search engines in their primary function.

Will ChatGPT work with fact-checkers in the future?

I have no information on this, as I am not connected to the Internet and cannot look it up. However, it is possible that in the future language models will be developed to work in conjunction with fact-checkers, but that will depend on how the technology and market needs evolve.

How do you ensure that ChatGPT's responses respect the copyright of the sources that feed it?

ChatGPT is a generative language model. As such, it does not have the ability to access external sources of information and therefore cannot infringe copyrights. Instead, ChatGPT uses its built-in knowledge and natural language understanding capabilities to generate answers to questions posed to it. Because it does not use external sources of information, there is no need to worry about copyright compliance in the answers it generates.

To delve deeper into the emergence of ChatGPT and the advances and future of artificial intelligence, we recommend this conversation between mathematician Luis Serrano, creator of the YouTube channel Serrano Academy, and Carlos Cortés, co-founder of Green Lantern.
