
Answer questions 46 to 50 based on the following passage:
Soon after ChatGPT debuted in 2022, researchers tested what the artificial intelligence (AI) chatbot would write after
it was asked questions peppered with conspiracy theories and false narratives. The results — in writings formatted as
news articles, essays and television scripts — were so troubling that the researchers minced no words in their criticism
of the new technology. Researchers predict that generative technology like ChatGPT could make disinformation cheaper
and easier to produce for an even larger number of conspiracy theorists and spreaders of disinformation. Personalized,
real-time chatbots could share conspiracy theories in increasingly credible and persuasive ways, researchers say,
smoothing out human errors like poor syntax and mistranslations and advancing beyond easily discoverable copy-paste
jobs. And they say that no available mitigation tactics can effectively combat it.
Predecessors to ChatGPT, which was created by the company OpenAI, have been used for years to pepper online
forums and social media platforms with comments and spam. Microsoft had to halt activity from its Tay chatbot within
24 hours of introducing it on Twitter in 2016 after trolls taught it to spew racist and xenophobic language. ChatGPT is
far more powerful and sophisticated. Supplied with questions loaded with disinformation, it can produce convincing,
clean variations on the content within seconds, without disclosing its sources. Recently, Microsoft and OpenAI introduced
a new Bing search engine and web browser that can use chatbot technology to plan vacations, translate texts or conduct
research.
OpenAI researchers have long been nervous about chatbots falling into villainous hands. In a 2019 paper, they voiced
their concern about their chatbot’s capabilities to lower costs of disinformation campaigns and aid in the malicious pursuit
of monetary gains, particular political agendas, and/or desires to create chaos or confusion. OpenAI uses machines and
humans to monitor content that is fed into and produced by ChatGPT. The company relies on both its human AI trainers
and feedback from users to identify and filter out toxic training data while teaching ChatGPT to produce better-informed
responses. OpenAI’s policies prohibit use of its technologyto promote dishonesty, deceive or manipulate users or attempt
to influence politics; the company offers a free moderation tool to handle content that promotes hate, self-harm, violence
or sex.
46 What is the main idea of this passage?
(A) ChatGPT should not be used by conspiracy theorists and fake news spreaders.
(B) ChatGPT developers believe their chatbot can help users discern misinformation.
(C) Newly developed chatbots like ChatGPT can produce well-formatted news articles.
(D) Misinformation is a serious problem with ChatGPT, which is hard to solve.
47 Which of the following statements is true about the Tay chatbot?
(A) It was removed from the shelves on its first day on the market.
(B) It was an advanced chatbot developed by OpenAI.
(C) Microsoft debuted it in 2016 to compete with ChatGPT.
(D) It was a failure because it was slow in learning new content.
48 According to the passage, what could be the best use of ChatGPT?
(A) Screening out sensitive content that promotes hate, self-harm, violence or sex.
(B) Designing travel itineraries, designing studies, or writing TV scripts instantly.
(C) Recognizing malicious software and political materials when processing input data.
(D) Lowering costs of disinformation campaigns for politicians.
49 According to the passage, which of the following is NOT done by ChatGPT designers in order to reduce
disinformation?
(A) Warning users that ChatGPT may sometimes respond with biased answers.
(B) Using humans and algorithms to check improper content generated by ChatGPT.
(C) Avoiding grammatical errors and wrong translations in the produced texts.
(D) Hiring human AI trainers to remove bad training materials from the database.
50 Which of the following best describes the purpose of this passage?
(A) To promote the use of ChatGPT as a new search engine.
(B) To explain how ChatGPT was developed by misinformation researchers.
(C) To raise alarms about newly developed AI chatbots like ChatGPT.
(D) To reduce readers’ worries about intelligent AI chatbots like ChatGPT.