Analysis | Is ChatGPT an eloquent robot or a misinformation machine?

Chatbots are replacing humans in call centers, but they aren’t as good at answering more complex customer questions. That may be about to change, if ChatGPT’s release is anything to go by. The program draws on massive amounts of information to generate natural-sounding text in response to questions or prompts. It can write and debug code in a range of programming languages, generate poems and essays, and even mimic literary styles. Some experts have called it a groundbreaking feat of artificial intelligence that could replace humans for a host of tasks – and a potential disruptor for major companies like Google. Others warn that tools like ChatGPT could flood the web with clever-sounding misinformation.

1. Who is behind ChatGPT?

It was developed by OpenAI, a San Francisco-based research lab co-founded in 2015 by programmer and entrepreneur Sam Altman, Elon Musk, and other wealthy Silicon Valley investors to develop AI technology that “benefits all of humanity.” OpenAI has also built software that can beat humans at video games, and a tool known as DALL-E that generates images – from the photorealistic to the fantastical – from text descriptions. ChatGPT is the latest member of the GPT (Generative Pre-trained Transformer) family of text-generating AI programs. It’s currently free to use as a “research preview” on the OpenAI website, but the company wants to find ways to monetize the tool.

OpenAI’s investors include Microsoft Corp., which invested $1 billion in 2019; LinkedIn co-founder Reid Hoffman’s charitable foundation; and Khosla Ventures. Although Musk was a co-founder and an early donor to the nonprofit, he ended his involvement in 2018 and has no financial stake, OpenAI said. OpenAI created a for-profit entity in 2019, but with an unusual financial structure: returns for investors and employees are capped, and any profits beyond the cap flow back to the original nonprofit.

2. How does it work?

GPT tools can read and analyze large amounts of text and generate sentences that resemble how humans talk and write. They are trained in a process called unsupervised learning, which involves finding patterns in a dataset without labeled examples or explicit instructions on what to look for. The most recent version, GPT-3, ingested text from all over the web, including Wikipedia, news sites, books, and blogs, in an effort to make its answers relevant and well-informed. ChatGPT adds a conversational interface on top of GPT-3.
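
The idea behind unsupervised next-word prediction can be illustrated with a toy sketch: count which words follow which in raw, unlabeled text, then sample continuations from those counts. The Python below is a hypothetical illustration only – a simple bigram counter, vastly cruder than the transformer networks behind GPT – but the training signal is the same kind: statistical patterns found in plain text, with no labels.

    # Toy illustration of unsupervised next-word prediction (not OpenAI's code).
    # The only "training data" is raw, unlabeled text: the model counts which
    # word follows which, then samples continuations from those counts.
    import random
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the dog sat on the rug".split()

    # Bigram counts: for each word, how often each other word follows it.
    follows = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        follows[current][nxt] += 1

    def generate(start, length=8):
        """Extend `start` by repeatedly sampling a next word from the counts."""
        words = [start]
        for _ in range(length):
            options = follows.get(words[-1])
            if not options:
                break  # dead end: this word never appeared mid-corpus
            nxt = random.choices(list(options), weights=list(options.values()))[0]
            words.append(nxt)
        return " ".join(words)

    print(generate("the"))  # e.g. "the dog sat on the mat and the cat sat"

GPT replaces the count table with billions of learned parameters, but like the toy, it is predicting plausible next words, not checking facts.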

3. What was the response?

More than a million people signed up to use ChatGPT in the days following its launch in late November. Social media has been abuzz with users trying out fun, low-stakes applications of the technology. Some have shared its answers to obscure trivia questions. Others marveled at its sophisticated historical arguments, college essays, pop song lyrics, poems about cryptocurrency, meal plans that meet specific dietary needs, and solutions to programming challenges.

4. What else can it be used for?

One possible use is as a replacement for a search engine such as Google. Instead of making users sift through dozens of articles on a topic, or serving up a snippet lifted from a single website, it could deliver a tailored answer. It could take automated customer service to a new level of sophistication, providing a relevant answer the first time so users don’t have to wait to talk to a human. And it could draft blog posts and other PR content for companies that would otherwise need the help of a copywriter.
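
As a hedged sketch of what plugging GPT into a customer-service flow might look like, the snippet below uses the Completion endpoint of OpenAI’s public API via the pre-1.0 `openai` Python package. The API key, model choice, prompt wording, and helper function are illustrative placeholders, not a description of any company’s actual integration.

    # Sketch: answering a customer question with OpenAI's completion API.
    # Assumes the pre-1.0 `openai` Python package (pip install openai).
    # The key, model, and prompt are placeholders, not a production setup.
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder; load from an env var in practice

    def answer_customer(question: str) -> str:
        prompt = (
            "You are a support assistant for an online store. "
            "Answer the customer's question clearly and briefly.\n"
            f"Customer: {question}\nAssistant:"
        )
        response = openai.Completion.create(
            model="text-davinci-003",  # GPT-3-family model of the era
            prompt=prompt,
            max_tokens=150,
            temperature=0.2,  # low temperature for steadier, less creative answers
        )
        return response["choices"][0]["text"].strip()

    print(answer_customer("How do I return an item I bought last week?"))

Even in a setup like this, the output would need the scrutiny described below: the model returns fluent text, not verified facts.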

5. What are its limitations?

The answers ChatGPT assembles from second-hand information can sound so authoritative that users may assume it has verified their accuracy. What it really does is produce text that reads well and sounds smart but may be incomplete, biased, partly wrong, or occasionally nonsense. The system is only as good as the data it was trained on. And because its output lacks useful context, such as the source of the information, as well as the typos and other rough edges that often flag unreliable material, it can be a minefield for anyone not versed enough in a subject to spot a flawed answer. This issue led Stack Overflow, the question-and-answer site for computer programmers, to ban ChatGPT-generated answers because they were so often inaccurate.

6. What about ethical risks?

As machine intelligence becomes more sophisticated, so does its potential for deceit and mischief. Microsoft’s AI bot Tay was pulled in 2016 after users taught it to make racist and sexist remarks. Another, developed by Meta Platforms Inc., ran into similar problems in 2022. OpenAI has tried to train ChatGPT to refuse inappropriate requests, limiting its ability to spread hate speech and misinformation, and Altman, OpenAI’s chief executive, has encouraged people to flag unsavory or offensive answers with a thumbs-down so the system can improve. But some users have found workarounds. At its core, ChatGPT generates chains of words without understanding their meaning, so it may not pick up on the gender and racial biases embedded in books and other texts, the way a human reader would. It is also a potential tool for deceit. Teachers worry about students getting chatbots to do their homework. And lawmakers could be inundated with letters ostensibly from constituents complaining about proposed legislation, with no way to tell whether they are real or generated by a chatbot on behalf of a lobbying firm.

More stories like this are available at bloomberg.com
