Language generation is the hottest thing in AI right now, with a class of systems known as “large language models” (or LLMs) being used for everything from improving Google’s search engine to creating text-based fantasy games. But these programs also have serious problems, including regurgitating sexist and racist language and failing tests of logical reasoning. One big question is: can these weaknesses be overcome by simply adding more data and computing power, or are we reaching the limits of this technological paradigm?
This is one of the topics that Alphabet’s AI lab DeepMind is tackling in a trio of research papers published today. The company’s conclusion is that scaling up these systems further should deliver plenty of improvements. “One key finding of the paper is that the progress and capabilities of large language models is still increasing. This is not an area that has plateaued,” DeepMind research scientist Jack Rae told reporters in a briefing call.
DeepMind, which regularly feeds its work into Google products, has probed the capabilities of these LLMs by building a language model with 280 billion parameters named Gopher. Parameters are a quick measure of a language model’s size and complexity, meaning that Gopher is larger than OpenAI’s GPT-3 (175 billion parameters) but not as big as some more experimental systems, like Microsoft and Nvidia’s Megatron model (530 billion parameters).
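To make the idea of "parameters" concrete, here is a minimal illustrative sketch: parameters are simply the learned numerical values (weights and biases) inside a model, and a model's parameter count is the total number of them. The layer sizes below are invented for illustration and have nothing to do with Gopher's actual architecture; real LLMs accumulate billions of such values across many layers.

```python
def count_parameters(layer_sizes):
    """Count weights and biases in a simple fully connected network.

    Each pair of adjacent layers contributes a weight matrix
    (n_in * n_out values) plus a bias vector (n_out values).
    """
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out  # weight matrix
        total += n_out         # bias vector
    return total

# A tiny made-up network: 512 -> 2048 -> 512
print(count_parameters([512, 2048, 512]))  # prints 2099712, ~2.1 million
```

Scale that kind of bookkeeping up by five orders of magnitude and you arrive at the 280 billion figure quoted for Gopher.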
It’s generally true in the AI world that bigger is better, with larger models usually offering higher performance. DeepMind’s research confirms this trend and suggests that scaling up LLMs does offer improved performance on the most common benchmarks testing things like sentiment analysis and summarization. However, researchers also cautioned that some issues inherent to language models will need more than just data and compute to fix.
“I think right now it really looks like the model can fail in [a] variety of ways,” said Rae. “Some subset of those ways are because the model just doesn’t have sufficiently good comprehension of what it’s reading, and I feel like, for those class of problems, we are just going to see improved performance with more data and scale.”
But, he added, there are “other categories of problems, like the model perpetuating stereotypical biases or the model being coaxed into giving mistruths, that […] no one at DeepMind thinks scale will be the solution [to].” In these cases, language models will need “additional training routines” like feedback from human users, he noted.
To come to these conclusions, DeepMind’s researchers evaluated a range of different-sized language models on 152 language tasks or benchmarks. They found that larger models generally delivered improved results, with Gopher itself offering state-of-the-art performance on roughly 80 percent of the tests selected by the scientists.
In another paper, the company also surveyed the wide range of potential harms involved with deploying LLMs. These include the systems’ use of toxic language, their capacity to share misinformation, and their potential to be used for malicious purposes, like sharing spam or propaganda. All these issues will become increasingly important as AI language models become more widely deployed — as chatbots and sales agents, for example.
However, it’s worth remembering that performance on benchmarks is not the be-all and end-all in evaluating machine learning systems. In a recent paper, a number of AI researchers (including two from Google) explored the limitations of benchmarks, noting that these datasets will always be limited in scope and unable to match the complexity of the real world. As is often the case with new technology, the only reliable way to test these systems is to see how they perform in reality. With large language models, we will be seeing more of these applications very soon.