In an age of AI hype, large language models need three things

Last week, The Washington Post reported that a Google engineer believed that LaMDA, one of the company's large language models (LLMs), had become sentient.

The report triggered a slew of articles, videos, and social media debates on whether current AI systems understand the world as we do, whether AI systems can be conscious, and what the prerequisites for consciousness are.

We are at a stage where large language models have become good enough to convince many people, including engineers, that they are on par with natural intelligence. At the same time, they are still flawed enough to make silly mistakes, as computer scientists have repeatedly shown.

This matters because large tech companies are looking to commercialize these technologies by integrating them into applications used by hundreds of millions of people. Those applications must remain safe and robust to avoid confusing or harming their users.

Here are three things that can help cut through the hype and ambiguity surrounding large language models and other advances in artificial intelligence.

More transparency

Unlike academic institutions, technology companies do not have a habit of releasing their AI models to the public. They treat them as trade secrets to be hidden from competitors, which makes it extremely difficult to examine the models for possible harms.

Fortunately, there have been some positive developments in recent months. In May, Meta AI released one of its LLMs as an open-source model (with some restrictions) to bring more transparency and openness to the development of large language models.

By providing access to model weights, training data, training logs, and other valuable information about these models, researchers can probe them for weaknesses and verify the areas where they actually work well.

Transparency is also important in making clear to users that they are interacting with an AI system that does not necessarily understand the world as they do. Today's AI systems are very good at carrying out narrow tasks that do not require broad knowledge of the world. But they begin to fall apart as soon as they are applied to problems that require commonsense knowledge not explicitly stated in text.

As much as large language models have advanced, they still need hand-holding. Knowing that they are interacting with an AI agent lets users adapt their behavior and avoid steering the conversation into unpredictable territory.

More human control

As AI becomes more advanced, we can grant it more autonomy in making decisions. But until we figure out how to build human-level AI (and that's a big if), we should design our AI systems to complement human intelligence, not replace it. In a nutshell, just because LLMs have become much better at processing language does not mean that a chatbot is the only way humans should interact with them.

In his book Human-Centered AI, computer scientist Ben Shneiderman provides a complete framework for human-centered AI (HCAI), an effective approach to human oversight and control. For example, AI systems should provide confidence scores that indicate how reliable their output is. Other options include presenting multiple output suggestions, configuration sliders, and other tools that give users control over the behavior of the AI system they are using.
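
To make these controls concrete, here is a minimal sketch in Python of an application-level response object that carries confidence scores and several candidate outputs instead of a single authoritative answer. The class and field names are hypothetical illustrations of the HCAI principle, not an API from Shneiderman's book or any particular product.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Suggestion:
    text: str
    confidence: float  # model-estimated reliability, from 0.0 to 1.0

@dataclass
class AssistantResponse:
    suggestions: List[Suggestion]          # several candidates, not one "answer"
    low_confidence_threshold: float = 0.5  # below this, ask the user to verify

    def needs_review(self) -> bool:
        # True when no candidate is reliable enough to act on automatically
        return all(s.confidence < self.low_confidence_threshold for s in self.suggestions)

def render(response: AssistantResponse) -> None:
    """Show every candidate with its confidence so the user stays in control."""
    for i, s in enumerate(response.suggestions, start=1):
        print(f"{i}. {s.text}  (confidence: {s.confidence:.0%})")
    if response.needs_review():
        print("The assistant is unsure; please verify before acting on this output.")

if __name__ == "__main__":
    demo = AssistantResponse(
        suggestions=[
            Suggestion("Reschedule the meeting to 3 p.m.", 0.82),
            Suggestion("Cancel the meeting entirely.", 0.41),
        ]
    )
    render(demo)
```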

Another relevant field of study is explainable AI, which develops tools and techniques for probing the inner workings of deep neural networks. Naturally, very large neural networks like LaMDA and other LLMs are quite difficult to explain. Still, explainability should remain an important criterion for any applied AI system. In some cases, an interpretable AI system that performs slightly worse than a complicated black-box model can be preferable, because it reduces the confusion that opaque output creates.
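
As a toy illustration of that trade-off, the sketch below trains a tiny linear classifier whose decision can be read directly from its word weights, something a billion-parameter language model cannot offer so directly. The dataset and labels are invented for illustration, and the example assumes scikit-learn is installed.

```python
# A small, fully inspectable model: every word's weight shows how it pushes
# the prediction toward "complaint" or "praise".
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["refund my order", "love this product", "broken on arrival", "works great"]
labels = [1, 0, 1, 0]  # 1 = complaint, 0 = praise (toy data)

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)
model = LogisticRegression().fit(X, labels)

# Print each word's learned weight; positive values push toward "complaint".
for word, weight in zip(vectorizer.get_feature_names_out(), model.coef_[0]):
    print(f"{word:>10}: {weight:+.2f}")
```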

More structure

In his book Doing AI, Richard Heimann, chief AI officer at Cybraics, argues that organizations should "do AI last": instead of chasing the latest AI technology, developers should begin with the problem they want to solve and then select the most effective solution, which may or may not involve AI.

This approach is an antidote to the hype surrounding LLMs, which are often portrayed as general problem-solving tools that can be applied to a wide range of applications. In reality, many applications can be addressed with simpler solutions that are not as glamorous as large language models but are typically more resource-efficient and robust.

Combining neural networks with knowledge graphs and other forms of structured knowledge is another important direction of research. It is a departure from the current approach of tackling AI's problems by building ever-larger neural networks and training datasets. AI21 Labs' Jurassic-X, for example, is a neuro-symbolic language model that connects neural networks with structured information providers to keep its answers consistent and logical.
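
The sketch below shows the general neuro-symbolic idea in Python: route questions that need exact facts or arithmetic to deterministic modules, and fall back to the language model for everything else. The modules, the tiny knowledge base, and the routing rules are hypothetical simplifications, not how Jurassic-X is actually implemented.

```python
import re

KNOWLEDGE_BASE = {"capital of france": "Paris"}  # stand-in for a knowledge graph

def calculator(expression: str) -> str:
    """Deterministic arithmetic instead of a neural guess."""
    a, op, b = re.match(r"\s*(\d+)\s*([+\-*/])\s*(\d+)\s*", expression).groups()
    a, b = int(a), int(b)
    results = {"+": a + b, "-": a - b, "*": a * b, "/": a / b}
    return str(results[op])

def neural_model(prompt: str) -> str:
    """Placeholder for the LLM call; it handles open-ended language."""
    return f"[model-generated answer to: {prompt}]"

def answer(prompt: str) -> str:
    if re.fullmatch(r"\s*\d+\s*[+\-*/]\s*\d+\s*", prompt):
        return calculator(prompt)                        # symbolic arithmetic module
    key = prompt.lower().strip(" ?")
    if key in KNOWLEDGE_BASE:
        return KNOWLEDGE_BASE[key]                       # structured knowledge lookup
    return neural_model(prompt)                          # fall back to the network

print(answer("12 * 7"))               # 84, computed rather than guessed
print(answer("Capital of France?"))   # Paris, from the knowledge base
print(answer("Write a short poem."))  # delegated to the language model
```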

In their latest book Linguistics for the Age of AI, researchers from Rensselaer Polytechnic Institute have proposed architectures that blend neural networks and other techniques to ensure their inferences are grounded in real-world knowledge.

While scientists, researchers, and philosophers continue to debate whether AI systems should be given personhood and civil rights, we must not forget how these AI systems will affect the real people who will be using them.
