

Minecraft could be the key to creating adaptable AI

Researchers have a new way to assess an AI model’s intelligence: drop it into a game of Minecraft, with no information about its surroundings, and see how well it plays

By Matthew Sparkes

9 February 2024

Minecraft is a game for humans, but it could help AI too

Minecraft is not only the best-selling video game in history, it could also be key to creating adaptable artificial intelligence models that can pick up a variety of tasks the way humans do.

Steven James at the University of the Witwatersrand in South Africa and his colleagues developed a benchmark test within Minecraft to measure the general intelligence of AI models. MinePlanner assesses an AI’s ability to ignore unimportant details while solving a complex problem with multiple steps.

Lots of AI training “cheats” by giving a model all the data it needs to learn how to do a job and nothing extraneous, says James. That is a fruitful approach if you want to create software to accomplish a specific task – such as predicting the weather or folding proteins – but not if you are attempting to create artificial general intelligence, or AGI.

James says that future AI models will need to tackle messy problems, and he hopes that MinePlanner will guide that research. An AI working to solve a problem in the game will see the landscape, extraneous objects and other details that aren’t necessarily needed and must be ignored. It will have to survey its surroundings and work out for itself what is and is not relevant.

MinePlanner consists of 15 construction problems, each with an easy, medium and hard setting, for a total of 45 tasks. To complete each task, the AI may need to take intermediate steps – building a set of stairs in order to place blocks at a certain height, for instance. That demands that the AI zoom out from the immediate problem and plan ahead to achieve its overall goal.


In experiments with the state-of-the-art planning AI models ENHSP and Fast Downward – open-source programs designed to handle sequential operations in pursuit of an overall goal – neither model was able to complete any of the hard problems. Fast Downward solved only one of the medium problems and five of the easy ones, while ENHSP did better, completing all but one of the easy problems and all but two of the medium problems.

“We can’t require a human designer to come in and tell the AI exactly what it should and shouldn’t care about for each and every task it might have to solve,” says James. “That’s the problem we’re trying to address.”

