PolyU Research Finds Human-like Training Helps AI Models Better Align with Human Brain Activity

HONG KONG: With generative artificial intelligence (GenAI) transforming the landscape of social interaction in recent years, large language models (LLMs), the deep-learning models that give GenAI platforms their ability to process language, have been put in the spotlight.

A recent study by The Hong Kong Polytechnic University (PolyU) found that LLMs perform more like the human brain when they are trained in ways that more closely resemble how humans process language, offering important insights for brain studies and for the development of AI models.

Current LLMs mostly rely on a single type of pretraining, namely contextual word prediction. This simple learning strategy has achieved surprising success when combined with massive training data and model parameters, as shown by popular LLMs such as ChatGPT.
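To illustrate what contextual word prediction looks like in practice, here is a minimal sketch in Python using the open-source Hugging Face transformers library and the publicly available GPT-2 model. The example and prompt are ours for illustration, and the model choice is an assumption; none of it comes from the PolyU study itself.

    # Minimal sketch of contextual word prediction (illustration only; GPT-2 is
    # an assumed stand-in, not the model used in the PolyU study).
    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # The model assigns a probability to every possible next token in context.
    inputs = tokenizer("The cat sat on the", return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits        # shape: [1, seq_len, vocab_size]
    next_token_probs = logits[0, -1].softmax(dim=-1)

    # Print the five most likely continuations and their probabilities.
    top = torch.topk(next_token_probs, k=5)
    for prob, token_id in zip(top.values, top.indices):
        print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.3f}")

During pretraining, the model is rewarded for assigning high probability to the word that actually comes next; that single objective, repeated over massive text corpora, is the "simple learning strategy" described above.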

Recent studies also suggest that word prediction in LLMs can serve as a plausible model for how humans process language. However, humans do not simply predict the next word but also integrate high-level information in natural language comprehension.

A research team led by Professor Li Ping, Dean of the Faculty of Humanities and Sin Wai Kin Foundation Professor in Humanities and Technology at PolyU, incorporated the next sentence prediction (NSP) task, which simulates a central process of discourse-level comprehension in the human brain by evaluating whether a pair of sentences is coherent, into model pretraining, and examined the correlation between the model's data and brain activation.
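The NSP objective itself is straightforward to demonstrate. Below is a minimal Python sketch using BERT, a well-known publicly available model pretrained with NSP, via the Hugging Face transformers library; it illustrates the general task only, and is not the specific model, data, or analysis used in the PolyU study.

    # Minimal sketch of next sentence prediction (illustration only; BERT is an
    # assumed example of an NSP-pretrained model, not the study's own model).
    import torch
    from transformers import BertTokenizer, BertForNextSentencePrediction

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = BertForNextSentencePrediction.from_pretrained("bert-base-uncased")

    def coherence_score(sentence_a: str, sentence_b: str) -> float:
        """Probability that sentence_b is a coherent continuation of sentence_a."""
        encoding = tokenizer(sentence_a, sentence_b, return_tensors="pt")
        with torch.no_grad():
            logits = model(**encoding).logits  # shape [1, 2]; index 0 = "is next"
        return logits.softmax(dim=-1)[0, 0].item()

    print(coherence_score("The sky darkened quickly.", "Soon it began to rain."))
    print(coherence_score("The sky darkened quickly.", "Bananas are rich in potassium."))

A coherent pair should score close to 1 and an unrelated pair close to 0, mirroring the sentence-pair coherence judgment that the researchers built into model pretraining.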

The study was recently published in the academic journal Science Advances.

Recent LLMs, including ChatGPT, have relied on vastly increasing the training data and model size to achieve better performance.

“There are limitations in just relying on such scaling. Advances should also be aimed at making the models more efficient, relying on less rather than more data. Our findings suggest that diverse learning tasks such as NSP can improve LLMs to be more human-like and potentially closer to human intelligence,” Li Ping said.

“More importantly, the findings show how neurocognitive researchers can leverage LLMs to study higher-level language mechanisms of our brain.

“They also promote interaction and collaboration between researchers in the fields of AI and neurocognition, which will lead to future studies on AI-informed brain studies as well as brain-inspired AI,” he added.
