Author: Po-Yu Kao, Angela Zhang, Michael Goebel, Jefferson W. Chen, B. S. Manjunath

Year: 2019

https://link.springer.com/chapter/10.1007/978-3-030-31901-4_2

Abstract

The authors use T1-weighted magnetic resonance images together with a StackNet to predict fluid intelligence in adolescents. The pipeline comprises feature extraction, normalization, de-noising, feature selection, StackNet training, and finally fluid intelligence prediction. The extracted features are the volume distributions of different brain tissues, and the StackNet has three layers and 11 models, where each layer uses the predictions from all preceding layers together with the input layer. The approach was evaluated on the public Adolescent Brain Cognitive Development Neurocognitive Prediction Challenge 2019 benchmark, yielding a mean squared error of 82.42 under 10-fold cross-validation on the combined training and validation sets, and a mean squared error of 94.25 on the test data.


Introduction

Fluid intelligence is the capacity to reason and to solve new problems independently of previously acquired knowledge. Previous studies of fluid intelligence prediction with structural brain MRI show that brain volume is associated with quantitative reasoning and working memory. In this paper, T1-weighted magnetic resonance images are used with a StackNet for fluid intelligence prediction; the main aim is to predict pre-residualized fluid intelligence based on brain-tissue volume distributions and to demonstrate the significance of individual regional volumes to the overall prediction.

Methodology

Dataset: The dataset used in the paper is from the Adolescent Brain Cognitive Development Neurocognitive Prediction Challenge, with a training set of 3,739 subjects, a validation set of 415, and a test set of 4,402; the subjects are 9 to 10 years old. MRI was provided for every subject, but the fluid intelligence score was provided only for the training and validation sets.

StackNet Architecture: A StackNet is an analytical framework that resembles a feed-forward neural network. There are two different StackNet modes:

Each layer directly uses the predictions from the immediately preceding layer.

Each layer uses the predictions from all preceding layers together with the input layer (restacking mode).

A StackNet's performance depends on a mixture of strong and diverse single models to get the best out of this meta-modeling method. The StackNet is tuned following a few guidelines: add more models with comparable performance, include a linear model in each layer, and place the better-performing models in the higher layers.

Overview of the StackNet with three layers and 11 models: the first layer has one linear regressor and five ensemble-based regressors, the second layer comprises one linear regressor and three ensemble-based regressors, and the last layer has a single linear regressor. The 11 models are one Bayesian ridge regressor, four random forest regressors, one gradient boosting regressor, three extra-trees regressors, one kernel ridge regressor, and one ridge regressor.
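The restacking mode described above can be sketched with scikit-learn, using out-of-fold predictions so that each layer trains on predictions its models did not overfit to. This is a minimal sketch, not the authors' code: the layer composition mirrors the model counts above, but which specific models sit in which layer and all hyperparameters are assumptions.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge, Ridge
from sklearn.kernel_ridge import KernelRidge
from sklearn.ensemble import (ExtraTreesRegressor, GradientBoostingRegressor,
                              RandomForestRegressor)
from sklearn.model_selection import cross_val_predict

def fit_stacknet(X, y, layers, cv=3):
    """Restacking mode: each layer sees the raw input plus out-of-fold
    predictions from all preceding layers."""
    fitted, features = [], X
    for layer in layers:
        # out-of-fold predictions avoid leaking a model's own training fit
        oof = np.column_stack([cross_val_predict(m, features, y, cv=cv)
                               for m in layer])
        for m in layer:
            m.fit(features, y)
        fitted.append(layer)
        features = np.hstack([features, oof])   # restack onto the input
    return fitted

def predict_stacknet(fitted, X):
    features = X
    for layer in fitted:
        preds = np.column_stack([m.predict(features) for m in layer])
        features = np.hstack([features, preds])
    return preds[:, -1]   # output of the final layer's single regressor

# Layer composition mirroring the paper's counts; hyperparameters are guesses
layers = [
    [BayesianRidge(),
     *(RandomForestRegressor(n_estimators=10, random_state=i) for i in range(4)),
     GradientBoostingRegressor(random_state=0)],
    [KernelRidge(),
     *(ExtraTreesRegressor(n_estimators=10, random_state=i) for i in range(3))],
    [Ridge()],
]
```

Accumulating the prediction columns onto the feature matrix (rather than replacing it) is exactly what distinguishes the restacking mode from the simple layer-to-layer mode.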

Predicting Fluid Intelligence Scores Using Structural Magnetic Resonance Images and a StackNet: The implementation uses scikit-learn. In the training phase, features are extracted from the magnetic resonance images of the training and validation subjects; normalization and feature selection are then applied to the extracted features, and these pre-processed features are used to train the StackNet. In the test phase, features are extracted from the test magnetic resonance images, the same pre-processing parameters are applied to them, and the trained StackNet predicts fluid intelligence from the test data.

Predicting fluid intelligence scores with structural magnetic resonance images and a StackNet.

Normalization: A z-score normalization was applied to each feature dimension: fi(j) = (fi(j) − μ(j)) / σ(j), where i is the subject index, j is the feature-dimension index, fi(j) is the normalized value of feature dimension j for subject i, μ(j) is the mean of feature dimension j, and σ(j) is its standard deviation.
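The z-score formula above can be applied over a whole feature matrix in a few lines; this sketch uses a tiny made-up matrix rather than real volume features.

```python
import numpy as np

def z_score(F):
    """Z-score each feature dimension: f_i(j) <- (f_i(j) - mu(j)) / sigma(j)."""
    mu = F.mean(axis=0)       # mean of each feature dimension j
    sigma = F.std(axis=0)     # standard deviation of each feature dimension j
    return (F - mu) / sigma

F = np.array([[1.0, 10.0],
              [3.0, 30.0],
              [5.0, 50.0]])
Fn = z_score(F)   # each column now has mean 0 and standard deviation 1
```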

Selecting features: Feature selection has three stages: (i) reducing the noise of the dataset and creating a compact representation of the data via principal component analysis, (ii) eliminating feature dimensions with little variance across subjects, and (iii) picking the twenty feature dimensions with the highest correlations to the ground-truth fluid intelligence scores.
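The three stages map naturally onto scikit-learn transformers. This is an illustrative sketch with synthetic data; the number of principal components, the variance threshold, and the feature matrix are all assumptions, while the final k=20 matches the text.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, VarianceThreshold, f_regression

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 120))                   # hypothetical volume features
y = X[:, :5].sum(axis=1) + rng.normal(size=200)   # stand-in intelligence scores

# Stage 1: de-noise / compress via principal component analysis
X_pca = PCA(n_components=50).fit_transform(X)

# Stage 2: drop dimensions with little variance across subjects
X_var = VarianceThreshold(threshold=1e-3).fit_transform(X_pca)

# Stage 3: keep the 20 dimensions most correlated with the scores
X_sel = SelectKBest(f_regression, k=20).fit_transform(X_var, y)
```

In a real pipeline the three fitted transformers would be kept so the identical transforms can be replayed on the test features.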

Training a StackNet: The training and validation sets are combined for hyperparameter optimization, since the means of the pre-residualized fluid intelligence scores for the training and validation data are quite different.
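Hyperparameter optimization over the combined data with 10-fold cross-validation might look like the following sketch; the model, the parameter grid, and the synthetic data are illustrative assumptions, not the authors' actual search space.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, KFold

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 20))   # stand-in for the combined train+validation features
y = rng.normal(size=120)         # stand-in pre-residualized scores

# 10-fold CV over the combined data, scored with negative MSE
search = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid={"n_estimators": [10, 25], "max_depth": [3, None]},
    scoring="neg_mean_squared_error",
    cv=KFold(n_splits=10, shuffle=True, random_state=0),
).fit(X, y)
```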

Predicting Fluid Intelligence: The same pre-processing parameters used in the training phase are applied in the testing phase.

Evaluation Metric: The mean squared error between the predicted fluid intelligence scores and the matching ground-truth fluid intelligence scores, MSE = (1/N) Σi (ŷi − yi)², is used to evaluate the predictions.
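With scikit-learn the metric is one call; the scores below are made-up numbers purely to show the computation.

```python
from sklearn.metrics import mean_squared_error

y_true = [91.0, 100.0, 85.0]   # hypothetical ground-truth scores
y_pred = [88.0, 102.0, 90.0]   # hypothetical predictions

# mean of the squared residuals: ((-3)^2 + 2^2 + 5^2) / 3
mse = mean_squared_error(y_true, y_pred)
```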

Computing feature importance: To determine the connection between fluid intelligence scores and the brain-tissue volume of each region, the significance of each feature dimension is computed; a high importance indicates a strong correlation. The importance of each reduced feature dimension is computed and then back-propagated through the dimensionality reduction to its original dimensions, yielding the individual correlation between each original feature dimension and the ground truth.
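One plausible reading of "back-propagating" an importance through a PCA-style reduction is to project the per-component correlations back through the PCA loadings. This sketch, on synthetic data where the score is driven by feature 0, is an assumption about the mechanism, not the authors' exact procedure.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))             # hypothetical volume features
y = 2.0 * X[:, 0] + rng.normal(size=200)   # scores driven by feature 0

pca = PCA(n_components=10).fit(X)
Z = pca.transform(X)

# correlation of each reduced dimension with the ground-truth scores
corr = np.array([np.corrcoef(Z[:, j], y)[0, 1] for j in range(Z.shape[1])])

# back-project through the PCA loadings to the original feature dimensions
importance = np.abs(pca.components_.T @ corr)
```

Each entry of `importance` then attaches to one original feature dimension, i.e. to one brain-tissue volume.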

Result: Comparing the eleven individual models against the StackNet with 10-fold cross-validation on the combined data, the StackNet shows the best performance.

Related work

María Roca, Alice Parr, Russell Thompson, Alexandra Woolgar, Teresa Torralva, Nagui Antoun, Facundo Manes, John Duncan

“Executive function and fluid intelligence after frontal lobe lesions”

https://academic.oup.com/brain/article/133/1/234/311206?login=true

This paper discusses deficits after frontal lobe lesions, which appear against a background of reduced fluid intelligence, best measured with tests of novel problem solving.

Conclusion

The fluid intelligence score prediction performance of the individual models and the StackNet was examined with 10-fold cross-validation. The baseline is calculated by assigning the mean pre-residualized fluid intelligence score (μ = 0) to every subject in the combined data. The StackNet used to report the mean squared error has eight models in two layers, achieving 84.04 on the training set and 70.56 on the validation set. The importance of each dimension of the extracted features is then computed, where each feature dimension corresponds to the volume of a particular brain tissue.

Future work

Future work could attempt the inverse task: generating MR images from a predicted fluid intelligence score.