Researchers at Central South University, Changsha, proposed HyperAttentionDTI, an end-to-end bio-inspired model based on convolutional neural networks (CNNs) and an attention mechanism for predicting drug–target interactions.

In modern biomedicine, drug discovery and repurposing are highly valued, and drug–target interactions (DTIs) are an essential part of both. However, identifying DTIs in the wet lab is exceedingly costly and time-consuming because of the enormous chemical space. Virtual screening (VS) was developed to support experimental drug discovery in silico, reducing both time and cost. Predicting DTIs with deep learning has become a research focus, driven by the vast amount of biological activity data produced in recent years.

Only certain regions of a protein and a subset of a drug's atoms take part in intermolecular interactions, rather than the whole structure. To model these interactions between amino acids and atoms, attention mechanisms have been introduced into DTI and drug–target affinity (DTA) prediction.

Earlier attention-based DTI models encode a drug into a fixed-length vector and use a one-sided attention mechanism to determine which protein subsequences matter most for the molecule.

To predict DTIs, the researchers propose HyperAttentionDTI, a bio-inspired end-to-end approach. The model takes the Simplified Molecular Input Line Entry System (SMILES) strings of drugs and the amino acid sequences of proteins as inputs. Stacked one-dimensional convolutional neural network (1D-CNN) layers learn feature matrices from these inputs. For each amino acid–atom pair, the model infers an attention vector; these attention vectors modulate the feature representations along the channel dimension and capture interactions between amino acids and atoms.

They compare the model with state-of-the-art deep learning baselines on three widely used datasets, DrugBank, Davis, and KIBA, under four different drug discovery settings. For the DrugBank dataset, negative samples were generated by sampling from unlabeled drug–protein pairs, yielding a balanced dataset with equal numbers of positive and negative samples. The Davis and KIBA datasets, by contrast, are unbalanced benchmark datasets.
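The balancing step for DrugBank can be sketched as follows. This is an illustrative sketch, not the paper's exact procedure: the pair-sampling details (seeding, deduplication against the test set) are assumptions here.

```python
import random

def sample_negatives(positive_pairs, drugs, proteins, seed=0):
    """Draw unlabeled drug-protein pairs as negatives, one per positive.

    Assumes there are enough unlabeled pairs to sample from; the
    paper's exact sampling procedure may differ from this sketch.
    """
    rng = random.Random(seed)
    positives = set(positive_pairs)
    negatives = set()
    while len(negatives) < len(positives):
        pair = (rng.choice(drugs), rng.choice(proteins))
        if pair not in positives:          # never relabel a known interaction
            negatives.add(pair)
    return sorted(negatives)
```

Sampling only from pairs absent in the positive set treats unlabeled pairs as presumed non-interacting, which is the standard (if imperfect) assumption when no experimentally verified negatives exist.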

HyperAttentionDTI consists of three parts: a CNN block, an attention block, and an output block. Given a drug's SMILES string and a protein's amino acid sequence, the CNN block extracts feature matrices from both sequences. These feature matrices are fed into the attention block to produce a decision vector, and the output block makes the prediction from that vector.

HyperAttentionDTI takes protein amino acid sequences and drug SMILES strings as input. It begins with two embedding layers that map each amino acid and each SMILES character to a corresponding embedding vector, yielding embedding matrices for the protein and the drug.
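A character-level embedding of this kind can be sketched in a few lines. The vocabulary, the padding convention, and the 8-dimensional embedding size below are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

# Hypothetical vocabulary of the 20 standard amino acids; the paper's
# exact alphabet (and its SMILES counterpart) may differ.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def encode(sequence, vocab, max_len):
    """Map each character to an integer index (0 is reserved for padding)."""
    index = {ch: i + 1 for i, ch in enumerate(vocab)}
    codes = [index.get(ch, 0) for ch in sequence[:max_len]]
    return codes + [0] * (max_len - len(codes))

def embed(codes, table):
    """Row lookup: one embedding vector per position -> (max_len, dim)."""
    return table[np.asarray(codes)]

# Example: embed a short protein fragment with an 8-dimensional embedding.
rng = np.random.default_rng(0)
table = rng.standard_normal((len(AMINO_ACIDS) + 1, 8))
P_emb = embed(encode("MKTAYIAK", AMINO_ACIDS, max_len=12), table)
```

In the real model the embedding table is a learned layer rather than random, and an analogous table handles the SMILES character vocabulary.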

The model contains two separate CNN blocks, one for drugs and one for proteins, each with three consecutive 1D-CNN layers that efficiently extract sequence semantics. From the embedding matrices, the CNN blocks generate latent feature matrices for proteins (Pcnn) and drugs (Dcnn).
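The stacked 1D-CNN block can be sketched as below. The kernel size, channel widths, and ReLU activation are assumptions for illustration; only the overall shape (three stacked 1D convolutions mapping an embedding matrix to a latent feature matrix) follows the paper.

```python
import numpy as np

def conv1d(x, weight, bias):
    """Naive 1D convolution with 'same' zero padding, followed by ReLU.

    x: (length, in_channels); weight: (kernel, in_channels, out_channels).
    Assumes an odd kernel size so that output length equals input length.
    """
    k = weight.shape[0]
    xp = np.pad(x, ((k // 2, k // 2), (0, 0)))
    out = np.empty((x.shape[0], weight.shape[2]))
    for t in range(x.shape[0]):
        window = xp[t:t + k]                       # (kernel, in_channels)
        out[t] = np.tensordot(window, weight, axes=([0, 1], [0, 1])) + bias
    return np.maximum(out, 0.0)                    # ReLU

def cnn_block(x, layers):
    """Three stacked 1D-CNN layers, as in the drug/protein CNN blocks."""
    for weight, bias in layers:
        x = conv1d(x, weight, bias)
    return x
```

A framework implementation would use an optimized convolution (e.g. a library Conv1d layer) instead of the explicit loop, but the data flow is the same: each layer widens the channel dimension while preserving sequence length.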

The researchers created a new attention module called HyperAttention. It models the semantic interdependencies between drug and protein subsequences in spatial and channel dimensions.

An attention matrix for the drug–protein pair is constructed from Dcnn and Pcnn; it describes the spatial and channel interactions between the drug and the protein. The latent feature matrices are then updated, and feature vectors for the drug and protein are obtained by global max pooling. These two feature vectors are concatenated and fed into the output block.
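The flow of the attention block can be sketched as follows. The parameterisation here (a single linear map per side, a sigmoid gate, and the 0.5 offset before gating) is a simplified assumption; the paper's HyperAttention module may use additional non-linearities and learned mixing weights.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hyper_attention(D, P, Wd, Wp, b):
    """Channel-wise attention over every atom/amino-acid pair (a sketch).

    D: (n_atoms, C) drug features from the CNN block; P: (n_res, C)
    protein features. A[i, j] is an attention *vector* over the C
    channels for pair (i, j), so attention acts in both the spatial
    and the channel dimension.
    """
    # Pairwise attention tensor of shape (n_atoms, n_res, C).
    A = sigmoid((D @ Wd)[:, None, :] + (P @ Wp)[None, :, :] + b)
    # Aggregate the attention each atom / residue receives, then gate
    # the latent features channel-wise (0.5 offset keeps gates non-zero).
    D_new = D * (0.5 + A.mean(axis=1))
    P_new = P * (0.5 + A.mean(axis=0))
    # Global max pooling gives one feature vector per molecule; their
    # concatenation is the decision vector passed to the output block.
    return np.concatenate([D_new.max(axis=0), P_new.max(axis=0)])
```

The key point the sketch preserves is that every atom–residue pair gets its own C-dimensional attention vector, rather than a single scalar weight.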

The output block is a multilayer fully connected neural network (FCNN). To prevent overfitting, each fully connected layer is followed by a dropout layer. The last layer outputs the probability that the drug and protein interact.
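Such an output block can be sketched as below; the layer sizes, ReLU activation, and dropout rate are illustrative assumptions, not the paper's exact hyperparameters.

```python
import numpy as np

def output_block(v, hidden_layers, w_out, b_out, drop_rate=0.1, rng=None):
    """Multilayer FCNN output block (a sketch).

    Pass `rng` during training to apply inverted dropout after each
    hidden layer; at inference (rng=None) dropout is skipped, as usual.
    """
    for W, b in hidden_layers:
        v = np.maximum(W @ v + b, 0.0)               # Linear + ReLU
        if rng is not None:                          # dropout (training only)
            mask = rng.random(v.shape) >= drop_rate
            v = v * mask / (1.0 - drop_rate)
    logit = w_out @ v + b_out
    return 1.0 / (1.0 + np.exp(-logit))              # interaction probability
```

The final sigmoid squashes the logit into (0, 1), so the output can be read directly as the predicted probability of interaction.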

Image Description: The network architecture of HyperAttentionDTI
Image Source: HyperAttentionDTI: improving drug–protein interaction prediction by sequence-based deep learning with attention mechanism

Because DTI prediction is a classification task, the researchers use five metrics to measure model performance: accuracy, precision, recall, AUC (area under the receiver operating characteristic curve), and AUPR (area under the precision–recall curve).
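Four of these five metrics can be computed from scratch as a sanity check (AUPR is omitted here for brevity; in practice a library such as scikit-learn would compute all five).

```python
def auc_score(y_true, scores):
    """AUC via the Mann-Whitney statistic: the probability that a random
    positive outranks a random negative; ties count half."""
    pos = [s for s, y in zip(scores, y_true) if y == 1]
    neg = [s for s, y in zip(scores, y_true) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def classification_metrics(y_true, y_prob, threshold=0.5):
    """Accuracy, precision, recall, and AUC for binary DTI predictions."""
    y_pred = [int(p >= threshold) for p in y_prob]
    tp = sum(t == 1 and q == 1 for t, q in zip(y_true, y_pred))
    fp = sum(t == 0 and q == 1 for t, q in zip(y_true, y_pred))
    fn = sum(t == 1 and q == 0 for t, q in zip(y_true, y_pred))
    tn = sum(t == 0 and q == 0 for t, q in zip(y_true, y_pred))
    return {
        "accuracy": (tp + tn) / len(y_true),
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
        "auc": auc_score(y_true, y_prob),
    }
```

On unbalanced datasets like Davis and KIBA, AUPR is the more informative of the two curve-based metrics, since AUC can look deceptively high when negatives dominate.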

When predicting the interaction between a drug and a protein in the testing set, four experimental settings were used for a comprehensive comparison: (E1) both the drug and the protein appear in the training set; (E2) the drug has no interactions in the training set, while the protein does; (E3) the protein has no interactions in the training set, while the drug does; (E4) neither the drug nor the protein appears in the training set.
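Classifying a test pair into these four settings reduces to two membership checks against the training set:

```python
def setting_of(pair, train_pairs):
    """Assign a test (drug, protein) pair to one of the settings E1-E4."""
    train_drugs = {d for d, _ in train_pairs}
    train_prots = {p for _, p in train_pairs}
    drug, prot = pair
    seen_drug, seen_prot = drug in train_drugs, prot in train_prots
    if seen_drug and seen_prot:
        return "E1"   # both seen during training
    if not seen_drug and seen_prot:
        return "E2"   # novel drug, known protein
    if seen_drug and not seen_prot:
        return "E3"   # known drug, novel protein
    return "E4"       # both novel (hardest, de novo setting)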

Compared with the other state-of-the-art deep learning baselines on the DrugBank dataset under setting E1, this approach outperforms all of them on all five metrics.

On Davis and KIBA under setting E1, MolTrans and GNN-CPI (two of the baseline models) achieve the highest precision, respectively, but this model far outperforms the baselines on the other four metrics.

To assess the model's robustness, the researchers evaluate it and the baselines in the de novo settings (E2, E3, and E4) on the DrugBank dataset. Settings E2 and E3 are also closer to real drug-development conditions than E1. HyperAttentionDTI achieved higher performance than the baselines, demonstrating its capacity to handle realistic drug discovery scenarios.

All models showed a significant performance drop in the most difficult setting, E4, but this model still outperformed the others. The model and the baselines were also compared under these de novo settings on the Davis and KIBA datasets, with similar results.

When models with and without attention mechanisms were compared, using an attention mechanism clearly improved performance, showing that associations between drug and protein features must be modeled during DTI inference. Furthermore, HyperAttentionDTI achieved the best results, indicating that the proposed attention mechanism is better suited to a CNN-based model than traditional attention mechanisms.

To reach more advanced performance in the future, the researchers suggest exploiting the graph structure of drugs, which is more natural than the SMILES string, and designing molecular graph neural networks. Furthermore, because the model cannot pinpoint binding sites precisely, binding site labels could be incorporated via multi-task learning to push HyperAttentionDTI's limits further.

Story Source: Zhao, Q., Zhao, H., Zheng, K., & Wang, J. (2022). HyperAttentionDTI: improving drug–protein interaction prediction by sequence-based deep learning with attention mechanism. Bioinformatics, 38(3), 655–662.
Data and Code Availability: https://doi.org/10.1093/bioinformatics/btab715
