My beginnings in machine learning


Below is an essay I wrote for a university requirement. I admit it's pretty rough: I attempted to condense two essays into the word count of one, and I don't go into as much depth as I would have liked.

AI’s application in mental health


One can appreciate artificial intelligence's (AI's) application in mental health through the context of two diseases: depression and schizophrenia. These two conditions are unrelated and present two different problems. Depression is one of the most prevalent diseases worldwide[1]. Schizophrenia, on the other hand, is one of the most disabling mental health conditions[2].


Diagnosing depression has been a major challenge in the field of psychiatry. One significant issue is that, despite being prevalent, it remains misdiagnosed due to poor sensitivity and specificity[1]. This may be because current methods of diagnosis are inadequate: diagnosis is not based on an objective clinical measurement, as highlighted in discussions of the disease's heterogeneity[2].


We don’t have a neuropathological model of depression. It has long been suspected that there is a neuroanatomical basis for the disease, but the results have not been consistent. One explanation for this inconsistency is that depression is not a uniform disease.[3] Depression is heterogeneous, and can vary in its presentation. For instance, two patients can be diagnosed with the same label of MDD despite having different symptom presentations, as a diagnosis can be reached through different combinations of up to nine symptoms, according to NICE.[13]
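To get a feel for how much heterogeneity a symptom-count criterion allows, a quick back-of-the-envelope calculation helps. The threshold below (at least five of the nine candidate symptoms, in the style of DSM criteria) is an illustrative assumption, not the exact NICE rule:

```python
# Rough illustration of diagnostic heterogeneity in MDD.
# Assumption (not from the essay): a DSM-style criterion of at least
# five of nine candidate symptoms; the actual NICE/DSM rules add
# further constraints on which symptoms must be present.
from math import comb

# Number of distinct symptom profiles meeting a "5 or more of 9" threshold.
profiles = sum(comb(9, k) for k in range(5, 10))
print(profiles)  # 256
```

So even under this simplified criterion, hundreds of distinct symptom profiles can all carry the same diagnostic label.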


Around 800,000 people die by suicide worldwide every year. According to the Office for National Statistics, there were 6,507 suicides in the UK in 2018, the equivalent of 11.2 deaths per 100,000 people.[15]
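As a quick sanity check, the two UK figures are consistent with each other: a rate per 100,000 is just deaths divided by the population base, so we can recover the population those numbers imply (the ONS denominator covers persons of eligible age, not every UK resident):

```python
deaths = 6507          # UK suicides registered in 2018 (ONS)
rate_per_100k = 11.2   # deaths per 100,000 people

# rate = deaths / population * 100,000, so the implied population base is:
population = deaths / rate_per_100k * 100_000
print(f"{population / 1e6:.1f} million")  # ~58.1 million
```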

Wider discussion

It’s clear that the type of data used to train these algorithms affects their accuracy, their generalisability and their stability. Comparing the papers by Schyner DM et al and Rozycki et al, we saw that an algorithm’s accuracy increases when it is trained on data from multiple sites. If, as in the Schyner DM et al paper, you use a data set with a small sample size, then train and test the algorithm on that same data set, you increase the risk of “overfitting”. Overfitting is the phenomenon whereby the algorithm classifies data accurately based not on true differences, but on differences that exist only within that particular data set. You can spot it by testing the algorithm on data from a separate set. If we want to use algorithms in clinical practice, those algorithms will have to be trained on data from the patient population.
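A small sketch makes the overfitting point concrete. Here a flexible classifier is trained on a small, purely random data set (there is genuinely nothing to learn), yet it scores perfectly on its own training data while performing around chance on held-out data. This is a generic illustration, not a reconstruction of any paper discussed above:

```python
# Overfitting illustration: a decision tree memorises set-specific
# quirks of a small, noisy data set rather than true differences.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# 60 "patients", 20 features, labels assigned at random:
# there is no real signal for the model to find.
X = rng.normal(size=(60, 20))
y = rng.integers(0, 2, size=60)

# Train on the first 40 samples, hold out the last 20.
X_train, X_test = X[:40], X[40:]
y_train, y_test = y[:40], y[40:]

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

print(f"training accuracy: {model.score(X_train, y_train):.2f}")  # 1.00
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")    # near chance
```

The gap between the two scores is exactly what testing on a separate data set reveals, and why single-site, small-sample results tend to look better than they really are.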


The next focus in AI-assisted psychiatry will be assessing the performance of these algorithms against clinicians in real time. What is already clear, based on the effectiveness of the algorithms discussed, is that AI holds promise in the diagnosis, research and prediction of mental health conditions.


1. World Health Organisation. Suicide Prevention. Available at: Accessed March 8th, 2020.

Medical Student at the University of Birmingham
