Word embedding is a machine learning technique that maps words to vectors in a vector space based on the contexts in which the words occur. These vectors can then reveal semantic relations between words and concepts.
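The distributional intuition behind this can be sketched in a few lines. What follows is a minimal illustration, not a trained embedding model: it builds raw co-occurrence count vectors (the simplest distributional representation, rather than the dense learned vectors of models like word2vec) and compares them with cosine similarity. The toy corpus and the window size are invented for the example.

```python
from collections import defaultdict
from math import sqrt

# Toy corpus (invented): "cat" and "dog" share contexts, "car" does not.
corpus = [
    "the cat chased the mouse",
    "the dog chased the mouse",
    "the cat ate food",
    "the dog ate food",
    "the car needs fuel",
    "the car needs repair",
]

def cooccurrence_vectors(sentences, window=2):
    """For each word, count the words appearing within `window` positions."""
    vectors = defaultdict(lambda: defaultdict(int))
    for sentence in sentences:
        tokens = sentence.split()
        for i, word in enumerate(tokens):
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    vectors[word][tokens[j]] += 1
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse count vectors (dicts)."""
    dot = sum(u[k] * v.get(k, 0) for k in u)
    norm_u = sqrt(sum(x * x for x in u.values()))
    norm_v = sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

vecs = cooccurrence_vectors(corpus)
print(cosine(vecs["cat"], vecs["dog"]))  # high: shared contexts
print(cosine(vecs["cat"], vecs["car"]))  # lower: different contexts
```

Words used in similar contexts end up with similar vectors, which is exactly the property that makes these representations useful for comparing semantic fields across groups of speakers.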
Since I was interested in understanding whether people suffering from mental health disorders use different semantic fields in their speech and writing compared with a healthy control group, I decided to test distributional semantics models for this purpose. These machine learning models require a massive amount of textual data as input to learn from. Social media proved helpful in this context: they have become a rich textual resource for researchers, to which natural language processing tools can be applied in order to find patterns and writing styles among users.
My motivation was also steered by a broader objective. Understanding some of the cognitive mechanisms at work in depressed and ill patients, and addressing their use of language during a recovery program, could prove effective in modifying negative thought patterns and in decentring from the self, towards an ultimate acceptance of other people's support, presence and company.