Posts

Showing posts with the label Vectorization

Getting Started with GloVe

What is GloVe?

GloVe stands for Global Vectors for Word Representation. It is an unsupervised learning algorithm developed at Stanford that generates word embeddings by aggregating a global word-word co-occurrence matrix from a corpus. The resulting embeddings show interesting linear substructures of words in vector space, for example analogy relationships such as king - man + woman ≈ queen. These results are pretty impressive, and this type of representation can be very useful in many machine learning algorithms. To read more about word vectorization, you can read my other article. In this blog post, we will learn how to use the GloVe implementation in Python. So, let's get started.

Let's Create the Embeddings

Installing glove-python

A GloVe implementation in Python is available in the library glove-python.

pip install glove_python

Text Preprocessing

In this step, we pre-process the text, for example by removing stop words and lemmatizing the words. You can perform different steps based on your requirements.
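As a rough illustration of the two steps above, here is a minimal sketch that uses NLTK for stop-word removal and lemmatization together with the glove_python package installed above; the toy corpus, window size, embedding dimension, and other hyperparameters are placeholder choices, not values from this post.

```python
# Minimal sketch: pre-process a toy corpus with NLTK, then train GloVe
# embeddings with glove_python. Corpus and hyperparameters are illustrative.
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from glove import Corpus, Glove

nltk.download("stopwords")
nltk.download("wordnet")

raw_docs = [
    "The quick brown fox jumps over the lazy dog",
    "Word embeddings capture the meaning of words",
]

stop_words = set(stopwords.words("english"))
lemmatizer = WordNetLemmatizer()

# Lowercase, tokenize on whitespace, drop stop words, and lemmatize.
tokenized = [
    [lemmatizer.lemmatize(w) for w in doc.lower().split() if w not in stop_words]
    for doc in raw_docs
]

# Build the word-word co-occurrence matrix from the tokenized corpus.
corpus = Corpus()
corpus.fit(tokenized, window=5)

# Fit the GloVe model on the co-occurrence matrix and attach the vocabulary.
glove = Glove(no_components=50, learning_rate=0.05)
glove.fit(corpus.matrix, epochs=20, no_threads=2, verbose=True)
glove.add_dictionary(corpus.dictionary)

# Inspect nearest neighbours of a word in the learned embedding space.
print(glove.most_similar("word", number=3))
```

With a real corpus you would typically use many more documents, a larger window, and 100 to 300 embedding dimensions; the structure of the code stays the same.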

Word Vectorization

Introduction

Machine Learning has become the hottest topic in the data industry, with increasing demand for professionals who can work in this domain. There is a large amount of textual data on the internet and in giant servers around the world. Just for some facts:

1,209,600 new data-producing social media users each day.
656 million tweets per day!
More than 4 million hours of content uploaded to YouTube every day, with users watching 5.97 billion hours of YouTube videos each day.
67,305,600 Instagram posts uploaded each day.
There are over 2 billion monthly active Facebook users, compared to 1.44 billion at the start of 2015 and 1.65 billion at the start of 2016.
Facebook has 1.32 billion daily active users on average as of June 2017.
4.3 billion Facebook messages posted daily!
5.75 billion Facebook likes every day.
22 billion texts sent every day.
5.2 billion daily Google searches in 2017.

Need for Vectorization

The amount of textual data is massive, and the problem with textual data…
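To make the idea of vectorization concrete, here is a minimal sketch of turning raw sentences into numeric vectors using a simple bag-of-words model with scikit-learn's CountVectorizer; the sample sentences are placeholders, and this is only one of several vectorization techniques.

```python
# Minimal sketch: represent text as numeric vectors with a bag-of-words
# model (scikit-learn's CountVectorizer). Sample sentences are placeholders.
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "machine learning loves textual data",
    "textual data needs vectorization",
]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)  # sparse document-term count matrix

# get_feature_names_out requires scikit-learn >= 1.0.
print(vectorizer.get_feature_names_out())  # vocabulary learned from the corpus
print(X.toarray())                         # each row is one document's count vector
```

Each document becomes a row of word counts over the shared vocabulary, which is the basic numeric form that downstream machine learning algorithms can consume.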