Scraping Reddit API using only JavaScript

In the previous article, I discussed how to plot live data from Twitter using HighCharts. In this article, we will see how Reddit can be scraped using only JavaScript, without any authorization mechanism. It is simple and easy, and since we only use the front end to fetch the data, we don't need a local server either.
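
The trick this relies on is that Reddit serves a public JSON version of most pages when you append ".json" to the URL, so no OAuth token is needed for read-only access. Here is a minimal sketch of that endpoint (the article itself does this from the browser in JavaScript; Python is used here only to keep all the sketches on this page in one language):

```python
import requests

# Appending ".json" to a Reddit URL returns the page as JSON, no auth needed.
url = "https://www.reddit.com/r/programming/top.json?limit=5"
resp = requests.get(url, headers={"User-Agent": "demo-script/0.1"})
resp.raise_for_status()

for post in resp.json()["data"]["children"]:
    print(post["data"]["score"], post["data"]["title"])
```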

Scraping and Visualizing Twitter Streaming API using PHP and JavaScript

Web scraping is one of the most important ways to extract data from the web: an HTML web page can be scraped, and similarly, a JSON or XML web service response can also be scraped. So far a lot has been written on how to scrape the Twitter API, but none of it shows how to visualize the live streaming API. This article will show how to use the JavaScript library HighCharts to plot a live graph.
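
On the data side of that pipeline, a streaming API delivers newline-delimited JSON over a long-lived HTTP connection. A rough sketch of consuming such a stream (the endpoint URL and field names below are placeholders, not the real Twitter API; the article itself does this part in PHP and the plotting in HighCharts):

```python
import json
import requests

STREAM_URL = "https://example.com/stream"  # hypothetical endpoint, for illustration

# stream=True keeps the connection open; each non-empty line is one JSON event.
with requests.get(STREAM_URL, stream=True) as resp:
    for line in resp.iter_lines():
        if not line:          # skip keep-alive newlines
            continue
        tweet = json.loads(line)
        print(tweet.get("text", ""))
```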


Predicting the Prices of Melbourne Houses using XGBoost from Caret in R

In the previous article we used XGBoost for a classification problem on the Titanic dataset; in this article we will do regression on the Melbourne Housing dataset using Extreme Gradient Boosting from the caret package available in R. We will predict the prices of houses and try to get the best accuracy while avoiding overfitting issues.
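
The article works with XGBoost through caret in R; as a sketch of the same idea, here is a gradient-boosted regressor with the Python xgboost package on synthetic data (the features and coefficients below are made up, not the Melbourne columns):

```python
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))    # stand-ins for features like rooms, distance
y = X @ np.array([3.0, -1.5, 2.0, 0.5]) + rng.normal(scale=0.3, size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Shallow trees and a moderate learning rate are the usual first defence
# against the overfitting the article worries about.
model = xgb.XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.1)
model.fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))
```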

Machine Learning on Titanic Dataset using R

After performing some data analysis on Football News Guru data using R, I came back to one of the most fundamental datasets, extracted from the Titanic disaster; you can find it on Kaggle under the name “Titanic: Machine Learning from Disaster”. The purpose of this article is to extend the Data Science Dojo tutorial to increase the accuracy of the model.
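
As a rough sketch of the kind of model these Titanic tutorials build (the article works in R; scikit-learn is used here to keep one language across the sketches), using the column names from the Kaggle training file:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("train.csv")  # the Kaggle Titanic training file, downloaded locally

# Two of the usual preprocessing steps: encode sex, fill missing ages.
df["Sex"] = (df["Sex"] == "female").astype(int)
df["Age"] = df["Age"].fillna(df["Age"].median())

X = df[["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare"]]
y = df["Survived"]

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```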


Sequence to Sequence model for NLP

The Sequence to Sequence model is one of the most effective models for Natural Language Processing tasks such as translation. There are very few quality papers available on this topic, and today I will summarize a very effective one titled “Sequence to Sequence Learning with Neural Networks”, written by the Google research team.
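
The architecture the paper describes is an encoder LSTM that compresses the source sentence into a fixed-size state, and a decoder LSTM that generates the target sentence from that state. A bare-bones skeleton of that wiring, in modern Keras style with arbitrary placeholder sizes and no training loop:

```python
import tensorflow as tf
from tensorflow.keras import layers

vocab, dim = 5000, 128  # placeholder sizes

# Encoder: read the source sequence and keep only its final LSTM state.
enc_in = layers.Input(shape=(None,), dtype="int32")
enc_emb = layers.Embedding(vocab, dim)(enc_in)
_, state_h, state_c = layers.LSTM(dim, return_state=True)(enc_emb)

# Decoder: generate the target sequence conditioned on that state.
dec_in = layers.Input(shape=(None,), dtype="int32")
dec_emb = layers.Embedding(vocab, dim)(dec_in)
dec_seq = layers.LSTM(dim, return_sequences=True)(
    dec_emb, initial_state=[state_h, state_c])
logits = layers.Dense(vocab)(dec_seq)

model = tf.keras.Model([enc_in, dec_in], logits)
model.summary()
```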

Increasing Performance using Multi-Layer LSTM

In this tutorial, we will introduce the multi-layer LSTM to increase the performance of the model. A multi-layer LSTM can also be thought of as multiple stacked LSTM units. The perplexity decreases slightly compared to the single LSTM unit explained in the previous article.
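
Stacking is the whole idea: each LSTM layer feeds its full output sequence to the next one. A minimal sketch in current Keras style (the original post used the TensorFlow API of its day; sizes here are placeholders):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(50, 32)),            # (time steps, features)
    tf.keras.layers.LSTM(64, return_sequences=True),  # layer 1 passes the sequence on
    tf.keras.layers.LSTM(64),                         # layer 2 keeps only the final state
    tf.keras.layers.Dense(10),
])
model.summary()
```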

Bigram Based LSTM with Regularization

In the previous LSTM tutorial, we used a single character at a time; the bigram approach is instead to predict a character using two characters at a time. This tutorial will also introduce a regularization technique for RNNs known as Dropout.
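
A sketch of the two ingredients in current Keras style, with illustrative shapes and rates rather than the article's exact values: two one-hot characters concatenated per time step, and dropout applied inside the recurrent layer:

```python
import tensorflow as tf

vocab = 27  # 'a'-'z' plus space, as in the earlier character-level posts

model = tf.keras.Sequential([
    # Two one-hot characters concatenated per time step: the "bigram" input.
    tf.keras.layers.Input(shape=(None, 2 * vocab)),
    # Dropout on the inputs and recurrent connections regularizes the LSTM.
    tf.keras.layers.LSTM(128, dropout=0.2, recurrent_dropout=0.2),
    tf.keras.layers.Dense(vocab),  # logits over the next character
])
model.summary()
```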

Recurrent Neural Networks (LSTM) Tutorial

Recurrent Neural Networks are one of the most widely used ANN structures for text and speech learning problems. RNNs are designed to work well when the input is a sequence that varies in length; speech and text are examples of such input.
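
To make the LSTM part concrete, here is one LSTM step written out in NumPy, showing the input, forget, and output gates and the cell state the tutorial builds on (dimensions are arbitrary):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """x: input, h: previous hidden state, c: previous cell state.
    W, U, b hold the parameters for all four gates stacked together."""
    z = W @ x + U @ h + b
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)  # gates squashed into (0, 1)
    c_new = f * c + i * np.tanh(g)                # forget some state, write some new
    h_new = o * np.tanh(c_new)                    # expose a gated view of the cell
    return h_new, c_new

# Tiny smoke test with random parameters.
n, d = 8, 4
rng = np.random.default_rng(0)
W, U, b = rng.normal(size=(4 * n, d)), rng.normal(size=(4 * n, n)), np.zeros(4 * n)
h, c = np.zeros(n), np.zeros(n)
h, c = lstm_step(rng.normal(size=d), h, c, W, U, b)
print(h.shape, c.shape)  # (8,) (8,)
```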

Text Mining Tutorial using Word2Vec (Continuous Bag of Words)

Continuous Bag of Words, also known as CBOW, is another Word2Vec technique used to find the relationships among keywords. It is effectively the opposite of the previous technique, the skip-gram model (http://www.tensorflowhub.org/2017/01/word2vec-skip-gram-model-tensorflow.html). We will find out how it differs and how that affects performance on the same dataset.
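
The opposition is easiest to see in the training pairs the two models consume. A tiny sketch, with a toy sentence and a context window of 1:

```python
sentence = "the quick brown fox jumps".split()
window = 1

for i, center in enumerate(sentence):
    context = sentence[max(0, i - window): i] + sentence[i + 1: i + 1 + window]
    print("skip-gram:", [(center, c) for c in context])  # center -> each context word
    print("CBOW:     ", (context, center))               # whole context -> center
```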

Understanding Text Corpus using Word2Vec (Skip Gram Model) - Tutorial

Understanding a text corpus is really hard for a computer under the older learning styles: they simply memorize the words but are not aware of how the words really behave with respect to other words (independently of the context).


Transposed ConvNets Tutorial (Deconvolution)

We have discussed convolutional neural networks in the previous tutorials, with examples in TensorFlow; in this one, I will introduce the transposed convolution, also called deconvolution or the inverse of convolution, along with the experiments I did in TensorFlow.
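
The core experiment can be reproduced as a shape check: settings that halve a feature map under a normal convolution double it again under the transposed version. A small sketch in current TensorFlow, with illustrative sizes:

```python
import tensorflow as tf

x = tf.random.normal([1, 16, 16, 8])  # NHWC input

down = tf.keras.layers.Conv2D(8, 3, strides=2, padding="same")(x)
up = tf.keras.layers.Conv2DTranspose(8, 3, strides=2, padding="same")(down)

print(x.shape, "->", down.shape, "->", up.shape)
# (1, 16, 16, 8) -> (1, 8, 8, 8) -> (1, 16, 16, 8)
```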

Tuning parameters to improve Accuracy in ConvNets

So far we have built convolutional neural networks with regularization techniques in TensorFlow, but we have not focused on improving the accuracy of the ConvNets. In this tutorial, we will tune the parameters to get as high an accuracy as possible; I was able to get 96%, let's see how much accuracy you can get.
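
The exact settings behind that 96% aren't reproduced here, but the usual knobs are the learning-rate schedule, batch size, and training length. For example, an exponentially decaying learning rate in current TensorFlow (the decay values below are illustrative):

```python
import tensorflow as tf

# Start with a relatively high learning rate and anneal it as training runs.
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.05,
    decay_steps=1000,
    decay_rate=0.9,
)
optimizer = tf.keras.optimizers.SGD(learning_rate=schedule, momentum=0.9)

print(float(schedule(0)), float(schedule(1000)))  # 0.05 -> 0.045
```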

Introducing Dropout and L2 Regularization in ConvNets

In this tutorial, I will show how dropout and L2 regularization affect convolutional neural networks. It is the same as what we did in the simple neural network. You will see a minor increase in accuracy, but that is not our main concern here; the main concern is to avoid overfitting using these two techniques.
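
Where the two attach in a ConvNet, sketched in current Keras style with placeholder rates and sizes rather than the article's values: L2 penalizes large kernel weights, while dropout randomly zeroes activations during training only:

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

model = tf.keras.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(16, 5, activation="relu",
                  kernel_regularizer=regularizers.l2(1e-3)),  # L2 on the kernels
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dropout(0.5),  # active only during training
    layers.Dense(10),
])
model.summary()
```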

Introducing Pooling in Convolutional Neural Networks

In the previous tutorial, we started with Convolutional Neural Networks; in this one, I will share the concept of pooling and how it works to improve a CNN. This also includes an upgraded version of the previous Python code with a max pooling function.
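
Max pooling on its own is easy to see on a toy feature map: each 2x2 window keeps only its largest value, so the map shrinks while the strongest responses survive. A quick check in current TensorFlow:

```python
import tensorflow as tf

x = tf.reshape(tf.range(16, dtype=tf.float32), [1, 4, 4, 1])  # NHWC, values 0..15
pooled = tf.nn.max_pool2d(x, ksize=2, strides=2, padding="VALID")

print(tf.squeeze(pooled).numpy())
# [[ 5.  7.]
#  [13. 15.]]
```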

Introducing Convolutional Neural Networks

In this tutorial, I will introduce Convolutional Neural Networks, which are computationally expensive but much more accurate than the simple Neural Network. The previous Python code will be converted into a Convolutional Neural Network.
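
A minimal sketch of what that conversion amounts to, in current Keras style (an MNIST-like 28x28 grayscale input is assumed; the original post used the lower-level TensorFlow API of the time): the dense layers are preceded by convolutions that scan small filters over the image.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(16, 5, strides=2, activation="relu"),  # scan 5x5 filters
    tf.keras.layers.Conv2D(32, 5, strides=2, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10),  # one logit per class
])
model.summary()
```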
