This post is a follow-up to my project Building a self-driving RC car. In Part 1, I used an Arduino to hack the RC car's controller, and according to my plan, Part 2 was supposed to be where all the cool Machine Learning work started. However, this post is going to be a crime against my initial plan, because it is a major update on the hardware side in preparation for building the Machine Learning algorithms (that’s why this section is called Part 1.5 🙂 ).
AI assistants are a hot topic these days. Chances are you have already encountered at least one of them, as a user or as a developer. In this post, I would like to talk about a software stack called Rasa, which you should definitely include in your toolbox if you want to build conversational assistants yourself.
In short, Rasa NLU and Rasa Core are two open-source Python libraries for building conversational AI. They are packed with Machine Learning: Rasa NLU handles natural language understanding, while Rasa Core handles dialogue management. Most importantly, the Rasa stack is easy to use, you don’t need massive amounts of training data to get started, and it is well suited for production.
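To make the natural language understanding part concrete, here is a toy sketch in plain Python of the task Rasa NLU solves: turning a raw user message into a structured result with an intent and extracted entities. This is not Rasa's API — Rasa trains a real ML pipeline for this — the hard-coded keyword rules and city list below are purely illustrative.

```python
# Toy illustration of the NLU task: map a raw message to an intent
# plus extracted entities. NOT Rasa's API -- Rasa learns this from
# training data; the keyword rules here just show the input/output shape.

def parse(message):
    """Return the recognized intent and entities for a message."""
    text = message.lower()
    # Hypothetical keyword rules standing in for a trained classifier.
    if any(word in text for word in ("hi", "hello", "hey")):
        intent = "greet"
    elif "weather" in text:
        intent = "ask_weather"
    else:
        intent = "fallback"
    # Crude "entity extraction": look for a known city name.
    entities = [{"entity": "city", "value": city}
                for city in ("berlin", "london") if city in text]
    return {"intent": intent, "entities": entities}

print(parse("What's the weather in Berlin?"))
# → {'intent': 'ask_weather', 'entities': [{'entity': 'city', 'value': 'berlin'}]}
```

The dialogue management half (Rasa Core's job) would then decide which action to take next, based on this structured result and the conversation history.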
Self-Driving RC Ferrari
In my spare time, I read a lot about Machine Learning, AI, and Robotics, and about their applications in our everyday life. While I have always found Machine Learning algorithms awesome, the process by which code makes a physical device perform specific actions (think of autonomous cars, smart home systems, etc.) has always looked somewhat “next level” to me. I decided to change that perception and learn how to do it myself. I believe it’s easier to learn difficult things when you make them fun, and that is how I decided to build my own self-driving RC car!
Over the past few years, Machine Learning has taken a leading role in the development of data-driven solutions. Among these, classification is by far one of the most commonly used areas of Machine Learning, widely applied in fraud detection, image classification, ad click-through rate prediction, identification of medical conditions, and a number of other areas. There is a range of different classification algorithms, but over the years the single-model approach has increasingly been replaced by ensemble methods, which combine a number of different algorithms and provide more accurate results than the separate models alone. If you have ever tried to apply an ensemble method to a big data set, you have almost certainly run into a very common problem: the computation takes hours, sometimes even days or weeks, unless you have a powerful machine.
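The intuition for why ensembles beat single models is that different models tend to make *different* mistakes, so combining them cancels errors out. A minimal hand-rolled sketch in plain Python (not a library API — the three "models" are hard-coded prediction lists chosen so each errs on a different sample):

```python
from collections import Counter

# True labels for five samples.
labels = [1, 0, 0, 1, 1]

# Three weak "models", each 80% accurate, but each wrong on a
# *different* sample (predictions hard-coded for clarity).
preds_a = [1, 0, 1, 1, 1]   # wrong on sample 2
preds_b = [1, 1, 0, 1, 1]   # wrong on sample 1
preds_c = [0, 0, 0, 1, 1]   # wrong on sample 0

def majority_vote(*pred_lists):
    """Combine per-model predictions by majority vote, per sample."""
    return [Counter(col).most_common(1)[0][0] for col in zip(*pred_lists)]

def accuracy(preds):
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

ensemble = majority_vote(preds_a, preds_b, preds_c)

print([accuracy(p) for p in (preds_a, preds_b, preds_c)])  # [0.8, 0.8, 0.8]
print(accuracy(ensemble))  # 1.0 -- the vote corrects every single-model error
```

Because no mistake is shared by two models, every wrong vote is outvoted by two correct ones — which is exactly why ensembles shine when the base models' errors are uncorrelated, and also why they multiply the computation cost the paragraph above complains about.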
At the Higgs Boson Data Science competition, everyone’s attention was caught by XGBoost, a new classification algorithm that outperformed the other Machine Learning algorithms used in the competition and brought its developers first place. By its nature, XGBoost is similar to GBM, because both are tree-based gradient boosting approaches, but its flexibility, scalability, and exceptional accuracy are superior to GBM and other classification methods.
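The core idea both GBM and XGBoost share is gradient boosting: build an additive model where each new tree is fit to the residual errors of the ensemble so far. Here is a toy, from-scratch sketch of that loop using depth-1 "stumps" on a tiny 1-D regression problem — the data, learning rate, and round count are illustrative, and real XGBoost adds regularization and second-order gradient information on top of this:

```python
# Toy gradient boosting for squared loss: each round fits a decision
# stump to the current residuals and adds a damped copy of it to the
# ensemble. Illustrative only -- not XGBoost's actual implementation.

xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [1.2, 1.0, 1.1, 3.9, 4.2, 4.0]   # made-up targets with a jump at x > 3

def fit_stump(xs, residuals):
    """Find the split whose two constant leaves best fit the residuals."""
    best = None
    for split in xs:
        left = [r for x, r in zip(xs, residuals) if x <= split]
        right = [r for x, r in zip(xs, residuals) if x > split]
        lmean = sum(left) / len(left) if left else 0.0
        rmean = sum(right) / len(right) if right else 0.0
        err = sum((r - (lmean if x <= split else rmean)) ** 2
                  for x, r in zip(xs, residuals))
        if best is None or err < best[0]:
            best = (err, split, lmean, rmean)
    _, split, lmean, rmean = best
    return lambda x: lmean if x <= split else rmean

def boost(xs, ys, rounds=20, lr=0.3):
    """Additive model: start at 0, repeatedly fit stumps to residuals."""
    stumps = []
    preds = [0.0] * len(xs)
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, preds)]
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        preds = [p + lr * stump(x) for p, x in zip(preds, xs)]
    return lambda x: sum(lr * s(x) for s in stumps)

model = boost(xs, ys)
print([round(model(x), 2) for x in xs])   # close to the training targets
```

Each round shrinks the residuals, so the sum of many weak stumps gradually approximates the target function — the same mechanism that, scaled up with clever engineering, makes XGBoost both fast and accurate.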