Implementing Artificial Bee Colony Algorithm to Solve Business Problems
The Artificial Bee Colony (ABC) algorithm is an optimization technique based on the intelligent foraging behavior of a honey bee swarm.
We’ll be looking at the ABC algorithm in detail through its purpose, implementation…
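To give a feel for how the pieces fit together before the full walkthrough, here is a minimal, illustrative sketch of the canonical ABC loop (employed, onlooker, and scout bee phases) minimizing a toy objective. The objective, bounds, and parameter values are placeholders, not the article's own code.

```python
import numpy as np

def abc_minimize(f, bounds, n_sources=20, limit=30, n_iter=200, seed=0):
    """Minimal Artificial Bee Colony loop: employed, onlooker and scout phases."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)

    # Random initial food sources and their objective values.
    X = rng.uniform(lo, hi, size=(n_sources, dim))
    vals = np.array([f(x) for x in X])
    trials = np.zeros(n_sources, dtype=int)

    def try_neighbour(i):
        """Perturb one dimension of source i toward/away from a random partner."""
        k = rng.choice([j for j in range(n_sources) if j != i])
        j = rng.integers(dim)
        cand = X[i].copy()
        phi = rng.uniform(-1, 1)
        cand[j] = np.clip(X[i, j] + phi * (X[i, j] - X[k, j]), lo[j], hi[j])
        cand_val = f(cand)
        if cand_val < vals[i]:          # greedy selection
            X[i], vals[i], trials[i] = cand, cand_val, 0
        else:
            trials[i] += 1

    for _ in range(n_iter):
        # Employed bee phase: one neighbourhood search per food source.
        for i in range(n_sources):
            try_neighbour(i)

        # Onlooker bee phase: sources are revisited proportionally to fitness.
        fitness = np.where(vals >= 0, 1.0 / (1.0 + vals), 1.0 + np.abs(vals))
        probs = fitness / fitness.sum()
        for i in rng.choice(n_sources, size=n_sources, p=probs):
            try_neighbour(i)

        # Scout bee phase: abandon sources that stopped improving.
        for i in np.where(trials > limit)[0]:
            X[i] = rng.uniform(lo, hi)
            vals[i] = f(X[i])
            trials[i] = 0

    best = int(np.argmin(vals))
    return X[best], vals[best]

# Toy usage: minimize the sphere function in 5 dimensions.
bounds = np.array([[-5.0, 5.0]] * 5)
x_best, f_best = abc_minimize(lambda x: float(np.sum(x ** 2)), bounds)
print(x_best, f_best)
```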
Top Data Science Tools for 2022
The list includes tools for beginners and experts working in the data field. These tools will help you with data analytics, maintaining databases, performing machine learning tasks, and generating reports. The tools highlighted are DuckDB, Postgres, Beautiful Soup, Psycopg2, and Zyte. The list also includes tools that have helped me handle new and unseen datasets faster.
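As a quick taste of one of these tools, here is a small, hedged example of using DuckDB to run SQL directly over an in-memory pandas DataFrame; the DataFrame and its columns are made up for illustration, not taken from the article.

```python
import duckdb
import pandas as pd

# Toy sales table; the column names are placeholders for this illustration.
df = pd.DataFrame({
    "region": ["EU", "EU", "US", "US"],
    "revenue": [120.0, 80.0, 200.0, 150.0],
})

# DuckDB can reference a local pandas DataFrame by name in plain SQL.
result = duckdb.query(
    "SELECT region, SUM(revenue) AS total_revenue FROM df GROUP BY region"
).to_df()
print(result)
```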
How to Make Pandas Functions More Useful
Making the most of it.
Pandas is arguably the most commonly used library in the data science ecosystem. Its functions make complex data cleaning and analysis tasks simple.
However, we often use Pandas functions with their default settings, which prevents us from making the most of them. In most cases, parameters make a function more…
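To make the point concrete, here is a small illustrative example (mine, not the article's) of how non-default parameters change what familiar pandas functions return; the toy DataFrame and its columns are made up.

```python
import pandas as pd

df = pd.DataFrame({
    "category": ["A", "B", "A", "C", "A", "B", None],
    "price": [10.0, 12.5, 9.0, 30.0, 11.0, 13.0, 8.0],
})

# Default call: raw counts, missing values silently dropped.
print(df["category"].value_counts())

# Non-default parameters: relative frequencies, and NaN kept as its own bucket.
print(df["category"].value_counts(normalize=True, dropna=False))

# Same idea with groupby: dropna=False keeps rows whose group key is missing.
print(df.groupby("category", dropna=False)["price"].mean())
```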
Customizing GPT-3 for Your Application
Developers can now fine-tune GPT-3 on their own data, creating a custom version tailored to their application. Customizing makes GPT-3 reliable for a wider variety of use cases and makes running the model cheaper and faster. With fine-tuning, one API customer was able to increase correct outputs from 83% to 95%. By adding new data from their product each week, another reduced error rates by 50%.
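As a rough sketch of what that workflow looked like at the time of the announcement: the training data is a JSONL file of prompt/completion pairs, which is then uploaded and used to start a fine-tune job. The example records, file name, and base-model choice below are placeholder assumptions, and the exact commands may differ in current versions of the OpenAI API.

```python
import json

# Hypothetical support-ticket examples; the early GPT-3 fine-tuning flow
# expected JSONL records with "prompt" and "completion" fields.
examples = [
    {"prompt": "Customer: My order arrived damaged.\nAgent:",
     "completion": " I'm sorry to hear that. I can arrange a replacement right away."},
    {"prompt": "Customer: How do I reset my password?\nAgent:",
     "completion": " You can reset it from the login page via 'Forgot password'."},
]

with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# The file is then uploaded and a fine-tune job started, e.g. with the
# older OpenAI CLI:  openai api fine_tunes.create -t train.jsonl -m curie
# Check the current OpenAI documentation, as the fine-tuning interface has evolved.
```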
Hyperparameters in Deep RL
In part 6 of the Hands-on Course on Reinforcement Learning, we will learn how to tune hyperparameters in Deep RL. In part 5 we built a perfect agent to solve the CartPole environment using Deep Q-Learning. We will use the best open-source library for hyperparameter search in the Python ecosystem: Optuna. The code for this lesson is in this GitHub repo.
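To illustrate the kind of search loop Optuna enables (this is my own minimal sketch, not the course's code), here is an objective function where `train_and_evaluate` is a hypothetical stand-in for training a Deep Q-learning agent on CartPole and returning its average reward.

```python
import optuna

def train_and_evaluate(learning_rate, gamma, batch_size, hidden_size):
    """Hypothetical stand-in: train a Deep Q-learning agent on CartPole with
    these hyperparameters and return its average episode reward."""
    # Dummy score so the sketch executes; replace with real training/evaluation.
    return gamma

def objective(trial):
    # Optuna proposes a value for each hyperparameter from the given ranges.
    learning_rate = trial.suggest_float("learning_rate", 1e-5, 1e-2, log=True)
    gamma = trial.suggest_float("gamma", 0.90, 0.999)
    batch_size = trial.suggest_categorical("batch_size", [32, 64, 128])
    hidden_size = trial.suggest_int("hidden_size", 32, 256)
    return train_and_evaluate(learning_rate, gamma, batch_size, hidden_size)

# Maximize average reward over a fixed budget of trials.
study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print(study.best_params, study.best_value)
```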
Three Interpretability Methods to Consider When Developing Your Machine Learning Model
This article presents three interpretability techniques that you might want to consider when developing your machine learning model: SHAP (SHapley Additive exPlanations), LIME (Local Interpretable Model-agnostic Explanations), and Anchors. In addition to providing examples for each method, we compare and contrast the three methods, discuss their limitations, and provide papers and references for further reading.
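For a flavor of the first of these, a minimal SHAP sketch on a toy tree model might look like the following; the dataset and model are placeholders rather than the article's own example.

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Placeholder model and data, not the article's example.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])

# Global summary of which features push predictions up or down.
shap.summary_plot(shap_values, X.iloc[:100])
```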
Understanding the Outputs of Multi-Layer Bi-Directional LSTMs
In this short tutorial, I break down the outputs of Multi-Layer Bi-Directional LSTMs, with an example of how to do so in PyTorch.
In the world of machine learning, long short-term memory networks (LSTMs) are a powerful tool for processing sequences of data such as speech…
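As a quick illustration of the shapes involved (my own minimal example, not the tutorial's code):

```python
import torch
import torch.nn as nn

batch_size, seq_len, input_size, hidden_size, num_layers = 4, 10, 8, 16, 2

# Two stacked layers, both directions, batch-first tensors.
lstm = nn.LSTM(input_size, hidden_size, num_layers=num_layers,
               bidirectional=True, batch_first=True)

x = torch.randn(batch_size, seq_len, input_size)
output, (h_n, c_n) = lstm(x)

# `output`: hidden states of the LAST layer for every time step,
# forward and backward directions concatenated on the feature dimension.
print(output.shape)  # torch.Size([4, 10, 32]) -> (batch, seq, 2 * hidden_size)

# `h_n` / `c_n`: final hidden/cell state for EVERY layer and direction.
print(h_n.shape)     # torch.Size([4, 4, 16])  -> (num_layers * 2, batch, hidden_size)

# The last layer's final forward and backward hidden states:
h_forward, h_backward = h_n[-2], h_n[-1]   # each (batch, hidden_size)
```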
Merging Multiple Datasets at Scale
Using AdventureWorks data (released under the MIT license), consider the impact of promotions on internet sales. Unfortunately, the data appears to be flawed: whether or not a promotion was in place, the discount amount is always zero and the sales amount is the same. Fortunately, a promotion dataframe exists that contains information on each promotion and its discount percentage. This data will need to be joined to the sales data and an actual sale price calculated.
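A hedged sketch of what that join might look like in pandas is below; the column names follow the typical AdventureWorks naming (PromotionKey, DiscountPct, UnitPrice, OrderQuantity) but are assumptions about the exact dataframes used in the article.

```python
import pandas as pd

# Placeholder frames standing in for the AdventureWorks sales and promotion data.
sales = pd.DataFrame({
    "SalesOrderNumber": ["SO1", "SO2", "SO3"],
    "PromotionKey": [1, 2, 2],
    "UnitPrice": [100.0, 250.0, 40.0],
    "OrderQuantity": [1, 2, 3],
})
promotions = pd.DataFrame({
    "PromotionKey": [1, 2],
    "DiscountPct": [0.0, 0.10],
})

# Left join keeps every sale, even if a promotion row were missing.
merged = sales.merge(promotions, on="PromotionKey", how="left")

# Recompute the actual sale price using the promotion's discount percentage.
merged["ActualSalesAmount"] = (
    merged["UnitPrice"] * merged["OrderQuantity"] * (1 - merged["DiscountPct"].fillna(0))
)
print(merged)
```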
How to Create a Dataset for Machine Learning
The entry barrier to the world of algorithms is getting lower by the day, but high-quality data is still a scarce resource. Even having access to quality datasets for basic ML algorithm testing and ideation is a challenge. The sanest advice is to start with simple, small-scale datasets that you can plot in two dimensions to understand the patterns visually and see the inner workings of the ML algorithm.
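In that spirit, here is a small illustrative example (mine, not the article's) of generating a simple two-dimensional dataset with scikit-learn and plotting it to eyeball the structure before handing it to an algorithm.

```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs

# A tiny, two-dimensional synthetic dataset: easy to plot, easy to reason about.
X, y = make_blobs(n_samples=300, centers=3, n_features=2,
                  cluster_std=1.2, random_state=42)

plt.scatter(X[:, 0], X[:, 1], c=y, cmap="viridis", s=15)
plt.xlabel("feature 1")
plt.ylabel("feature 2")
plt.title("Synthetic 2D dataset for quick algorithm prototyping")
plt.show()
```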