When you first start driving, you are inexperienced and, sometimes, reckless. As you age, you gain experience (and sense), and it becomes less likely that you’ll be involved in an accident. However, this trend won’t continue forever. In old age, your eyesight may deteriorate or your reactions may slow, and it again becomes more likely that you’ll be involved in an accident. In other words, the probability of an accident has a non-linear relationship with age. Finding and incorporating relationships like these can improve both the accuracy and the interpretation of your models.
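To make this concrete, here’s a quick sketch with synthetic data (the U-shaped risk curve and all numbers are made up for illustration). A straight line can’t capture a relationship that first falls and then rises, but adding a squared term lets even an ordinary linear model fit it:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical data: accident risk is high for young drivers,
# dips in middle age, and rises again for older drivers (a U-shape).
rng = np.random.default_rng(42)
age = rng.uniform(18, 90, 500)
risk = 0.001 * (age - 50) ** 2 + 0.05 + rng.normal(0, 0.02, 500)

# A straight line cannot capture the U-shape...
X = age.reshape(-1, 1)
linear = LinearRegression().fit(X, risk)

# ...but adding age^2 as a feature lets a linear model fit it well.
X_poly = np.column_stack([age, age ** 2])
quadratic = LinearRegression().fit(X_poly, risk)

print(f"R^2 with age only:    {linear.score(X, risk):.2f}")
print(f"R^2 with age + age^2: {quadratic.score(X_poly, risk):.2f}")
```

The jump in R² comes entirely from the extra `age ** 2` column; the model itself is still plain linear regression.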
For some, the term Artificial Intelligence can provoke thoughts of progress and productivity. For others, the outlook is less positive. Many concerns, such as unfair decisions, workers being replaced, and a lack of privacy and security, are valid. To make things worse, many of these issues are unique to AI, which means existing guidelines and laws are ill-suited to address them. This is where Responsible AI comes in. It aims to address these issues and create accountability for AI systems.
When we talk about AI, we usually mean a machine learning model that is used within a system to…
With 11,768,848 comments, ‘Dynamite’ by BTS is the most commented-on video on YouTube. Suppose a BTS member wanted to know how these listeners felt about the song. Reading one comment per second, it would still take him over 4 months. Luckily, using machine learning, he could automatically label each comment as positive or negative. This is known as sentiment analysis. Similarly, through online reviews, survey responses and social media posts, businesses have access to large amounts of customer feedback. Sentiment analysis has become essential for analysing and understanding this data.
Today marks one year since I posted my first data science article on Medium. It did surprisingly well, and that initial success gave me a lot of motivation. I’ve posted 11 other articles since. Unfortunately, when the others did not do as well, it became obvious I’d benefited from a bit of beginner's luck. Even so, I’ve still managed to write and post articles consistently. This is because I am not only motivated by views but also by the many other benefits of writing.
The first benefit is that it has helped me to learn more about data science…
If you’ve ever applied for a technical role, you’ve probably sent the company a link to your GitHub profile. The information on this profile can give a good indication of your coding ability and fit within a team. The downside of all this information is that it may take a recruiter a long time to assess it. To save time, machine learning could potentially be used to automatically rate your coding ability.
At first, the concept of an unfair machine learning model may seem like a contradiction. How can machines, with no concept of race, ethnicity, gender or religion, actively discriminate against certain groups? But algorithms do and, if left unchecked, they will continue to make decisions that perpetuate historical injustices. This is where the field of algorithm fairness comes in.
In this article, we will explore the concept of model bias and how it relates to the field of algorithm fairness. To highlight the importance of this field, we will discuss examples of biased models and their consequences. These include models…
The side effects of medication can depend on your gender. Inhaling asbestos increases the chance of lung cancer more for smokers than for non-smokers. If you are moderate or liberal, your acceptance of climate change tends to increase with higher levels of education; the opposite is true for the most conservative. These are all examples of interactions in data. Identifying and incorporating them can drastically improve the accuracy and change the interpretation of your models.
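Here’s a small sketch of what an interaction term looks like in practice, using the asbestos-and-smoking example. The data is synthetic and the effect sizes are invented purely for illustration: risk depends on asbestos exposure, on smoking, and, crucially, on their product:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical data: asbestos exposure raises risk far more for smokers.
rng = np.random.default_rng(0)
n = 1000
asbestos = rng.integers(0, 2, n)  # 1 = exposed
smoker = rng.integers(0, 2, n)    # 1 = smoker
risk = (0.1 * asbestos + 0.2 * smoker
        + 0.5 * asbestos * smoker       # the interaction effect
        + rng.normal(0, 0.05, n))

# Main effects only: the model misses the combined effect...
X_main = np.column_stack([asbestos, smoker])
main_model = LinearRegression().fit(X_main, risk)

# ...adding the product as a feature captures it directly.
X_inter = np.column_stack([asbestos, smoker, asbestos * smoker])
inter_model = LinearRegression().fit(X_inter, risk)

print(f"R^2, main effects only:  {main_model.score(X_main, risk):.2f}")
print(f"R^2, with interaction:   {inter_model.score(X_inter, risk):.2f}")
```

The coefficient on the `asbestos * smoker` column is exactly the extra risk that only appears when both conditions hold, which is also what changes the model’s interpretation.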
This US election has brought with it high tensions, unfounded fraud allegations and, most importantly, some great visualisations. Well, important to data scientists at least. It seems like you can’t look anywhere without seeing some novel way of presenting election results. So why not add a few more to the mix? In this tutorial, you will learn how to create some of your own visualisations using Python.
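As a taste of the kind of chart the tutorial builds up to, here’s a diverging bar chart of two-party margins in matplotlib. The states and numbers below are made up placeholders, not real results:

```python
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

# Illustrative (made-up) two-party margins; positive = candidate A leads.
states = ["State A", "State B", "State C", "State D", "State E"]
margins = [4.2, -1.1, 0.8, -6.5, 2.3]
colors = ["tab:blue" if m > 0 else "tab:red" for m in margins]

fig, ax = plt.subplots(figsize=(6, 3))
ax.barh(states, margins, color=colors)
ax.axvline(0, color="black", linewidth=0.8)  # the 50/50 line
ax.set_xlabel("Margin (percentage points)")
ax.set_title("Two-party margin by state (illustrative data)")
fig.tight_layout()
fig.savefig("margins.png")
```

Colouring each bar by the sign of the margin and anchoring a vertical line at zero is what makes the chart read as “who leads where” at a glance.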
Should we always trust a model that performs well? A model could reject your application for a mortgage or diagnose you with cancer. The consequences of these decisions are serious and, even if they are correct, we would expect an explanation. A human would be able to tell you that your income is too low for a mortgage or that a specific cluster of cells is likely malignant. A model that provided similar explanations would be more useful than one that just provided predictions.
By obtaining these explanations, we say we are interpreting a machine learning model. In the rest…
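One of the simplest places to start is with models that are interpretable by construction. As a quick sketch, a logistic regression trained on scikit-learn’s built-in breast cancer dataset comes with its own explanations: the learned coefficients show how each cell measurement pushes a prediction towards malignant or benign:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Predict malignant vs benign from cell measurements.
data = load_breast_cancer()
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(data.data, data.target)

# For a linear model, the coefficients are a built-in explanation:
# each weight says how strongly a feature pushes the prediction.
coefs = model.named_steps["logisticregression"].coef_[0]
top = sorted(zip(data.feature_names, coefs),
             key=lambda t: abs(t[1]), reverse=True)[:5]
for name, weight in top:
    print(f"{name:25s} {weight:+.2f}")
```

Standardising the features first matters here: it puts all the measurements on the same scale, so the coefficient magnitudes can be compared directly.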
By adding powers of existing features, polynomial regression can help you get the most out of your dataset. It allows you to model non-linear relationships even with simple models, like linear regression. This can improve the accuracy of your models but, if used incorrectly, it can lead to overfitting. We want to avoid this, as it would leave you with a model that performs poorly on new data.
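A short sketch with synthetic data shows both sides of this trade-off. The true relationship below is quadratic, so degree 2 is the “right” amount of flexibility, while a much higher degree gives the model room to chase noise:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Synthetic data: a quadratic relationship plus noise.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, 100).reshape(-1, 1)
y = X[:, 0] ** 2 + rng.normal(0, 1, 100)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

scores = {}
for degree in (1, 2, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    scores[degree] = (model.score(X_train, y_train),
                      model.score(X_test, y_test))
    print(f"degree {degree:2d}: "
          f"train R^2 = {scores[degree][0]:.2f}, "
          f"test R^2 = {scores[degree][1]:.2f}")
```

Typically, the degree-1 model underfits, degree 2 captures the curve, and a very high degree starts fitting the training noise, which shows up as a gap between the train and test scores.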
Risk Data Scientist — Building credit risk and fraud models for the man. Exploring AI topics for myself.