
10 February 2021

HR Tech: Demystified - Algorithms

[Image: hands moving shapes into a new order on a table]

Is there really anything intimidating about algorithms? In this blog, MHR's Research Director and Chief Architect discuss algorithms in HR tech and why they can sometimes be misleading.

Algorithms are filtering into our day-to-day lives; they control what adverts we see and what TV shows get recommended to us. They are increasingly reported on in the press – often depicted as big bad algorithms – affecting people’s lives in different ways. Some examples seem to be harmless, but others can have life-changing consequences.

What is an algorithm and why do we need them?

An algorithm can be described as a sequence of rules and decisions applied in order. A cake recipe could be classed as an algorithm, for example. It is normally, however, a word reserved for the world of computer science where the definition applies to logical steps applied in software code. You can think of there being two general types of algorithms – rules-based and optimisation.

Rules-based algorithms are step-by-step instructions – for example, the steps taken to calculate an employee’s holiday entitlement. The rules are applied in order, and any decisions, such as whether to add additional entitlement due to length of service, are well-defined and clear.
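To make this concrete, here is a minimal sketch of a rules-based holiday entitlement calculation. The base entitlement, long-service bonus and pro-rata rule are invented for illustration – they are not real legislation or any vendor's actual policy:

```python
def holiday_entitlement(years_of_service: int, works_full_time: bool) -> float:
    """Return annual holiday entitlement in days (illustrative rules only)."""
    entitlement = 25.0          # hypothetical base entitlement
    if years_of_service >= 5:   # a well-defined, clear decision
        entitlement += 2.0      # hypothetical long-service bonus
    if not works_full_time:
        entitlement *= 0.5      # hypothetical pro-rata rule for part-time staff
    return entitlement

print(holiday_entitlement(years_of_service=6, works_full_time=True))   # 27.0
print(holiday_entitlement(years_of_service=2, works_full_time=False))  # 12.5
```

Every input produces a predictable output, which is exactly what makes rules-based algorithms easy to trust and to test.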

Optimisation algorithms build on rules-based algorithms, but they can also make decisions on your behalf. There are two types: search-based and historical-data-based.

When you use your GPS, it knows where you are, and you tell it where you want to go. The GPS uses built-in rules to capture your preferences, draw the map and guide you along. When it comes to figuring out the route, however, it uses a search optimisation algorithm. This kind of algorithm considers your personal preferences, the road and traffic conditions, and the manufacturer’s standards to decide on the optimal path for you.

Historical-data optimisation algorithms, on the other hand, use past data to make future decisions. For example, a fraudulent-expense detection algorithm looks at historical fraudulent expenses and uses them as the basis on which it judges all future expenses that are submitted. Some incredible developments have emerged from optimisation techniques, such as the ability to recognise objects in images, speech recognition and much more.
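The route-finding example can be sketched as a search optimisation: the algorithm explores possible paths and picks the cheapest one. The toy road network and costs below are entirely made up – in a real GPS the edge costs would blend distance, traffic and your preferences:

```python
import heapq

# A made-up road network: each road has a cost (e.g. minutes of travel).
roads = {
    "home":        [("high_street", 5), ("motorway", 2)],
    "high_street": [("office", 3)],
    "motorway":    [("office", 10)],
    "office":      [],
}

def cheapest_route(start: str, goal: str) -> tuple[float, list[str]]:
    """Dijkstra-style search: always expand the cheapest path found so far."""
    queue = [(0, start, [start])]   # (cost so far, current node, path taken)
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, edge_cost in roads[node]:
            heapq.heappush(queue, (cost + edge_cost, neighbour, path + [neighbour]))
    return float("inf"), []

print(cheapest_route("home", "office"))  # (8, ['home', 'high_street', 'office'])
```

Note that the motorway looks cheap at first (cost 2), but the search correctly discovers that the high street route is cheaper overall – the algorithm is optimising the whole journey, not each individual step.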

Algorithms are increasingly sought after. With a growing population and extremely competitive markets, more companies are looking to automate rules and decisions so that they can scale their products out to more consumers, reduce operational costs, and react quickly to market changes through automated decisions.

Examples of algorithms in HR and Payroll

In the world of work, algorithms are being developed to help with all aspects of the employee lifecycle, from recruitment to workforce and performance management. But as we’ll discover, caution needs to be taken when using them.

Rules-based algorithms are used successfully where they increase consistency and efficiency. You don’t want to calculate your payroll manually every month; instead, you trust that your payroll system will produce accurate outputs. The rules here are well defined and understood, so the benefit of a payroll algorithm is that it speeds up the process by automating the calculations, and reduces errors.

Optimisation algorithms are riskier, as they are generally used to make decisions on your behalf, and they need to be trained on how those decisions should be made. Going back to the GPS example, a GPS needs to know your target location and what to include or avoid when it searches for your optimal route: your preferences, the road and traffic conditions, and the manufacturer’s standards. Training is even more important for historical-data-based algorithms. Historical data might not be diverse enough, might not cover enough scenarios, or might simply be wrong for the task at hand. Exactly the same thing happens when optimisation algorithms are developed to screen candidates during recruitment. Issues occur when the creator of the algorithm gives it the wrong preferences, the wrong historical data or the wrong target, causing unintended consequences such as unfairness or inconsistency in the rejection of candidates.
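A toy sketch, with entirely invented data, shows how a non-diverse historical dataset skews a screening algorithm. If every past hire looks the same, a system that learns "hire people like our past hires" will reject perfectly capable candidates simply for being different:

```python
# Invented historical data: every past hire happens to look alike.
past_hires = [
    {"degree": "maths", "hobby": "chess"},
    {"degree": "maths", "hobby": "chess"},
    {"degree": "maths", "hobby": "rowing"},
]

def screen(candidate: dict) -> bool:
    """Accept only candidates who resemble someone in the historical data."""
    return any(
        candidate["degree"] == hire["degree"] and candidate["hobby"] == hire["hobby"]
        for hire in past_hires
    )

# A capable candidate is rejected purely because nobody like them was hired before:
print(screen({"degree": "physics", "hobby": "chess"}))  # False
print(screen({"degree": "maths", "hobby": "chess"}))    # True
```

The algorithm is behaving exactly as built; the unfairness comes from the narrow data it was given, which is why the diversity and relevance of training data matters so much.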

A real-world example of this happened in the US where a personality test was used to screen candidates for entry-level roles at a supermarket. The test excluded particular personality traits and so rejected candidates who were more than suitable to do the job. This solved an issue for the supermarket who had far too many applicants to review manually, and so cut down the interviewee numbers to a more manageable shortlist. But this was incredibly unfair for the rejected candidates, especially when multiple employers in the neighbourhood used the same test and the candidates were unfairly rejected more than once. Aside from being plain wrong, this algorithm meant that any employer who used it was restricting their talent pool and limiting the diversity within their workforce.

Not all biased algorithms are this easy to spot. Consider a shift allocation algorithm, which automates the time-consuming manual task of assigning people to work shifts. On the face of it, it’s a good idea: optimising shift allocation gets very complex very quickly once you take into account all of the shift constraints, employee working patterns, absences, and the skills and resources required. Automating the task could actually increase fairness, as you can imagine some supervisors giving the “best” shifts to their favourites in the team. But these algorithms can fail when their engineers don’t take the diverse needs and preferences of the workforce into account. For example, a supervisor will know who works best with whom, or who struggles to work particular days because of childcare, both of which can be hard to encode in an optimisation algorithm.
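As a minimal sketch of why encoding those constraints matters, the toy allocator below will only assign people to shifts they have declared themselves available for. All of the names, shifts and availability data are invented, and a real optimiser would juggle far more constraints (skills, rest periods, fairness across weeks):

```python
# Invented example data: three shifts and each person's declared availability.
shifts = ["mon_early", "mon_late", "sat_early"]
availability = {
    "amira": {"mon_early", "mon_late"},
    "ben":   {"mon_late", "sat_early"},
    "chloe": {"mon_early", "sat_early"},  # e.g. childcare rules out late shifts
}

def allocate(shifts, availability):
    """Greedily fill each shift with someone available and not yet assigned."""
    assignment = {}
    assigned = set()
    for shift in shifts:
        for person, can_work in availability.items():
            if shift in can_work and person not in assigned:
                assignment[shift] = person
                assigned.add(person)
                break
    return assignment

print(allocate(shifts, availability))
# {'mon_early': 'amira', 'mon_late': 'ben', 'sat_early': 'chloe'}
```

If the availability data were missing – say, nobody ever asked Chloe about childcare – the same algorithm would happily assign her an unworkable shift. The logic isn’t biased; the gap in what it was told to consider is.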

Deciding on when to use which algorithm

We’ve seen in these examples that algorithms can be both helpful and harmful. Ideally, algorithms will help you to run your HR and payroll processes efficiently and aid your decision making, without eliminating the human element that you contribute. As a rule of thumb, rules-based algorithms that are governed by legislation or industry standards are unlikely to be biased – for example, your monthly payroll run, or a performance management feature such as check-ins.

However, your organisation needs to be especially careful and sceptical when purchasing software that uses optimisation algorithms. The best thing to do is to ask questions to better understand what is happening under the hood:

  • Can the software vendor clearly explain why they have chosen a certain type of algorithm?
  • Can they describe how it reduces bias in the process?
  • What factors (i.e. data) feed into any decisions made?
  • How did they test it?
  • Can you, as an HR professional, change or override a decision?
  • Can you feed back into the system so that it improves?

Remember, it is important that you use critical thinking and trust your domain experience when you evaluate different software providers.

Thoughts on the future of algorithms in HR

It is clear that algorithms in HR Tech are here to stay. We’ll see more software vendors develop algorithms; some of which will go on to improve the work lives of HR teams, managers and employees, and some of which may cause controversy and will need to be challenged.

Frameworks are currently being developed to assess whether algorithms are ethical. Over time, legislation will catch up and governance will be in place to keep vendors in check. In the meantime, creators of HR tech have a responsibility to assess their own algorithms, and as an HR and payroll professional you can play a part too, by not accepting them at face value and questioning how they were created.

We hope we’ve covered enough for you to have a better understanding of what algorithms are and why you should care about them, and, more importantly, to have shared some advice on how to navigate HR tech that uses algorithms without being overwhelmed or misled.

[Image: Zan Kavtaskin and Hannah Jeacock, hosts of the HR Tech: Demystified podcast]

Zan and Hannah

Zan Kavtaskin is Chief Architect and Hannah Jeacock is Research Director at MHR. The two of them host the podcast HR Tech: Demystified, which you can find on Spotify, Apple, and Google Podcasts. The podcast is designed to simplify the concepts behind HR tech, giving you more confidence when deciding which technology tools are right for you and your people. A new episode is released each month.
