No sexism, racism or bias machine learning, please

I published a piece on Fujitsu’s global blog site in October 2017 with some thoughts on machine learning, and thought I’d share it here as well. The original Fujitsu piece is at this link.

As trials, deployments and general awareness of machine learning and AI [see here for background] increase, more and more reports about their use will emerge. On the upside, assistance with medical diagnoses, revolutionised opportunities for autonomous vehicles, substantially improved wind turbine efficiency, the prevention of sinkholes and the possibility of finding missing persons are just some examples of the good that is already being realised. From time to time, there are also reports of the not-so-good, or causes for concern. Examples include facial recognition that fails to work on black faces, and Facebook having to shut down two AI bots that were communicating with each other in a language of their own that humans couldn’t understand.

The rapid growth in AI has been driven by two factors: the ability to gather and store vast amounts of data, and the computing power capable of consuming and, ideally, making sense of that data. As such, there has been an explosion of technology platforms that can take these solutions from idea to action.

The scope I’d like to consider here is “the use of machine learning and artificial intelligence to make decisions based upon individual personal characteristics, decisions that directly impact those individuals.” Such examples require very careful consideration. It’s evident that these environments can exhibit bias in a number of ways, and the two most obvious causes are: first, a system that is programmed or developed in a way that embeds bias (for example, by not ensuring diversity of thought and not consciously considering where bias could be inadvertently embedded); and second, training data sets that themselves carry bias because they arise from previously biased human behaviour.
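To make that second cause concrete, here is a minimal sketch, using entirely synthetic data and a hypothetical recruitment scenario, of how a model trained on historically biased decisions absorbs that bias even though nothing biased is explicitly programmed:

```python
# A sketch with entirely synthetic data: past hiring decisions penalise
# group B, so a model trained on them learns the same penalty.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)      # 0 = group A, 1 = group B (hypothetical)
skill = rng.normal(0.0, 1.0, n)    # a genuinely job-relevant feature
# Historical decisions: the same skill bar for everyone, but group B penalised.
hired = (skill - 0.8 * group + rng.normal(0.0, 0.3, n)) > 0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)
# The learned weight on 'group' comes out strongly negative: the bias has
# been absorbed from the data, even though nothing biased was "programmed".
print(dict(zip(["skill", "group"], model.coef_[0])))
```

Nothing ‘malicious’ is written into the code; the bias arrives packaged inside the training data, which is precisely why the provenance of data sets matters.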

Let’s take recruitment as an example. Companies have a choice. If they use AI simply to increase the efficiency of their business, their perspective may well be that they’re not actually concerned about bias: as long as the technology reaches the same conclusions a member of staff would have reached ‘manually’, there will be minimal impact on business process flow and maximum efficiency.

Alternatively, companies could view this as an opportunity to create change for their business – if you like “resetting” the business so as to remove bias.

If companies see AI as an opportunity to make changes, then they’ll need to be really clear about two things. Firstly, the change that is desired: the future state being targeted. Secondly, a very realistic appraisal of the situation as it is today. Understanding the difference between the two allows one to look at existing business processes and understand the extent to which the data already being gathered may be biased. Steps will then need to be taken to ensure that the bias is not learned or repeated. It will also enable an informed determination of the types of people charged with specifying, designing, implementing and testing such systems.
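As an illustration of what that realistic appraisal could involve, the sketch below computes per-group selection rates from historical decision records and flags disparities; the records, field names and the 0.8 flag (which echoes the US ‘four-fifths’ guideline) are assumptions made for the example, not a prescribed method:

```python
# An illustrative audit of historical decision records. Records, field
# names and the 0.8 flag (echoing the US "four-fifths" guideline) are
# assumptions made for the sake of the sketch.
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, was_selected) pairs."""
    totals = defaultdict(int)
    chosen = defaultdict(int)
    for group, was_selected in records:
        totals[group] += 1
        chosen[group] += int(was_selected)
    return {g: chosen[g] / totals[g] for g in totals}

history = [("A", True), ("A", True), ("A", False),
           ("B", True), ("B", False), ("B", False)]
rates = selection_rates(history)
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best
    flag = " <- investigate" if ratio < 0.8 else ""
    print(f"{group}: selection rate {rate:.2f}, ratio {ratio:.2f}{flag}")
```

A ratio well below 1.0 doesn’t prove bias on its own, but it tells the business where to look before any of that data is used for training.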

More widely, however, these situations are increasingly throwing up ethical challenges for businesses. As such, I believe that measures such as the following are urgently needed to ensure that the use of machine learning and artificial intelligence to make decisions based upon individual personal characteristics, decisions that directly impact those individuals, develops in acceptable ways. It has to be possible to gain that assurance both in the design and in the ongoing operation of such systems:

  • National, regional and international governing bodies need urgently to consider how ethics are to be managed.
  • Similarly, a certification standard should be developed that, for example, verifies that appropriate measures have been taken to ensure bias is not built in and cannot subsequently be learned.
  • These systems should be specified so that they have to explain how they arrived at a decision or recommendation. This may seem like an oxymoron, because the whole principle of machine learning and artificial intelligence systems is that they enable us to consume vast amounts of data and reach conclusions that human beings do not have the capacity to comprehend. However, it is that very lack of comprehension that presents a significant risk. Consider, as a comparison, how a medical specialist consultant has to explain their diagnosis of a complex condition to a fellow consultant in order to gain a second opinion.
  • In addition to requiring the rationale of a decision or recommendation to be presented, there should be a set of tests, defined by the business and applied to all outputs, so that the business may satisfy itself that the results can be trusted. This could be enhanced by having machine learning or artificial intelligence agents that use different data sets to test the outputs of other environments. For example, a local, national or regional government could make demographic and employment data sets available as a baseline standard for comparisons.
  • Corporations should consider how they wish to undertake ‘decision sampling’, whereby from time to time tasks are undertaken both by humans and by the system, with the outputs compared for any undesirable outcomes or innovative new learnings (a sketch of such a harness follows this list).
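On that last point, here is a minimal sketch of what a ‘decision sampling’ harness might look like; the decision functions and the sampling rate are hypothetical placeholders rather than anything prescribed above:

```python
# A sketch of a 'decision sampling' harness. The decision functions and
# the sampling rate are hypothetical placeholders.
import random

def decision_sample(cases, system_decide, human_decide, rate=0.05):
    """Run every case through the system; route a random sample to a
    human as well, and log any disagreements for later review."""
    disagreements = []
    for case in cases:
        machine = system_decide(case)
        if random.random() < rate:
            human = human_decide(case)
            if human != machine:
                disagreements.append({"case": case,
                                      "human": human,
                                      "system": machine})
    return disagreements
```

The disagreement log then becomes the raw material for the human review described above; the same harness could equally compare the system’s outputs against an external baseline data set of the kind governments might publish.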

Whilst the above may appear potentially burdensome for the supplier industry, if these issues are not considered carefully there could be a massive credibility challenge, one that may prove insurmountable.

When I started my career in programming, I did so on naval command and control systems and on message handling systems. Two adages from that era continue to apply here. The first is ‘garbage in, garbage out’, which doesn’t really need any explanation. The second was more about how to ensure a technology solution was going to work: ‘People. Process. Technology.’ When a new technology or concept comes along, it is easy to lose track of the fundamentals. For all that machine learning and artificial intelligence systems can do for us, they have to be properly integrated into business processes, and the people aspects planned for and managed.

We should beware of believing or expecting these systems to do our thinking for us. They can enable us to make wiser and more informed decisions; it is for us to ensure that happens. The potential to make a massive leap forward in identifying and eradicating bias in organisational decision-making is significant. We need to seize that opportunity and use it wisely.


About Steven

Steven has extensive experience in strategic executive leadership, having led large business units at Fujitsu. He has held full operational delivery responsibility for a $1bn annual revenue business, including sales and growth, across the full service range (from consultancy and change programmes to operational IT services) for multiple clients. He has led businesses through changes in strategic direction, crisis management and transformational turnarounds, especially those delivering business-critical services to clients such as the Public Sector and National Government. Steven engages well with C-suite executives and senior stakeholders, including, in previous roles, UK Government Cabinet Ministers.

Contact

Feel free to reach out if you’d like to discuss these or related topics. Follow me on Twitter, or connect with me on LinkedIn.
