Diversity and technology - liberator or perpetuator?
This article was originally published in the 2017 Summer Edition of AQ.
In the previous AQ we looked at digitalisation and how technology was reshaping the workplace. We examined the impact of digitalisation upon the jobs market and how leadership could better adapt to deal with the new reality. One area we did not discuss, however, was the impact that technology could have upon workplace diversity, both positive and negative.
For example, according to the World Health Organisation, around a billion people have some form of disability. In Europe and North America, this figure stands at 1 in 5 people; since, for many, this disability may make them less able to work, their poverty level is, on average, twice as high as that of people without disabilities. From a purely business perspective, this is a massively untapped resource - UK-based disability charity Scope estimates that in the UK alone, if 1 million more disabled people were able to work, the UK economy could grow by 1.7%, or £45bn ($64bn).
There is therefore a very real financial incentive to engage more disabled people in work, even before one considers the moral and social benefits. When discussing disability in the workplace, one of the key topics will always be 'access', namely the extent to which those with disabilities are able to fully engage with their work, whether the barrier is physical (such as the lack of wheelchair-accessible buildings) or social, such as difficulties with speech.
This is where technology plays such a key role - advances in computing have transformed the way in which we engage with our colleagues, with remote working increasingly commonplace; VPNs, video conferencing and email have all made working from home a genuine possibility in terms of efficiency. The '9 to 5' cubicle model of work has become increasingly outmoded, and with this more inclusive thinking has come an expansion in the potential pool of viable workers.
However, the technological revolution that has empowered and enabled may not be entirely positive when it comes to perpetuating cultural diversity on a broader level. Computer algorithms, which one might assume to be above such human characteristics as racism, homophobia or misogyny, have recently been seen to exhibit these very traits.
In their paper 'Unequal Representation and Gender Stereotypes in Image Search Results for Occupations', researchers based at the University of Washington studied the occupational images displayed in Google searches, looking for inaccuracies in the nature and number of the images presented. Their results were intriguing, revealing major discrepancies - type in 'CEO' and only 11% of the images are of women, whereas women actually hold 27% of CEO positions in the U.S. (upon which the study focused).
This is just one example of a much more pervasive problem, one which will only become more pronounced as we cede more responsibility to technology, specifically to Artificial Intelligence and the algorithms that lie at its heart. One campaigner, Joy Buolamwini, an MIT graduate, has even gone so far as to start the 'Algorithmic Justice League' to highlight the extent to which facial recognition software, present in everything from camera exposure systems to law enforcement monitoring, has been programmed with the Caucasian facial type as the default setting. The worrying result is an automatic (and largely unseen) level of discrimination at the software level.
While such examples as the automatic rejection of Richard Lee's passport application 'for having his eyes closed' (a facial recognition error due to his Asian descent) might be offensive and inconvenient, the increased pervasiveness of machine learning might have a very real long-term impact on our lives. In her 2016 book 'Weapons of Math Destruction', data scientist Cathy O'Neil writes about the extent of AI usage, with selection decisions being made in the fields of employment, loans, college admissions and prison sentences. Her conclusion is that by their very structure, big data algorithms have a tendency to amplify existing prejudice, increasing inequality in a way that is pervasive and hard to track.
A recent paper by researchers based at Princeton University, published in the journal 'Science', also looked at the way in which machine learning systems might in fact be increasing instances of racial or gender bias, since they learn and interpret language from existing repositories of knowledge which might not reflect current, accepted social norms. 'We have a situation where these artificial-intelligence systems may be perpetuating historical patterns of bias that we might find socially unacceptable and which we might be trying to move away from,' said lead author Arvind Narayanan.
From a business perspective, then, the technological revolution has indeed opened up more opportunities for a broader, more diverse workforce and empowered whole new sectors of society. It has, however, also created a range of tools which need to be treated with caution, if we are to ensure that we move forward as a society, rather than reinforcing old prejudices.
Written by: Charles Adams
Posted on March 12, 2018 12:07 PM