What is AI bias – and how might it affect LGBTQ people?
In partnership with City Law Firm
By Karen Holden
As technology lawyers, we frequently draft bespoke documents and work with AI and machine-learning companies.
Because we personally oversee everything we do and give personal support, we are confident that our approach is informed, unbiased and focused. But as the world comes to rely more on AI, and as businesses adopt more technology and remove human oversight, can we be confident that the resulting advice, work or output is unbiased, accurate and workable?
As more chatbots develop, such as ChatGPT and those offered by Zendesk and Freshworks, it is important to note that AI systems can reflect the biases present in the data they are trained on. If that training data is the world wide web, the system will inevitably be drawing on conflicting, and sometimes biased, material.
Concerns about bias in AI systems, including bias against the LGBTQ community, have been raised and studied by researchers and organisations. According to an article from USC Viterbi, researchers at the USC Information Sciences Institute (ISI) studied the ConceptNet and GenericsKB knowledge bases to see whether their data was fair, and found bias in up to 38.6% of the facts used by AI. If the training data contains biased or unrepresentative information, the AI model may inadvertently learn and perpetuate those biases.
It is important that we mitigate these biases as much as possible. If the AI systems used in social media platforms and other automated decision-making are biased, several concerning outcomes could follow:
1 Discriminatory advertising: biased algorithms can produce discriminatory outcomes for LGBTQ businesses. For example, advertising might disproportionately show or promote products and services to certain demographic groups while neglecting or excluding LGBTQ businesses and their target audience.
2 Shadow banning: shadow banning is a social media platform's decision to limit the reach of a post or account by quietly suppressing its content. Biased algorithms could shadow-ban LGBTQ content disproportionately, reducing the visibility and representation of the LGBTQ community and its businesses. This can make it more challenging for LGBTQ-owned businesses to attract customers, leading to potential disadvantages in terms of customer reach, growth and revenue.
3 Targeted marketing: targeted marketing relies on demographic data and behavioural patterns to decide which consumers to reach. If the algorithms are biased, they might overlook or inaccurately categorise LGBTQ individuals, resulting in missed marketing opportunities for LGBTQ businesses and hindering their ability to reach and engage their target audience.
4 Loan and credit discrimination: automated lending and credit-scoring decisions may produce discriminatory outcomes for LGBTQ-owned businesses, their directors or individuals, leading to unequal access to capital and financial opportunities and hindering growth and sustainability.
5 Employment bias: AI systems are used at various stages of the hiring process, including CV screening and candidate evaluation. If these systems are biased, they can perpetuate discrimination against LGBTQ individuals, potentially affecting recruitment by LGBTQ-owned businesses and the employment opportunities of LGBTQ candidates.
6 Justice system: biased AI used in the criminal justice system may result in unfair profiling, affecting the LGBTQ community's experiences with law enforcement and legal proceedings. This could lead to serious consequences, including wrongful arrests or unjust sentencing.
Addressing bias in AI algorithms and ensuring the fair and equitable treatment of all individuals, including the LGBTQ community, is crucial.
There are a few things which can be done to tackle this:
1 Representative data collection: it's crucial to collect data directly from the LGBTQ community rather than relying solely on data generated by majority groups, and to conduct a thorough analysis of the training data to identify any biases or imbalances (see the first sketch after this list).
2 Inclusive and ethical AI development: it's essential to involve diverse teams with expertise in, and lived experience of, the LGBTQ community.
3 Transparent documentation: be clear and open about the development process, including data sources, training methods and any steps taken to address biases.
4 Continuous monitoring: track performance metrics across different demographic groups, regularly re-evaluate the system's impact, and address any emerging biases promptly (see the second sketch after this list).
5 External review and regulation: in the UK we should ideally encourage external review and audits of AI systems by independent organisations.
6 Ethical guidelines and standards: develop and adhere to ethical guidelines and standards for AI development that explicitly address bias and discrimination against the LGBTQ community. These guidelines should emphasise the importance of fairness, inclusivity and respect for human rights.
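As a minimal illustration of step 1, the sketch below uses the pandas library to check how well different groups are represented in a hypothetical training set. The column names, the toy data and the representation threshold are all assumptions made for this example, not a prescription for any particular system.

```python
import pandas as pd

# Hypothetical training data; the column names ("text",
# "demographic_group") are assumptions made for this sketch.
df = pd.DataFrame({
    "text": ["post 1", "post 2", "post 3", "post 4", "post 5"],
    "demographic_group": ["majority", "majority", "majority",
                          "majority", "lgbtq"],
})

# Share of each group in the training data.
shares = df["demographic_group"].value_counts(normalize=True)
print(shares)

# Flag any group falling below an (arbitrary) representation threshold.
THRESHOLD = 0.25
underrepresented = shares[shares < THRESHOLD]
if not underrepresented.empty:
    print("Underrepresented groups:", list(underrepresented.index))
```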
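For step 4, one simple monitoring check is to compare a model's favourable-outcome rate across demographic groups, a rough demographic-parity test. Again, the column names, the toy decision log and the alert threshold below are assumptions for the sketch.

```python
import pandas as pd

# Hypothetical log of automated decisions; column names are assumptions.
log = pd.DataFrame({
    "demographic_group": ["majority", "majority", "majority",
                          "lgbtq", "lgbtq", "lgbtq"],
    "approved": [1, 1, 1, 0, 1, 0],  # 1 = favourable outcome
})

# Favourable-outcome rate per group (a simple demographic-parity check).
rates = log.groupby("demographic_group")["approved"].mean()
overall = log["approved"].mean()
print(rates)

# Alert when a group's rate diverges sharply from the overall rate;
# the 20-point gap is an arbitrary threshold chosen for this sketch.
for group, rate in rates.items():
    if abs(rate - overall) > 0.20:
        print(f"Review needed: {group} rate {rate:.0%} vs overall {overall:.0%}")
```

In practice, checks like these would run over real decision logs on a regular schedule, with any alerts feeding into the human review and independent audits described in steps 4 and 5.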
It’s crucial to recognise that bias elimination is an ongoing effort, and it requires collaboration between AI developers, data scientists, ethicists, and communities affected by AI systems. By proactively addressing biases, promoting inclusivity, and ensuring transparency, we can work towards developing AI systems that treat everyone fairly and respectfully.
How can lawyers help avoid bias as the technology evolves?
We believe that, as lawyers, we can play a crucial role in addressing AI bias and its implications, and we try to do so in the following ways:
1 Compliance: as lawyers we must help ensure that AI systems and their deployment comply with relevant laws and regulations. We check that a product has considered and correctly assessed the legal landscape around data protection, privacy, discrimination and fair treatment, and we undertake this analysis before we release documents or give advice or direction.
2 Risk assessment and mitigation: we assess at every step the legal risks associated with AI bias and help develop strategies to mitigate them. We can evaluate potential liabilities arising from biased AI systems and assist in implementing risk-management frameworks to minimise legal exposure for the developers or the businesses using them. This can involve drafting policies, terms of service and consent forms that explicitly address AI bias and its potential impact.
3 Ethical and responsible AI development: as lawyers we provide guidance on ethical considerations in AI development. We can advise on best practice and help develop guidelines and policies that promote fairness, inclusivity and non-discrimination, and we can assist in incorporating ethical frameworks into AI development processes, ensuring that the resulting systems align with legal and ethical standards.
4 Advice, support and policy development: we advocate for the creation and implementation of laws and policies that address AI bias, and we strongly suggest that external regulation and audits are considered for compliance. Lawyers must engage in policy discussions, provide expert input and contribute to the development of regulations or guidelines related to AI and discrimination, and they can work with organisations and policymakers to raise awareness of the legal implications of AI bias.
5 Litigation and remedies: where AI bias leads to discriminatory outcomes or harm, we will continue to represent affected individuals and businesses in legal proceedings.
It is important for businesses, and for HR departments and managers too, to stay up to date with the evolving legal landscape around AI bias and discrimination, as this technology will have an ever-greater impact on staff, customers and businesses as it develops. As LGBTQ equality lawyers, we embrace innovation, but we exercise caution and strongly urge businesses to retain human oversight and awareness in what they do with AI. Being modern also means being clever and aware. By combining legal expertise with a deep understanding of AI technologies, we believe that lawyers can contribute to the development of responsible AI systems and help protect the rights and interests of individuals and communities affected by AI bias.