== Algorithmic biases ==
AI has become increasingly integral to facial and
voice recognition systems. These systems may be vulnerable to biases and errors introduced by their human creators; notably, the data used to train them can itself be biased. According to Allison Powell, associate professor at LSE and director of the Data and Society programme, data collection is never neutral and always involves storytelling. She argues that the dominant narrative is that governing with technology is inherently better, faster, and cheaper, and proposes instead to make data expensive, using it both minimally and valuably, with the cost of its creation factored in. Friedman and Nissenbaum identify three categories of bias in computer systems: preexisting bias, technical bias, and emergent bias. In
natural language processing, problems can arise from the
text corpus—the source material the algorithm uses to learn about the relationships between different words. Large companies such as IBM and Google, which provide significant funding for research and development, have made efforts to research and address these biases. One potential solution is to create documentation for the data used to train AI systems (a sketch of such documentation follows).
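A minimal sketch of what such dataset documentation might look like, in the spirit of proposals such as "datasheets for datasets"; the field names and the example dataset are illustrative, not a published schema:

```python
# Illustrative dataset documentation ("datasheet") as a small data structure.
from dataclasses import dataclass, field

@dataclass
class DatasetDatasheet:
    name: str
    motivation: str                 # why and by whom the dataset was created
    collection_process: str         # how the data was gathered
    known_biases: list[str] = field(default_factory=list)
    recommended_uses: list[str] = field(default_factory=list)
    discouraged_uses: list[str] = field(default_factory=list)

sheet = DatasetDatasheet(
    name="example-hiring-resumes",  # hypothetical dataset
    motivation="Train a resume-screening model.",
    collection_process="Resumes submitted between 2010 and 2020.",
    known_biases=["Gender imbalance reflecting historical hiring patterns."],
    recommended_uses=["Research on bias measurement and mitigation."],
    discouraged_uses=["Automated hiring decisions without human review."],
)
print(sheet.known_biases)
```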
Process mining can be an important tool for organizations seeking to comply with proposed AI regulations: it can identify errors, monitor processes, and uncover potential root causes of improper execution, among other functions. However, the current landscape of fairness in AI also has limitations, owing to the intrinsic ambiguities in the concept of discrimination at both the philosophical and legal levels.
== Racial and gender biases ==
Bias can be introduced through historical data used to train AI systems. For instance,
Amazon terminated its use of AI hiring and recruitment tools because the algorithm favored male candidates over female ones. The system had been trained on data collected over a 10-year period that came mostly from male candidates; the algorithm learned this biased pattern from the historical data and predicted that such candidates were the most likely to succeed in getting the job. The recruitment decisions made by the AI system therefore turned out to be biased against female and minority candidates (the sketch below illustrates the mechanism).
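The mechanism generalizes: a model fit to skewed historical outcomes will assign weight to a demographic feature (or its proxies) even when the underlying qualification is evenly distributed. A minimal sketch with synthetic data, not Amazon's system or data:

```python
# How historically biased labels become learned bias (synthetic illustration).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
gender = rng.integers(0, 2, n)        # 0 = female, 1 = male
skill = rng.normal(0.0, 1.0, n)       # identically distributed across genders

# Historical hiring decisions favored men regardless of skill:
hired = (skill + 1.5 * gender + rng.normal(0.0, 1.0, n)) > 1.0

model = LogisticRegression().fit(np.column_stack([skill, gender]), hired)
print("weight on skill :", round(model.coef_[0][0], 2))
print("weight on gender:", round(model.coef_[0][1], 2))  # nonzero: bias learned
```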
The performance of facial recognition and computer vision models may vary based on race and gender: facial recognition algorithms made by Microsoft, IBM, and Face++ all performed significantly worse on darker-skinned women, and facial recognition has been shown to be biased against those with darker skin tones. AI systems may likewise be less accurate for black people, as was the case in the development of an AI-based
pulse oximeter that overestimated blood oxygen levels in patients with darker skin, causing issues with their
hypoxia treatment. In 2015, controversy erupted when Google Photos labeled photos of two Black people as "Gorillas". Such systems are often able to detect the faces of white people easily while failing to register the faces of people who are black, which has led some U.S. states to ban police use of facial recognition software. These biases arise because AI systems learn from the data they are given, and that data often fails to represent all groups: a facial recognition system tested only on white people, for example, will find it much harder to interpret the facial structures and skin tones of other races and ethnicities. Biases often stem from the training data rather than the
algorithm itself, notably when the data reflects past human decisions. A 2020 study that reviewed voice recognition systems from Amazon, Apple, Google, IBM, and Microsoft found that they had higher error rates when transcribing black people's voices than white people's.
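Disparities like these are surfaced by disaggregated evaluation: computing error rates per demographic group rather than a single aggregate number. A minimal sketch with hypothetical pass/fail outcomes (real speech evaluations use word error rate, but the grouping logic is the same):

```python
# Disaggregated evaluation: per-group error rates instead of one overall score.
from collections import defaultdict

# (group, transcription_correct) pairs - hypothetical evaluation results.
results = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", True),
]

totals, errors = defaultdict(int), defaultdict(int)
for group, correct in results:
    totals[group] += 1
    errors[group] += not correct

for group in sorted(totals):
    print(f"{group}: error rate {errors[group] / totals[group]:.0%}")
# group_a: 25%, group_b: 50% - an aggregate score would hide the gap.
```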
Bias is harder to eliminate in healthcare AI, since diseases and conditions can affect different races and genders differently. This can create confusion, as an AI may base its decisions on statistics showing that patients of a given gender or race are more likely to develop certain problems. This can be perceived as bias because each patient is a different case, and the AI is making decisions based on the group into which it is programmed to place that individual. It raises the question of what should count as a biased decision in the distribution of treatment: while diseases and injuries are known to affect genders and races differently, there is debate about whether it is fairer to incorporate this knowledge into healthcare treatments or to examine each patient without it. Certain tests for diseases, such as breast cancer, are recommended to some groups of people over others because those groups are more likely to contract the disease in question; if an AI applies these statistics to each patient, it could be considered biased. In the justice system, AI can exhibit bias against black people, labeling black court participants as high-risk at a much higher rate than white participants. AI also often struggles to identify racial slurs and decide when they need to be censored, failing to distinguish when a word is used as a slur from when it is used culturally. The
COMPAS program has been used to predict which defendants are more likely to reoffend. While COMPAS is calibrated for accuracy, with the same error rate across racial groups, black defendants were almost twice as likely as white defendants to be falsely flagged as "high-risk" and half as likely to be falsely flagged as "low-risk" (a tension illustrated in the sketch after this paragraph). Another example is Google's advertising system, which targeted men with ads for higher-paying jobs and women with ads for lower-paying ones. Bias in an algorithm can be hard to detect because it is often not tied to words explicitly associated with bias; for example, a person's residential area may be used to link them to a certain group. Because the laws enforcing anti-discrimination policies define discrimination in terms of specific verbiage, businesses can sometimes avoid legal action through this loophole.
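The tension between calibration and equal false-positive rates can be reproduced with a toy confusion matrix. In the sketch below, both groups have the same precision among those flagged high-risk (calibration), yet the group with the higher base rate of reoffense ends up with a far higher false-positive rate; the counts are hypothetical, not ProPublica's data:

```python
# Calibration and equal false-positive rates conflict when base rates differ.
def rates(tp, fp, tn, fn):
    ppv = tp / (tp + fp)   # P(reoffends | flagged high-risk): calibration
    fpr = fp / (fp + tn)   # share of non-reoffenders falsely flagged high-risk
    return ppv, fpr

ppv_a, fpr_a = rates(tp=60, fp=40, tn=160, fn=40)  # group A, base rate ~33%
ppv_b, fpr_b = rates(tp=30, fp=20, tn=230, fn=20)  # group B, base rate ~17%

print(f"group A: PPV={ppv_a:.2f}, FPR={fpr_a:.2f}")  # PPV=0.60, FPR=0.20
print(f"group B: PPV={ppv_b:.2f}, FPR={fpr_b:.2f}")  # PPV=0.60, FPR=0.08
```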
Large language models often reinforce gender stereotypes, assigning roles and characteristics based on traditional gender norms. For instance, they might associate nurses or secretaries predominantly with women and engineers or CEOs with men, perpetuating gendered expectations and roles. They can also harm transgender people through misclassification of gender that is misaligned with the person's identity.
== Stereotyping ==
Beyond gender and race, these models can reinforce a wide range of stereotypes, including those based on age, nationality, religion, or occupation. This can lead to outputs that unfairly generalize or caricature groups of people, sometimes in harmful or derogatory ways. Such stereotypes stem from the design of AI systems and the data on which they are trained: societal biases embedded during development, outdated datasets, and algorithmic architectures that prioritize high-ranking and majority groups over underrepresented ones. Research also identifies user feedback as a primary contributor to stereotypes within AI, since human interactions introduce bias. Additionally, the AI industry is male-dominated, composed primarily of young adult men, a lack of diversity that embeds further inequalities in AI datasets. Analyses of word embeddings show that supposedly neutral terms such as "person" or "people" are represented closer to men than to women, rather than neutrally (see the sketch following this paragraph). Since current large language models are predominantly trained on English-language data, they often present Western views as truth, while systematically downplaying non-English perspectives. As of 2024, most AI systems are trained on only 100 of the 7,000 world languages.
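A minimal sketch of such a word-embedding probe, assuming the gensim library and the publicly available GloVe vectors; it illustrates the kind of measurement involved, not the exact methodology of any particular study:

```python
# Probe gendered associations in pretrained word embeddings.
import gensim.downloader

vectors = gensim.downloader.load("glove-wiki-gigaword-50")  # public GloVe model

def gender_lean(word: str) -> float:
    # Positive values lean towards "he", negative towards "she".
    return vectors.similarity(word, "he") - vectors.similarity(word, "she")

for word in ["nurse", "secretary", "engineer", "ceo", "person"]:
    print(f"{word:>10}: {gender_lean(word):+.3f}")
```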
== Political bias ==
Language models may also exhibit political biases. Since the training data includes a wide range of political opinions and coverage, models might generate responses that lean towards particular political ideologies or viewpoints, depending on the prevalence of those views in the data. This skewing is a form of algorithmic bias: a predisposition to certain answers based on the data the model was trained on, producing answers that are not objective but lean towards one end of the political spectrum. ChatGPT, for example, has been reported to skew liberal. Users have been found to be more likely to agree with answers that coincide with their existing political beliefs, and some AI systems try to gauge the political affiliation of the user so that generated answers align with it, which can mask the models' underlying political bias.
== Dominance by tech giants ==
The commercial AI scene is dominated by
Big Tech companies, including
Alphabet Inc.,
Amazon,
Apple Inc.,
Meta Platforms,
Microsoft. Some of these players already own the vast majority of existing
cloud infrastructure and
computing power from
data centers, allowing them to entrench themselves further in the marketplace. Their dominance makes it very hard for newer companies to compete and succeed in the industry over the long run. Competition law scholars have suggested that the tech giants may be using their market power to foreclose the market to potential competitors and, in turn, charge higher prices to consumers. In light of these concerns, governments around the world have been considering and implementing laws to prevent such practices.
== Electricity consumption and carbon footprint ==
The computing resources that AI depends on are often concentrated in massive data centers, which require large amounts of energy, resulting in increased greenhouse gas emissions. A 2023 study suggests that training a large AI model can emit the equivalent of 626,000 pounds of carbon dioxide, about the same as 300 round-trip flights between New York and San Francisco.
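For scale, the comparison implies roughly a tonne of CO2 per round trip, in line with typical per-passenger estimates for that route. A quick check using the figures as given:

```python
# Back-of-the-envelope check of the flight comparison above.
total_lbs_co2 = 626_000   # reported emissions from training one large model
round_trips = 300         # New York <-> San Francisco flights
lbs_per_trip = total_lbs_co2 / round_trips
print(f"{lbs_per_trip:,.0f} lbs (~{lbs_per_trip * 0.4536 / 1000:.2f} t) per trip")
# -> 2,087 lbs (~0.95 t) per trip
```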
== Water consumption ==
In addition to carbon emissions, data centers need water to cool AI chips, which locally can lead to water scarcity and the disruption of ecosystems. Around two liters of water are needed per kilowatt-hour of energy used in a data center, and around two-thirds of data centers are built in water-scarce regions, so AI development can compete with local communities and agriculture for water. Many companies do not fully disclose the severity of their impact on water consumption, which raises ethical concerns about whether they are prioritizing affected communities or maximum profit. One solution some data centers have implemented is zero-water air-cooling systems, but these result in higher carbon emissions and increased electricity usage, forcing companies to weigh the local concern of water usage against the global concern of carbon emissions. By one estimate, a single AI query uses 16.9 mL of water, of which only 2.2 mL goes towards cooling the systems directly; the remainder, more than 85% of the total, is indirect usage, which illustrates how much of AI's water footprint lies upstream.
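A minimal sketch of that split, using the per-query figures as given (attributing the indirect share chiefly to electricity generation is an assumption made for illustration):

```python
# Direct vs. indirect water use for one AI query, from the figures above.
total_ml = 16.9            # reported total water footprint per query
direct_cooling_ml = 2.2    # water used on-site to cool the hardware
indirect_ml = total_ml - direct_cooling_ml  # upstream, e.g. power generation

print(f"direct cooling: {direct_cooling_ml / total_ml:.0%}")  # -> 13%
print(f"indirect usage: {indirect_ml / total_ml:.0%}")        # -> 87%
```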
== Electronic waste ==
Another problem is the resulting electronic waste (or e-waste). This can include hazardous materials and chemicals, such as
lead and
mercury, resulting in the contamination of soil and water. To prevent the environmental effects of AI-related e-waste, better disposal practices and stricter laws may be put in place. AI can also play a positive role in mitigating environmental damage: various AI technologies can help monitor emissions and support algorithms that help companies lower them.

== Open source ==
Organizations like
Hugging Face and
EleutherAI have been actively open-sourcing AI software. Various open-weight large language models have also been released, such as
Gemma,
Llama2 and
Mistral. However, making code
open source does not make it comprehensible, which by many definitions means that the AI code is not transparent. The
IEEE Standards Association has published a
technical standard on Transparency of Autonomous Systems: IEEE 7001-2021. The IEEE effort identifies multiple scales of transparency for different stakeholders. There are also concerns that releasing AI models may lead to misuse. For example, Microsoft has expressed concern about allowing universal access to its face recognition software, even for those who can pay for it. Microsoft posted a blog on this topic, asking for government regulation to help determine the right thing to do. Furthermore, open-weight AI models can be
fine-tuned to remove safety countermeasures until the model complies with dangerous requests without any filtering. This could be particularly concerning for future AI models, for example if they gain the ability to create
bioweapons or to automate
cyberattacks.
OpenAI, initially committed to an open-source approach to the development of
artificial general intelligence (AGI), eventually switched to a closed-source approach, citing competitiveness and safety reasons.
Ilya Sutskever, OpenAI's former chief AGI scientist, said in 2023 "we were wrong", predicting that the safety reasons for not open-sourcing the most potent AI models would become "obvious" in a few years.
== Strain on open knowledge platforms ==
In April 2023,
Wired reported that
Stack Overflow, a popular programming help forum with over 50 million questions and answers, planned to begin charging large AI developers for access to its content. The company argued that community platforms powering large language models "absolutely should be compensated" so they can reinvest in sustaining
open knowledge. Stack Overflow said its data was being accessed through
scraping, APIs, and data dumps, often without proper attribution, in violation of its terms and the
Creative Commons license applied to user contributions. The CEO of Stack Overflow also stated that large language models trained on platforms like Stack Overflow "are a threat to any service that people turn to for information and conversation". Aggressive AI crawlers have increasingly overloaded open-source infrastructure, "causing what amounts to persistent
distributed denial-of-service (DDoS) attacks on vital public resources", according to a March 2025
Ars Technica article. Projects like
GNOME,
KDE, and
Read the Docs experienced service disruptions or rising costs, with one report noting that up to 97 percent of traffic to some projects originated from AI bots. In response, maintainers implemented measures such as
proof-of-work systems (a sketch appears at the end of this section) and country blocks. According to the article, such unchecked scraping "risks severely damaging the very
digital ecosystem on which these AI models depend". In April 2025, the
Wikimedia Foundation reported that automated scraping by AI bots was placing strain on its infrastructure. Since early 2024, bandwidth usage had increased by 50 percent due to large-scale downloading of multimedia content by bots collecting training data for AI models. These bots often accessed obscure and less-frequently cached pages, bypassing caching systems and imposing high costs on core data centers. According to Wikimedia, bots made up 35 percent of total page views but accounted for 65 percent of the most expensive requests. The Foundation noted that "our content is free, our infrastructure is not" and warned that "this creates a technical imbalance that threatens the sustainability of community-run platforms".
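The proof-of-work countermeasures mentioned above force each client to spend CPU time before content is served, which is cheap for an occasional human visitor but costly for a bulk crawler. A minimal hashcash-style sketch, with hypothetical parameters rather than the scheme of any particular tool:

```python
# Hashcash-style proof of work: the client must find a nonce whose hash,
# combined with a server-issued challenge, has a given number of leading
# zero hex digits. Difficulty 4 means ~16^4 (~65k) hashes on average.
import hashlib
import itertools

def solve(challenge: str, difficulty: int = 4) -> int:
    target = "0" * difficulty
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce

def verify(challenge: str, nonce: int, difficulty: int = 4) -> bool:
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = solve("example-session-token")         # expensive step, done by client
assert verify("example-session-token", nonce)  # cheap step, done by server
```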
== Transparency ==
Approaches like machine learning with
neural networks can result in computers making decisions that neither they nor their developers can explain. It is difficult for people to determine if such decisions are fair and trustworthy, potentially leading to bias in AI systems going undetected or to people rejecting the use of such systems. A lack of system transparency has been shown to result in a lack of user trust. Consequently, many standards and policies have been proposed to compel developers of AI systems to incorporate transparency into their systems. This push for transparency has led to advocacy and, in some jurisdictions, legal requirements for
explainable artificial intelligence. Explainable artificial intelligence encompasses both explainability and interpretability, with explainability relating to providing reasons for the model's outputs, and interpretability focusing on understanding the inner workings of an AI model. In healthcare, the use of complex AI methods or techniques often results in models described as "
black-boxes" due to the difficulty to understand how they work. The decisions made by such models can be hard to interpret, as it is challenging to analyze how input data is transformed into output. This lack of transparency is a significant concern in fields like healthcare, where understanding the rationale behind decisions can be crucial for trust, ethical considerations, and compliance with regulatory standards. Trust in healthcare AI has been shown to vary depending on the level of transparency provided. Moreover, unexplainable outputs of AI systems make it much more difficult to identify and detect medical error.
== Accountability ==
A special case of the opaqueness of AI is that caused by it being
anthropomorphised, that is, assumed to have human-like characteristics, resulting in misplaced conceptions of its
moral agency. This can cause people to overlook whether either human
negligence or deliberate criminal action has led to unethical outcomes produced through an AI system. Some recent
digital governance regulations, such as
EU's
AI Act, aim to rectify this by ensuring that AI systems are treated with at least as much care as one would expect under ordinary
product liability, potentially including
AI audits.
== Regulation ==
According to a 2019 report from the Center for the Governance of AI at the University of Oxford, 82% of Americans believe that robots and AI should be carefully managed. Concerns cited ranged from how AI is used in surveillance and in spreading fake content online (known as deepfakes when they include doctored video and audio generated with the help of AI) to cyberattacks, infringements on data privacy, hiring bias, autonomous vehicles, and drones that do not require a human controller. Similarly, according to a five-country study by KPMG and the
University of Queensland in Australia in 2021, 66–79% of citizens in each country believe that the impact of AI on society is uncertain and unpredictable, and 96% of those surveyed expect AI governance challenges to be managed carefully. Not only companies but also many researchers and citizen advocates recommend government regulation as a means of ensuring transparency and, through it, human accountability. This strategy has proven controversial, as some worry that it will slow the rate of innovation, while others argue that regulation produces a systemic stability better able to support innovation in the long term. The
OECD,
UN,
EU, and many countries are presently working on strategies for regulating AI and finding appropriate legal frameworks. In June 2019, the European Commission's High-Level Expert Group on Artificial Intelligence (AI HLEG) published its policy and investment recommendations for trustworthy AI, its second deliverable after the April 2019 publication of the "Ethics Guidelines for Trustworthy AI". The June AI HLEG recommendations cover four principal subjects: humans and society at large, research and academia, the private sector, and the public sector. The European Commission claims that "HLEG's recommendations reflect an appreciation of both the opportunities for AI technologies to drive economic growth, prosperity and innovation, as well as the potential risks involved" and states that the EU aims to lead on the framing of policies governing AI internationally. To prevent harm, in addition to regulation, organizations deploying AI need to play a central role in creating and deploying trustworthy AI in line with its principles, and to take accountability for mitigating the risks. In June 2024, the EU adopted the
Artificial Intelligence Act (AI Act), which entered into force on 1 August 2024. Its rules apply gradually, with the act becoming fully applicable 24 months after entry into force. People in the spotlight, especially politicians, increasingly find themselves framed in deepfake videos or images displaying inappropriate symbols or actions (The Guardian). Deepfake technology is being used not only to slander people but also for online scams: in 2022 and 2023, scammers used it to mimic the voices of executives and family members, defrauding both businesses and consumers of millions of dollars (CNN).
== Increasing use ==
AI has gradually been making its presence known throughout the world, from chatbots that seemingly have an answer for every homework question to generative AI that can create a painting of whatever one desires.
== AI welfare ==
In 2020, professor Shimon Edelman noted that only a small portion of work in the rapidly growing field of AI ethics addressed the possibility of AIs experiencing suffering. This was despite credible theories having outlined possible ways by which AI systems may become conscious, such as the
global workspace theory or the
integrated information theory. Edelman notes one exception had been
Thomas Metzinger, who in 2018 called for a global moratorium on further work that risked creating conscious AIs. The moratorium was to run to 2050 and could be either extended or repealed early, depending on progress in better understanding the risks and how to mitigate them. Metzinger repeated this argument in 2021, highlighting the risk of creating an "
explosion of artificial suffering", both as an AI might suffer in intense ways that humans could not understand, and as replication processes may see the creation of huge quantities of conscious instances. Podcast host Dwarkesh Patel said he cared about making sure no "digital equivalent of
factory farming" happens. In the
ethics of uncertain sentience, the
precautionary principle is often invoked. Several labs have openly stated that they are trying to create conscious AIs, and there have been reports from people with close access to AI systems not openly intended to be self-aware suggesting that consciousness may already have unintentionally emerged. These include
OpenAI founder
Ilya Sutskever in February 2022, when he wrote that today's large neural nets may be "slightly conscious". In November 2022,
David Chalmers argued that it was unlikely current large language models like
GPT-3 were conscious, but that he considered there to be a serious possibility that large language models may become conscious in the future.
Anthropic hired its first AI welfare researcher in 2024, and in 2025 started a "model welfare" research program that explores topics such as how to assess whether a model deserves moral consideration, potential "signs of distress", and "low-cost" interventions. According to Carl Shulman and
Nick Bostrom, it may be possible to create machines that would be "superhumanly efficient at deriving well-being from resources", called "super-beneficiaries". One reason for this is that digital hardware could enable much faster information processing than biological brains, leading to a faster rate of
subjective experience. These machines could also be engineered to feel intense and positive subjective experience, unaffected by the
hedonic treadmill. Shulman and Bostrom caution that failing to appropriately consider the moral claims of digital minds could lead to a moral catastrophe, while uncritically prioritizing them over human interests could be detrimental to humanity.
== Threat to human dignity ==
In 1976, Joseph Weizenbaum argued that AI technology should not be used to replace people in positions that require respect and care, such as judges, therapists, or police officers, because machines in such roles cannot offer genuine empathy and would leave the people they deal with feeling devalued. Pamela McCorduck counters that, speaking for women and minorities, "I'd rather take my chances with an impartial computer", arguing that there are conditions under which it would be preferable to have automated judges and police that have no personal agenda at all. Weizenbaum was also bothered that AI researchers (and some philosophers) were willing to view the human mind as nothing more than a computer program (a position now known as
computationalism). To Weizenbaum, these points suggest that AI research devalues human life. AI founder
John McCarthy objects to the moralizing tone of Weizenbaum's critique. "When moralizing is both vehement and vague, it invites authoritarian abuse", he writes.
Bill Hibbard writes that "Human dignity requires that we strive to remove our ignorance of the nature of existence, and AI is necessary for that striving."
== Liability for self-driving cars ==
As the widespread use of
autonomous cars becomes increasingly imminent, new challenges raised by fully autonomous vehicles must be addressed. There have been debates about who bears legal liability when these cars get into accidents. In one reported case in which a driverless car hit a pedestrian, the driver was inside the car but the controls were fully in the hands of the computer, creating a dilemma over who was at fault for the accident. In another incident, on March 18, 2018,
Elaine Herzberg was struck and killed by a self-driving
Uber in Arizona. In this case, the automated car was capable of detecting cars and certain obstacles in order to navigate the roadway autonomously, but it could not anticipate a pedestrian in the middle of the road. This raised the question of whether the driver, the pedestrian, the car company, or the government should be held responsible for her death. Currently, self-driving cars are considered semi-autonomous, requiring the driver to pay attention and be prepared to take control if necessary. Thus, it falls on governments to regulate drivers who over-rely on autonomous features and to make clear that these technologies, while convenient, are not a complete substitute for a driver. Before autonomous cars become widely used, these issues need to be tackled through new policies. Experts contend that autonomous vehicles ought to be able to distinguish between rightful and harmful decisions, since they have the potential to inflict harm. The two main approaches proposed for enabling smart machines to render moral decisions are the bottom-up approach, in which machines learn ethical decisions by observing human behavior without formal rules or moral philosophies, and the top-down approach, in which specific ethical principles are programmed into the machine's guidance system (the sketch below contrasts the two). Both strategies face significant challenges: the top-down technique is criticized for the difficulty of preserving certain moral convictions, while the bottom-up strategy is questioned for potentially learning unethical behavior from human activities.
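A minimal sketch contrasting the two approaches, with hypothetical rules, actions, and policy names used purely for illustration:

```python
# Top-down: explicit ethical principles are programmed in and can veto actions.
def top_down_decide(candidate_action, hard_rules):
    if all(rule(candidate_action) for rule in hard_rules):
        return candidate_action
    return "emergency_brake"  # fallback when a programmed principle is violated

# Bottom-up: behavior is learned from human driving data with no formal rules,
# so it may also inherit unethical patterns present in that data.
def bottom_up_decide(observation, learned_policy):
    return learned_policy(observation)

# Example hard rule: never take an action predicted to endanger pedestrians.
no_sidewalk = lambda action: action != "swerve_onto_sidewalk"
print(top_down_decide("swerve_onto_sidewalk", [no_sidewalk]))  # emergency_brake
```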
== Weaponization ==
Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomy. The US Navy has funded a report indicating that as military robots become more complex, greater attention should be paid to the implications of their ability to make autonomous decisions. Researchers point to programs like the Language Acquisition Device, which can emulate human interaction. On October 31, 2019, the
United States Department of Defense's Defense Innovation Board published the draft of a report recommending principles for the ethical use of AI by the Department of Defense that would ensure a human operator would always be able to look into the '
black box' and understand the kill-chain process. However, a major concern is how the report will be implemented. Some researchers state that
autonomous robots might be more humane, as they could make decisions more effectively. In 2024, the
Defense Advanced Research Projects Agency funded a program,
Autonomy Standards and Ideals with Military Operational Values (ASIMOV), to develop metrics for use by testing communities in evaluating the ethical implications of autonomous weapon systems. Research has also studied how to make autonomous systems that learn with assigned moral responsibilities: "The results may be used when designing future military robots, to control unwanted tendencies to assign responsibility to the robots." From a
consequentialist view, there is a chance that robots will develop the ability to make their own logical decisions on whom to kill and that is why there should be a set
moral framework that the AI cannot override. There has been recent outcry over the engineering of artificial intelligence weapons, which has included ideas of a
robot takeover of mankind. AI weapons do present a type of danger different from that of human-controlled weapons. Many governments have begun to fund programs to develop AI weaponry. The United States Navy recently announced plans to develop
autonomous drone weapons, paralleling similar announcements by Russia and South Korea. Due to the potential of AI weapons becoming more dangerous than human-operated weapons,
Stephen Hawking and
Max Tegmark signed a "Future of Life" petition to ban AI weapons. The message posted by Hawking and Tegmark states that AI weapons pose an immediate danger and that action is required to avoid catastrophic disasters in the near future. "If any major military power pushes ahead with the AI weapon development, a global
arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the
Kalashnikovs of tomorrow", says the petition, which includes
Skype co-founder
Jaan Tallinn and MIT professor of linguistics
Noam Chomsky as additional supporters against AI weaponry. Physicist and Astronomer Royal
Sir Martin Rees has warned of catastrophic instances like "dumb robots going rogue or a network that develops a mind of its own."
Huw Price, a colleague of Rees at Cambridge, has voiced a similar warning that humans might not survive when intelligence "escapes the constraints of biology". These two professors created the
Centre for the Study of Existential Risk at Cambridge University in the hope of avoiding this threat to human existence. Academic Gao Qiqi writes that military use of AI risks escalating military competition between countries and that the impact of AI in military matters will not be limited to one country but will have spillover effects. Gao cites the example of U.S. military use of AI, which he contends has been used as a scapegoat to evade accountability for decision-making. The discussions have addressed international humanitarian law, accountability, possible prohibitions and regulations, and the extent of human control required over AI-enabled weapons. A
summit was held in The Hague in 2023 on the issue of using AI responsibly in the military domain.
== Singularity ==
Vernor Vinge, among numerous others, has suggested that a moment may come when some or all computers will be smarter than humans. The onset of this event is commonly referred to as "
the Singularity" and is the central point of discussion in the philosophy of
Singularitarianism. While opinions vary as to the ultimate fate of humanity in the wake of the Singularity, efforts to mitigate the potential existential risks posed by AI have become a significant topic of interest in recent years among computer scientists, philosophers, and the public at large. Many researchers have argued that, through an
intelligence explosion, a self-improving AI could become so powerful that humans would not be able to stop it from achieving its goals. In his paper "Ethical Issues in Advanced Artificial Intelligence" and subsequent book
Superintelligence: Paths, Dangers, Strategies, philosopher
Nick Bostrom argues that AI has the capability to bring about human extinction. He claims that an
artificial superintelligence would be capable of independent initiative and of making its own plans, and may therefore be more appropriately thought of as an autonomous agent. Since artificial intellects need not share our human motivational tendencies, it would be up to the designers of the superintelligence to specify its original motivations. Because a superintelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its goals, many uncontrolled
unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference. However, Bostrom contended that superintelligence also has the potential to solve many difficult problems such as disease, poverty, and environmental destruction, and could help
humans enhance themselves. Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not with "common sense", an evolved human adaptation. According to
Eliezer Yudkowsky, there is little reason to suppose that an artificially designed mind would have such an adaptation. AI researchers such as
Stuart J. Russell,
Bill Hibbard,
Shannon Vallor,
Steven Umbrello and
Luciano Floridi have proposed design strategies for developing beneficial machines.

== Solutions and approaches ==