The Multiplier Effect: AI and Ethics

Part 9 of 14 in our series on generative AI and organizational dynamics

FuturePoint Digital is a research-based consultancy positioned at the intersection of artificial intelligence and humanity. We employ a human-centric, interdisciplinary, and outcomes-based approach to augmenting human and machine capabilities in pursuit of superintelligence. Our evidence-based white papers can be found on FuturePoint White Papers, while FuturePoint Conversations aims to raise awareness of fast-breaking topics in AI in a less formal format. Follow us at: www.futurepointdigital.com.

Whether the heart of humankind is good or evil is a question that countless philosophers, theologians, and writers have explored throughout history. It touches on the nature of human morality and has been a central topic in religious texts, philosophical discourses, and literary works.

In the realm of philosophy, Jean-Jacques Rousseau famously argued that humans are inherently good, a view presented in his work Discourse on Inequality. In contrast, Thomas Hobbes, in Leviathan, posited that humans are naturally selfish and driven by a ruthless struggle for self-preservation.

Theological perspectives also offer varied views. For example, in Christianity, the concept of original sin suggests a fundamental flaw or tendency towards evil in human nature, while other interpretations emphasize the potential for goodness and redemption.

Now the question extends beyond the inherent goodness or evil of human nature to include the moral compass of artificial intelligence (AI). This stems, in significant measure, from AI’s increasing ability to learn, think, decide, and act for itself, courtesy of highly sophisticated machine learning (ML) algorithms, natural language processing (NLP), and robotics.

As an illustration of the challenge: if designers and developers equip an AI platform with algorithms optimized to perform within certain guidelines at inception, to what extent can these creators guarantee that the platform won’t make undesired adjustments to its behavior as it gathers more data and accumulates more experience? This autonomous or semi-autonomous capacity is a new dynamic that separates AI from other products and services, and it carries significant implications for legal, regulatory, and ethical frameworks.

This white paper explores the ethical considerations that arise as artificial intelligence advances, focusing on the implications of AI systems' autonomous decision-making capabilities. As these systems increasingly mimic human cognitive functions, the line between programmed instructions and machine "morality" becomes blurred. We will delve into the challenges of ensuring AI systems operate within ethical boundaries, examining the responsibilities of developers, the impact of AI on societal norms, and the potential need for new legal and regulatory frameworks to govern AI behavior.

Advances in Artificial Intelligence & the Need for Dynamic Ethical Frameworks

Overview of AI Development and Its Increasing Integration into Society

Artificial Intelligence has rapidly evolved from theoretical research to a key driver of technological innovation, deeply embedding itself within the fabric of society. This evolution spans various domains, including healthcare, where AI diagnoses diseases with remarkable accuracy; transportation, with the advent of autonomous vehicles; and the digital realm, through personalized recommendations and virtual assistants. AI's integration into daily life promises efficiency and convenience but also raises significant questions about privacy, job displacement, and decision-making autonomy. The pace at which AI technologies advance and permeate diverse sectors underscores the urgency to understand and harness their potential responsibly, ensuring that they serve to augment human capabilities and improve quality of life without compromising fundamental rights and freedoms (Russell & Norvig, 2020; Tegmark, 2017; Ertel, 2018).

The Importance of Ethics in Guiding AI Development and Use

As AI systems become more sophisticated and autonomous, the ethical implications of their development and deployment cannot be overstated. Ethics serves as a crucial guide in navigating the moral dilemmas posed by AI, ensuring that technology aligns with societal values and principles such as fairness, justice, and respect for individual autonomy. Ethical considerations in AI involve scrutinizing the data these systems are trained on to avoid perpetuating biases, ensuring transparency in AI decision-making processes, and safeguarding against unintended consequences that might arise from their use. The dynamic nature of AI technologies demands equally dynamic ethical frameworks that are capable of adapting to new challenges and guiding both developers and policymakers in fostering innovations that not only push the boundaries of what is technologically possible but also uphold and promote human dignity and rights (Bostrom, 2014; Floridi & Cowls, 2019; Jobin, Ienca, & Vayena, 2019).

Understanding AI Ethics

Definition of AI Ethics

AI Ethics refers to the branch of ethics that examines the moral implications and responsibilities associated with the development, deployment, and use of artificial intelligence technologies. It encompasses a wide range of considerations, from ensuring AI systems make decisions in a manner that is fair and unbiased to protecting user privacy and enhancing transparency. AI Ethics aims to provide a guiding framework for the ethical creation and application of AI, ensuring that these technologies contribute positively to society without infringing on human rights or causing harm. At its core, AI Ethics seeks to navigate the delicate balance between leveraging the transformative potential of AI for the benefit of humanity and mitigating the risks associated with its widespread adoption (Jobin et al., 2019).

Historical Context of Ethics in Technology

The intersection of ethics and technology is not a new concern; it has been a topic of discussion since the advent of early technological innovations. Historically, ethical considerations in technology have focused on the impacts of technological advancements on society, individual privacy, and the environment, as well as on issues of equity and access. As technologies have evolved, so too have the ethical frameworks designed to address them, from the ethics of nuclear power and biotechnology to the information ethics that emerged with the digital age. The development of artificial intelligence, however, presents novel ethical challenges that are distinct in their complexity and scale. The capacity of AI systems to learn, adapt, and make decisions introduces new dimensions to ethical considerations, prompting a reevaluation of traditional ethical frameworks and the development of new approaches to address the unique challenges posed by AI (Mittelstadt et al., 2016).

The Role of AI Ethics in Contemporary AI Technologies

In the context of contemporary AI technologies, AI Ethics plays a critical role in guiding the responsible innovation and implementation of these systems. With AI's growing presence in critical sectors such as healthcare, criminal justice, and finance, ensuring these systems operate ethically is paramount. AI Ethics influences how AI systems are designed, from the initial conceptualization stage to their development and eventual deployment. It involves scrutinizing the algorithms for biases, ensuring the data used for training AI systems is representative and ethically sourced, and making the outcomes of AI decisions transparent and understandable to users. Furthermore, AI Ethics extends to the governance of AI, advocating for policies and regulations that protect public interests and promote the equitable distribution of AI benefits. As AI continues to evolve, the role of AI ethics in contemporary AI technologies will remain pivotal in steering these advancements towards outcomes that are just, equitable, and aligned with the broader goals of society (Hagendorff, 2020).

Key Ethical Principles in AI

Transparency: The Need for AI Systems to Be Understandable

Transparency in AI is essential to building trust between AI systems and their human users. It entails making the operations of AI algorithms clear and understandable, allowing individuals to grasp how decisions are made. Transparency is crucial not only for fostering trust but also for enabling accountability and facilitating informed consent. When AI systems are used in decision-making processes, from loan approvals to medical diagnoses, users and stakeholders must be able to understand the basis on which these decisions are made. Ensuring transparency involves detailing the data used, the decision-making criteria, and the logic behind AI algorithms in a way that is accessible to non-specialists (Felzmann et al., 2019).
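
To make this concrete, consider a minimal sketch of what decision-level transparency can look like in code. It assumes a deliberately simple linear scoring model; the feature names, weights, and approval threshold are hypothetical. Real systems are far more complex, but the principle of reporting each input's contribution alongside the decision is the same.

```python
# A minimal sketch of decision-level transparency for a linear scoring model.
# Feature names, weights, and the approval threshold are hypothetical.

FEATURE_WEIGHTS = {"income": 0.40, "credit_history_years": 0.35, "debt_ratio": -0.50}
THRESHOLD = 0.60

def explain_decision(applicant: dict) -> dict:
    """Return the decision plus each feature's contribution to the score."""
    contributions = {
        feature: weight * applicant[feature]
        for feature, weight in FEATURE_WEIGHTS.items()
    }
    score = sum(contributions.values())
    return {
        "approved": score >= THRESHOLD,
        "score": round(score, 3),
        "contributions": {k: round(v, 3) for k, v in contributions.items()},
    }

if __name__ == "__main__":
    # Inputs are assumed to be normalized to a 0-1 scale.
    print(explain_decision({"income": 0.7, "credit_history_years": 0.9, "debt_ratio": 0.4}))
```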

Justice and Fairness: Avoiding Bias and Ensuring Equity

Justice and fairness are foundational ethical principles that demand AI systems operate without bias and ensure equitable treatment for all individuals. Despite the best intentions, AI systems can perpetuate or even amplify societal biases if they're trained on biased data sets. This can lead to unfair outcomes in everything from job recruitment to legal sentencing. To uphold justice and fairness, developers must diligently audit AI systems for biases and implement corrective measures. This includes using diverse data sets for training, employing fairness-enhancing algorithms, and continuously monitoring AI systems to identify and rectify any instances of unfair bias (Noble, 2018).
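
As a simple illustration of what a bias audit might measure, the sketch below computes the demographic parity difference, the gap in positive-outcome rates between two groups, on synthetic data. This is one narrow metric among many, and the data and group labels here are entirely hypothetical.

```python
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Gap in positive-outcome rates between group 0 and group 1.

    A value near zero indicates parity on this one metric; it does not,
    by itself, establish that a system is fair.
    """
    return float(y_pred[group == 0].mean() - y_pred[group == 1].mean())

if __name__ == "__main__":
    rng = np.random.default_rng(seed=0)
    group = rng.integers(0, 2, size=10_000)              # synthetic group labels
    y_pred = rng.random(10_000) < (0.50 + 0.10 * group)  # group 1 favored by ~10 points
    print(f"Demographic parity difference: {demographic_parity_difference(y_pred, group):.3f}")
```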

Responsibility and Accountability: Assigning Accountability for AI Actions

As AI systems take on more autonomous roles in decision-making, pinpointing responsibility for their actions becomes increasingly complex. Responsibility and accountability in AI ethics mean that when AI systems make decisions, especially those affecting human lives, there must be clarity on who or what entity is accountable for those decisions. This principle ensures that there is a mechanism for redress when things go wrong, and helps maintain public trust in AI technologies. Implementing this principle may involve establishing clear guidelines for human oversight, developing standards for AI governance, and creating legal frameworks that define liability and accountability in the use of AI (Dignum, 2019).

Privacy: Safeguarding Personal Data

Privacy is a paramount concern in the age of AI, as these systems often rely on vast amounts of personal data to function. The ethical principle of privacy dictates that individuals' data should be protected from unauthorized access and misuse. This involves ensuring that data collection practices are transparent, consent-based, and in line with data protection laws such as GDPR. Moreover, it requires that AI systems incorporate privacy-preserving techniques, such as data anonymization and secure data storage, to protect individuals' information throughout its lifecycle (Custers et al., 2019).
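
One widely studied privacy-preserving technique is differential privacy. The sketch below illustrates the Laplace mechanism for releasing a noisy mean of sensitive values; the data and the epsilon budget are hypothetical, and production-grade implementations involve considerably more care.

```python
import numpy as np

def dp_mean(values: np.ndarray, epsilon: float, lower: float, upper: float) -> float:
    """Differentially private mean via the Laplace mechanism.

    Values are clipped to [lower, upper], bounding the sensitivity of the
    mean at (upper - lower) / n; Laplace noise scaled by sensitivity/epsilon
    is added to the true mean before release.
    """
    clipped = np.clip(values, lower, upper)
    sensitivity = (upper - lower) / len(clipped)
    noise = np.random.default_rng().laplace(0.0, sensitivity / epsilon)
    return float(clipped.mean() + noise)

if __name__ == "__main__":
    ages = np.array([23, 35, 41, 29, 52, 47, 38, 31])  # synthetic personal data
    print(f"True mean:              {ages.mean():.2f}")
    print(f"Private mean (eps=1.0): {dp_mean(ages, epsilon=1.0, lower=18, upper=90):.2f}")
```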

Beneficence: Promoting the Welfare of All Individuals

Beneficence in AI ethics refers to the imperative to use AI technologies for the good of society, enhancing the welfare of all individuals without causing harm. This principle encourages the development and deployment of AI in ways that contribute positively to human well-being, such as improving healthcare outcomes, enhancing educational opportunities, and mitigating environmental challenges. It also involves proactive measures to identify potential harms associated with AI applications and taking steps to prevent them. By prioritizing beneficence, AI developers and policymakers can ensure that AI serves as a tool for societal advancement, bringing about broad-based benefits while minimizing risks and adverse impacts (Floridi & Cowls, 2019).

Current Ethical Challenges in AI

Privacy and Surveillance: Balancing Innovation with Individual Rights

The rapid advancement of AI technologies has raised significant privacy and surveillance concerns, particularly as these systems become more adept at processing and interpreting vast amounts of personal data. While AI can offer personalized services, enhance security, and improve efficiency, it also poses risks to individual privacy if left unchecked. The proliferation of surveillance technologies, powered by AI, can lead to invasive monitoring of individuals' behaviors and movements, often without their consent or awareness. Balancing the benefits of AI-driven innovation with the need to protect individual rights requires robust legal frameworks, transparent data practices, and mechanisms for individuals to control their personal information and opt out of surveillance programs (Zuboff, 2019).

Autonomous Systems: Ethical Considerations of Decision-making Without Human Oversight

Autonomous systems, from self-driving cars to autonomous weapons, present unique ethical challenges due to their ability to make decisions independently of human oversight. The delegation of critical decision-making processes to machines raises questions about accountability, responsibility, and the moral implications of actions taken by AI. For example, in the context of autonomous vehicles, ethical dilemmas arise in programming decisions related to accident scenarios—how should an AI prioritize human lives if a collision is unavoidable? Ensuring that autonomous systems operate within ethical boundaries involves embedding ethical principles into their design, establishing clear guidelines for their deployment, and creating accountability structures for decisions made by AI (Conversation with OpenAI’s ChatGPT, March 18, 2024).

Deepfakes and Misinformation: Ethical Implications for Society and Democracy

The emergence of deepfake technology, which uses AI to create realistic but fabricated images, videos, and audio recordings, has introduced new challenges for society and democracy. Deepfakes can be used to spread misinformation, manipulate public opinion, undermine trust in media and institutions, and harass or discredit individuals. The potential for deepfakes to influence elections, incite violence, or cause diplomatic incidents by fabricating events or statements poses a significant threat to the integrity of democratic processes and social harmony. Combating the ethical implications of deepfakes and misinformation requires a multi-faceted approach, including technological solutions to detect and flag synthetic content, legal measures to punish malicious use, and media literacy campaigns to educate the public about the risks of manipulated content (Paris & Donovan, 2019).

Ethical Frameworks for AI

Overview of Existing Frameworks and Guidelines

In response to the ethical challenges posed by AI, several organizations and governing bodies have developed frameworks and guidelines to steer the responsible development and use of AI technologies. Notable among these are the OECD AI Principles and the EU Ethics Guidelines for Trustworthy AI. The OECD AI Principles emphasize values such as inclusivity, transparency, accountability, and the prioritization of human interests in AI systems. They advocate for robust AI that respects human rights and democratic values. Similarly, the EU Ethics Guidelines for Trustworthy AI outline requirements for AI systems to be lawful, ethical, and robust. These guidelines stress the importance of human oversight, privacy, transparency, diversity, and environmental well-being. Together, these frameworks aim to establish a global standard for ethical AI that promotes innovation while safeguarding human rights and dignity (Conversation with OpenAI’s ChatGPT, March 18, 2024).

Comparison of Approaches and Their Applications

While both the OECD AI Principles and the EU Ethics Guidelines share common goals, their approaches and applications reveal some differences. The OECD principles are designed to be globally applicable, providing a broad ethical foundation for AI development across member and partner countries. They are intended as a guide for policymakers to create conducive environments for trustworthy AI. On the other hand, the EU guidelines are more prescriptive, offering a detailed checklist for developers and deployers of AI within the European Union to achieve trustworthy AI. This includes specific assessments for ensuring AI systems' accountability, data governance, and societal and environmental well-being. The EU's approach is more hands-on in its aim to operationalize ethical principles into concrete practices within its jurisdiction (Jobin et al., 2019).

Critiques and Limitations of Current Frameworks

While these ethical frameworks represent significant strides toward responsible AI, they are not without critiques and limitations. One common critique is that the principles and guidelines are often high-level and abstract, lacking clear directives for implementation. This leaves room for subjective interpretation and potential inconsistency in application across different contexts and jurisdictions. Additionally, there is concern about enforceability. Without binding legal authority or mechanisms for enforcement, compliance with these frameworks may remain voluntary, potentially limiting their effectiveness in regulating AI developers and deployers who choose not to adhere to them. Finally, the rapid pace of AI innovation may outstrip the ability of these frameworks to remain relevant, necessitating continual updates and revisions to address new ethical challenges as they arise. These limitations underscore the need for ongoing dialogue, research, and collaboration to refine and strengthen ethical guidelines for AI (Hagendorff, 2020).

Implementing AI Ethics in Practice

Role of Regulation and Policy in Guiding Ethical AI Development

The role of regulation and policy is paramount in steering the ethical development and deployment of AI technologies. Governmental bodies worldwide are increasingly recognizing the need to create regulatory frameworks that not only encourage innovation but also ensure AI systems are developed in ways that are transparent, fair, and respectful of human rights. Such regulations can mandate standards for transparency, data protection, and accountability, providing a legal backbone against which AI practices can be measured. For example, the European Union’s General Data Protection Regulation (GDPR) sets a precedent for how personal data should be handled, impacting AI systems that process this data. Effective policy-making involves a delicate balance between fostering technological advancement and protecting the public from potential harms, requiring continuous dialogue between regulators, AI developers, and stakeholders (Conversation with OpenAI’s ChatGPT, March 18, 2024).

Corporate Responsibility: Case Studies of Ethical AI in Business

Corporate responsibility plays a crucial role in the ethical development and use of AI. Many businesses are now incorporating AI ethics into their corporate governance structures, recognizing that ethical AI practices are not only a regulatory requirement but also a competitive advantage. Case studies from leading technology companies, such as IBM’s commitment to "Trust and Transparency" principles in AI, illustrate how organizations can lead by example. These companies conduct ethical AI assessments, engage in transparency regarding AI decision-making processes, and invest in diverse and inclusive AI training datasets to minimize bias. By prioritizing ethical considerations, businesses not only mitigate risks but also build trust with consumers and stakeholders, demonstrating a commitment to societal well-being beyond mere compliance (Conversation with OpenAI’s ChatGPT, March 18, 2024).

Education and Awareness: Building an Ethically Aware AI Workforce

Developing an ethically aware AI workforce is essential for the practical implementation of AI ethics. This involves integrating ethics training into educational programs for computer science, data science, and related fields, as well as ongoing professional development for those already in the workforce. Educational initiatives should cover the societal impacts of AI, ethical decision-making processes, and strategies for identifying and mitigating biases in AI systems. By equipping AI professionals with the knowledge and tools to consider the ethical implications of their work, the industry can foster a culture of responsibility and accountability. Furthermore, public awareness campaigns can demystify AI technologies for the general population, empowering individuals to engage in informed discussions about the ethical use of AI in society (Fjeld et al., 2020).

Tools and Technologies Aiding in Ethical AI Development

A variety of tools and technologies have been developed to aid in the ethical development of AI. These include software libraries that help detect and mitigate bias in AI models, frameworks for transparently explaining AI decisions, and platforms that facilitate the secure and privacy-preserving use of data. For instance, tools like Google's What-If Tool and IBM's AI Fairness 360 offer developers resources to evaluate and improve their AI systems' fairness and explainability. Additionally, privacy-enhancing technologies such as differential privacy and federated learning are becoming increasingly important in developing AI systems that protect individuals' data. As these tools and technologies evolve, they provide practical means for developers to incorporate ethical considerations into every stage of AI system design and deployment, helping to bridge the gap between ethical principles and practice (Veale & Binns, 2017).
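
To give a flavor of one of these techniques, the sketch below implements the core aggregation step of federated learning (the widely used FedAvg scheme): clients share only locally trained model parameters, which are combined as a size-weighted average, so raw data never leaves the client. The toy client models and dataset sizes are hypothetical, and this is not the API of any particular library.

```python
import numpy as np

def federated_average(client_params: list, client_sizes: list) -> list:
    """FedAvg aggregation: average per-client model parameters layer by
    layer, weighted by local dataset size. Only parameters are shared."""
    total = sum(client_sizes)
    n_layers = len(client_params[0])
    return [
        sum(params[layer] * (size / total)
            for params, size in zip(client_params, client_sizes))
        for layer in range(n_layers)
    ]

if __name__ == "__main__":
    # Two hypothetical clients, each holding a tiny linear model: (weights, bias).
    client_a = [np.array([0.2, -0.1]), np.array([0.05])]
    client_b = [np.array([0.4, 0.1]), np.array([0.15])]
    global_model = federated_average([client_a, client_b], client_sizes=[100, 300])
    print(global_model)  # aggregate leans toward the larger client's parameters
```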

Future Directions for AI Ethics

Emerging Ethical Dilemmas and Areas of Concern

As AI technologies continue to evolve, new ethical dilemmas and areas of concern are emerging, requiring vigilance and proactive engagement from the global AI community. One pressing issue is the development of autonomous decision-making systems, which raises questions about moral agency and the delegation of critical decisions to machines. Another area of concern is the potential for AI to exacerbate social inequalities, as access to advanced AI technologies may become a privilege of the wealthy, further entrenching existing disparities. Additionally, the environmental impact of training large AI models has become a growing concern, highlighting the need for sustainable AI development practices. Addressing these emerging dilemmas requires a multidisciplinary approach, bringing together insights from ethics, law, technology, and social sciences to forge pathways that respect human dignity and promote a fair and equitable society (Crawford, 2021).

The Role of International Cooperation in Standardizing AI Ethics

International cooperation plays a crucial role in the standardization of AI ethics, given the global nature of AI development and deployment. The harmonization of ethical guidelines across borders can ensure consistent standards for AI systems, regardless of where they are developed or used. Organizations such as the United Nations, the European Union, and the OECD are pivotal in fostering dialogue and consensus among member states, facilitating the creation of a cohesive framework for AI ethics that respects cultural differences while upholding universal human rights. This collaborative effort can also address cross-border challenges such as data privacy, cybersecurity, and the equitable distribution of AI benefits, ensuring that AI technologies serve the global common good (Taddeo & Floridi, 2018).

Anticipating Future Challenges: Ethical Considerations for Advanced AI and Beyond

Looking ahead, the ethical considerations for advanced AI and beyond encompass scenarios that were once the realm of science fiction. The potential development of superintelligent AI systems poses existential questions about humanity's role and control over technologies that could surpass human intelligence. Ethical frameworks must evolve to address the governance of such systems, ensuring they are aligned with human values and cannot act against humanity's best interests. Moreover, the possibility of artificial general intelligence (AGI) introduces ethical considerations about the rights and status of entities with human-like cognitive abilities. As we stand on the brink of these technological frontiers, it is imperative that ethical considerations are integrated into the foundational stages of AI research and development, guiding the trajectory of AI towards beneficial outcomes for all of humanity. Preparing for these future challenges requires foresight, interdisciplinary collaboration, and a commitment to embedding ethical principles into the fabric of AI innovation (Bostrom, 2014).

Ethical Decision-Making in AI Development

Steps for Integrating Ethics into AI Project Life Cycles

Integrating ethics into the AI project life cycle is critical for ensuring that AI systems are developed and deployed responsibly. This process begins at the conceptualization stage, where ethical considerations should inform the purpose and design of the AI system. Developers and project managers should assess potential ethical impacts, including privacy risks, bias, and fairness, as part of the initial design process. During development, ethical guidelines should guide the selection of data sets and algorithms to mitigate biases and ensure transparency. Before deployment, AI systems must undergo rigorous testing to identify and address any ethical issues. Finally, post-deployment, continuous monitoring is necessary to ensure the system operates as intended and to make adjustments based on feedback and evolving ethical standards. Integrating ethics throughout the AI project life cycle requires a commitment from all team members to prioritize ethical considerations at every step (Mittelstadt, 2019).
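
In practice, some of these life-cycle checkpoints can be automated. The sketch below imagines a pre-deployment "ethics gate" in a release pipeline that blocks deployment unless bias, privacy, and transparency checks pass; the check names, thresholds, and results are hypothetical placeholders for whatever assessments a team actually runs.

```python
from dataclasses import dataclass

@dataclass
class EthicsCheck:
    name: str
    passed: bool
    detail: str

def ethics_gate(checks: list) -> bool:
    """Approve deployment only if every ethics check passed; report failures."""
    failures = [c for c in checks if not c.passed]
    for check in failures:
        print(f"BLOCKED by {check.name}: {check.detail}")
    return not failures

if __name__ == "__main__":
    checks = [
        EthicsCheck("bias_audit", passed=True,
                    detail="demographic parity difference within tolerance"),
        EthicsCheck("privacy_review", passed=True,
                    detail="training data consent-based and anonymized"),
        EthicsCheck("transparency_doc", passed=False,
                    detail="model card / decision explanation missing"),
    ]
    print("Deploy approved" if ethics_gate(checks) else "Deploy rejected")
```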

Stakeholder Engagement and Participatory Design in Ethical AI

Stakeholder engagement and participatory design are essential components of developing ethical AI systems. By involving a diverse range of stakeholders, including end-users, ethicists, community representatives, and domain experts, developers can gain a comprehensive understanding of the ethical implications of AI technologies in various contexts. Participatory design ensures that AI systems are built with a deep understanding of users' needs, values, and ethical concerns. This approach helps to identify potential harms and benefits early in the design process, allowing for the development of more inclusive, equitable, and accountable AI systems. Engaging stakeholders throughout the development process also fosters trust and transparency, critical elements in addressing ethical challenges in AI (Simonsen & Robertson, 2012).

Monitoring and Auditing AI Systems for Ethical Compliance

Continuous monitoring and auditing are vital for maintaining ethical compliance in AI systems throughout their operational lifecycle. As AI technologies learn and adapt over time, they may develop behaviors that were not anticipated during the design phase, necessitating ongoing oversight. Implementing regular audits of AI systems can help identify and mitigate emerging ethical risks, such as biases in decision-making processes or unintended privacy infringements. These audits should be conducted by independent third parties to ensure objectivity and should assess the system's adherence to established ethical guidelines and regulations. Additionally, monitoring tools that use explainable AI techniques can provide insights into the AI's decision-making process, helping to maintain transparency and accountability. Together, monitoring and auditing are crucial practices for ensuring that AI systems continue to operate within ethical boundaries and remain aligned with societal values (Raji et al., 2020).
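
A small sketch of what such ongoing monitoring might look like: predictions are logged with group membership, and live positive-outcome rates per group are compared against rates recorded at the last audit, raising an alert when drift exceeds a threshold. The baseline rates, threshold, and data here are hypothetical.

```python
import numpy as np

BASELINE_RATES = {"group_a": 0.52, "group_b": 0.49}  # rates recorded at the last audit
ALERT_THRESHOLD = 0.10  # flag any group whose rate drifts more than 10 points

def check_drift(live_predictions: dict) -> list:
    """Return alerts for groups whose live positive-outcome rate has
    drifted beyond the threshold relative to the audited baseline."""
    alerts = []
    for group, preds in live_predictions.items():
        live_rate = float(np.mean(preds))
        drift = abs(live_rate - BASELINE_RATES[group])
        if drift > ALERT_THRESHOLD:
            alerts.append(f"{group}: live rate {live_rate:.2f} drifted {drift:.2f} from baseline")
    return alerts

if __name__ == "__main__":
    live = {
        "group_a": np.array([1, 1, 1, 0, 1, 1]),  # rate 0.83 -- drifted upward
        "group_b": np.array([1, 0, 0, 1, 0, 1]),  # rate 0.50 -- within tolerance
    }
    print(check_drift(live) or ["no drift alerts"])
```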

Conclusion

Recap of the Significance of Ethics in AI

The significance of ethics in artificial intelligence (AI) cannot be overstated. As AI technologies become increasingly integrated into every aspect of our lives, the ethical considerations surrounding their development and deployment become paramount. The journey through AI ethics highlights the importance of addressing biases, ensuring privacy, upholding fairness, and maintaining transparency and accountability. Ethical AI is not just about preventing harm; it's about actively contributing to human well-being, respecting human dignity, and enhancing societal values. The discussions on stakeholder engagement, participatory design, and continuous monitoring underscore the collective effort required to navigate the ethical landscape of AI. By prioritizing ethics, we can harness the transformative potential of AI in ways that are beneficial and equitable for all (Conversation with OpenAI’s ChatGPT, March 18, 2024).

Call to Action for Businesses, Developers, Policymakers, and the Global Community

The responsibility for ethical AI extends beyond developers and technologists; it encompasses businesses, policymakers, and the global community. Businesses must adopt ethical guidelines and practices in their AI projects, emphasizing the importance of integrity and social responsibility in their operations. Developers are tasked with embedding ethical considerations into the DNA of AI systems, ensuring they are designed with fairness, accountability, and transparency in mind. Policymakers must create and enforce regulations that guide ethical AI development and use, protecting individuals and societies from potential harms. The global community, including academia, civil society, and AI users, must engage in ongoing dialogue, advocacy, and education to promote ethical standards in AI. Together, these efforts form the cornerstone of a future where AI serves humanity's best interests (Conversation with OpenAI’s ChatGPT, March 18, 2024).

Final Thoughts on the Path Forward for Ethical AI

The path forward for ethical AI is both challenging and promising. It requires a concerted, multidisciplinary approach that balances technological innovation with ethical imperatives. The future of AI ethics lies in the continuous evolution of ethical frameworks, the development of advanced tools for ethical AI implementation, and the fostering of a global culture of ethical awareness and responsibility. As we advance, it's crucial to maintain an adaptable and inclusive perspective, ensuring that the benefits of AI are distributed equitably and that no one is left behind. By embracing these principles, we can navigate the complexities of the AI landscape, steering technology towards outcomes that enhance, rather than diminish, the human condition. The journey towards ethical AI is ongoing, and it is up to us to shape its direction for the betterment of society (Conversation with OpenAI’s ChatGPT, March 18, 2024).

How might FuturePoint Digital help your organization reimagine the art of the possible with respect to new ways of working, doing, thinking, and communicating via emerging technology? Follow us at: www.futurepointdigital.com, or contact us at [email protected].

About the Author: David Ragland is a former senior technology executive and an adjunct professor of management. He serves as a partner at FuturePoint Digital, a research-based technology consultancy specializing in strategy, advisory, and educational services for global clients. David earned his Doctorate in Business Administration from IE University in Madrid, Spain, and a Master of Science in Information and Telecommunications Systems from Johns Hopkins University. He also holds an undergraduate degree in Psychology from James Madison University and completed a certificate in Artificial Intelligence and Business Strategy at MIT. His research focuses on the intersection of emerging technology with organizational and societal dynamics.

References

Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.

Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press.

Custers, B., Calders, T., Schermer, B. W., & Zarsky, T. Z. (Eds.). (2019). Discrimination and Privacy in the Information Society: Data Mining and Profiling in Large Databases. Springer.

Dignum, V. (2019). Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way. Springer International Publishing.

Ertel, W. (2018). Introduction to Artificial Intelligence (2nd ed.). Springer.

Felzmann, H., Fosch-Villaronga, E., Lutz, C., & Tamò-Larrieux, A. (2019). Transparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns. Big Data & Society, 6(1), 1-14. https://doi.org/10.1177/2053951719860542

Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., & Srikumar, M. (2020). Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-based Approaches to Principles for AI. Berkman Klein Center Research Publication No. 2020-1. https://doi.org/10.2139/ssrn.3518482

Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review. https://doi.org/10.1162/99608f92.8cd550d1

Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds and Machines, 30(1), 99-120. https://doi.org/10.1007/s11023-020-09517-8

Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389-399. https://doi.org/10.1038/s42256-019-0088-2

Mittelstadt, B., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 1-21. https://doi.org/10.1177/2053951716679679

Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press.

Paris, B., & Donovan, J. (2019). Deepfakes and Cheap Fakes: The Manipulation of Audio and Visual Evidence. Data & Society. https://datasociety.net/library/deepfakes-and-cheap-fakes/

Raji, I. D., Smart, A., White, R. N., Mitchell, M., Gebru, T., Hutchinson, B., Smith-Loud, J., Theron, D., & Barnes, P. (2020). Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 33-44). Association for Computing Machinery. https://doi.org/10.1145/3351095.3372873

Russell, S., & Norvig, P. (2020). Artificial Intelligence: A Modern Approach (4th ed.). Pearson.

Simonsen, J., & Robertson, T. (Eds.). (2012). Routledge International Handbook of Participatory Design. Routledge.

Susskind, R. E., & Susskind, D. (2015). The Future of the Professions: How Technology Will Transform the Work of Human Experts. Oxford University Press.

Taddeo, M., & Floridi, L. (2018). Regulate artificial intelligence to avert cyber arms race. Nature, 556(7701), 296-298. https://doi.org/10.1038/d41586-018-04602-6

Tegmark, M. (2017). Life 3.0: Being Human in the Age of Artificial Intelligence. Knopf.

Veale, M., & Binns, R. (2017). Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data. Big Data & Society, 4(2), 1-17. https://doi.org/10.1177/2053951717743530

Wallach, W., & Allen, C. (2009). Moral Machines: Teaching Robots Right from Wrong. Oxford University Press.

Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs.