We already have an ethical framework for AI (opinion)

For the third time in my career as an academic librarian, I am facing a digital revolution that is fundamentally and rapidly changing our information ecosystem. The first came when the Internet became widely accessible through web browsers. The second was the emergence of Web 2.0, with mobile computing and social media. The third, unfolding now, is the mainstreaming of AI, especially generative AI.
Once again, I hear both fear-based thinking and, from AI enthusiasts, claims of inevitability and scolding of critics for “resisting change.” I wish I heard more voices advocating for the benefits of AI in specific situations, acknowledging its risks in specific situations, and emphasizing how those risks can be mitigated. Academics should treat a use of AI as a tool for a specific intervention and then evaluate the ethics of that intervention.
Caution is warranted, and the burden of building trust should rest on AI developers and companies. Web 2.0 delivered on its promise of a more interactive, collaborative web experience centered on user-generated content, but that promise did not come without social costs.
In retrospect, it is arguable that Web 2.0 fails to meet the basic standard of beneficence. It has been implicated in the global rise of authoritarianism, in fostering polarization and extremism, in degrading the quality of our attention and thinking, and in a growing and serious mental health crisis and an epidemic of loneliness. The information technology sector has earned our deep skepticism. We should do everything in our power to learn from past mistakes and to prevent similar outcomes to the extent that we are able.
We need an ethical framework for evaluating uses of new information technologies, especially AI, that can guide individuals and institutions as they consider adopting, facilitating, or licensing these tools for various functions. A couple of features of AI complicate ethical analysis. The first is that an interaction with AI often persists beyond the user's initial transaction; information from that transaction can become part of the system's training set. Second, there is often little transparency about what an AI model is actually doing under the surface, which makes it difficult to evaluate. We should demand as much transparency as possible from tool providers.
Academics already have an agreed-upon set of ethical principles and processes for evaluating potential interventions. The principles laid out in the Belmont Report (“Ethical Principles and Guidelines for the Protection of Human Subjects of Research”) ground our approach to research with human subjects, and if we treat a potential use of AI as an intervention, they can be usefully applied. These principles can not only help academics make beneficial use of AI but also provide a framework of design requirements for technology developers.
The Belmont Report articulates three primary ethical principles:
- Respect for persons
- Beneficence
- Justice
Respect for persons, as it has been translated into U.S. regulation and institutional review board (IRB) practice, has multiple components, including autonomy, informed consent, and privacy. Autonomy means that individuals should have the right to control their own participation and should not be coerced into participating. Informed consent requires that people be given clear information so that they understand what they are consenting to. Privacy means that individuals should have control and choice over how their personal information is collected, stored, used, and shared.
Here are some of the questions we might ask to evaluate whether a specific AI intervention respects autonomy:
- Is it evident to users that they are interacting with AI? This becomes increasingly important as AI is integrated into other tools.
- Is it evident which content was generated by AI?
- Can users control how the AI collects and uses their information, or is the only option to forgo using the tool altogether?
- Can users access essential services without interacting with the AI? If not, their participation may be effectively coerced.
- Can users control how the AI uses the information they generate, including whether their content is used to train the model?
- Is there a risk of overdependence, especially where design elements encourage psychological dependence? In educational settings, is using an AI tool for a particular purpose likely to keep users from learning foundational skills, leaving them reliant on the model?
Regarding informed consent, is information about what the model is doing both sufficient and in a form that a person who is not a lawyer or a technical developer can understand? Users must be told what data will be collected from them and what will happen to that data.
Privacy violations occur when someone's personal data is revealed or used in unexpected ways, or when information they consider private can be inferred. Re-identification of research subjects becomes a real risk when enough data and computing power are available. Given that de-identifying data is one of the most common risk-mitigation strategies in human subjects research, and that data sets are increasingly expected to be published for the sake of research reproducibility, this is an area of ethical concern that needs attention. Privacy emphasizes that individuals should control their private information; how that information is then used brings us to the second major principle, beneficence.
Beneficence is the general principle that benefits should outweigh the risks of harm and that those risks should be mitigated as much as possible. Benefits and harms should be evaluated at multiple levels, including the individual and the systemic. The principle of beneficence also requires us to pay special attention to those who are vulnerable because they lack full autonomy, such as minors.
Even when making decisions for ourselves as individuals, we need to consider potential systemic harms. For example, some vendors offer tools that let researchers share personal information in order to generate highly personalized search results, improving research efficiency. As the tool builds a profile of the researcher, it can keep refining results with the goal of not showing what it predicts will not be useful to that researcher. This may benefit the individual researcher. But at a systemic level, if the practice becomes ubiquitous, will the boundaries between discourse communities harden? Will researchers with similar profiles be shown increasingly narrow views of the world, focused on research and perspectives much like their own, while researchers in other discourse communities are shown a different view of the world? If so, would this discourage interdisciplinary or fundamentally novel research, or exacerbate disciplinary confirmation bias? Can that risk be mitigated? We need to develop the habit of thinking about potential impacts beyond the individual and of building in mitigations.
Some uses of AI promise substantial benefits. AI genuinely has the potential to rapidly advance medicine and science, as the remarkable success of AlphaFold and its protein structure database demonstrates. There is corresponding potential for rapidly developing technologies that serve the common good, including in our struggle with the climate crisis. These potential benefits are transformative, and a good ethical framework should encourage them. The principle of beneficence does not demand the absence of risk, but it does ask us to identify uses whose benefits serve a larger purpose and to mitigate both individual and systemic risks. Risks can be reduced by improving the tools themselves, for example by preventing them from hallucinating, spreading toxic or misleading content, or giving inappropriate advice.
Questions of beneficence also require attention to the environmental impact of generative AI models. Because the models demand enormous amounts of computing power and electricity, their use taxes our collective infrastructure and contributes to pollution. When analyzing a particular use through the ethical lens of beneficence, we should ask whether the proposed use delivers enough benefit to justify the environmental harm. Arguably, using AI for trivial purposes fails the test of beneficence.
The principle of justice requires that the people and populations who bear the risks should also receive the benefits. With AI, there are major equity concerns. For example, generative AI is trained on data that embeds our current and historical biases. Models must be rigorously tested to determine whether they produce biased or misleading content. Similarly, AI tools should be carefully interrogated to ensure they do not work better for some groups than for others. Inequities affect the calculation of benefits and, depending on the use case, may make a given use unethical.
Another consideration relating to the principle of justice and AI is the issue of fair compensation and attribution. It is important that AI not undermine creative economies. Moreover, scholars are important producers of content, and citation is the coin of the realm in academia. Content creators have a right to expect that their work will be used in good faith, cited, and fairly compensated. As part of autonomy, content creators should also be able to control whether their material is used in training sets, and this should at a minimum become part of author negotiations. Similarly, the use of AI tools in research should be acknowledged in scholarly products; we need to develop standards for appropriately describing that use in methods sections and citations, and perhaps for according AI models some commonly recognized status.
I believe the Belmont Report principles outlined above are flexible enough to be applied to further rapid developments in this field. Academics have a long history of using them to guide ethical evaluations. They give us a common foundation from which to ethically promote the benefits AI can bring to the world while avoiding the kinds of harms that would poison its promise.