Monday, July 31, 2023

Utilizing Invitational Education Theory to Adopt Ethical Artificial Intelligence Policy

Should we fear being closer than expected to the singularity? As noted in Castaldo's (2023) Globe and Mail article, "'I hope I'm wrong': Why some experts see doom in AI," ChatGPT was able to recognize the intended humor and provide a comprehensive analysis of the author's meaning. What does it mean, then, that surveyed teacher education graduate students were unable either to identify the humor or to comprehend the meaning of the Fox News quip noted within the article?

Do you fear that the Terminator will be part of Earth's evolution? From a rational and ethical perspective, what needs to be done to optimize Artificial Intelligence (AI) so it does not end up seeing humans as a threat to its existence? Too often, technology is formulated on a game-theory basis, which identifies winners and losers. To counter this, should the development of ethics in computer science become an immediate moral imperative?
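To see why a pure game-theory framing is troubling, consider the minimal sketch below of a two-player zero-sum game; the payoff numbers and the "AI versus humans" labels are illustrative assumptions only, not drawn from any cited source.

    # Minimal sketch of the "winners and losers" framing: a two-player,
    # zero-sum game in which every gain for one party is a loss for the other.
    # The payoff numbers and the "AI vs. humans" labels are illustrative only.

    payoff_table = {
        ("compete",   "compete"):   (1, -1),
        ("compete",   "cooperate"): (3, -3),
        ("cooperate", "compete"):   (-2, 2),
        ("cooperate", "cooperate"): (0, 0),
    }  # each entry is (AI payoff, human payoff); the pair always sums to zero

    def outcome(ai_payoff: int, human_payoff: int) -> str:
        """Zero-sum logic reduces every outcome to a winner and a loser (or a draw)."""
        if ai_payoff > human_payoff:
            return "AI wins, humans lose"
        if ai_payoff < human_payoff:
            return "humans win, AI loses"
        return "draw"

    for (ai_move, human_move), (ai_pay, human_pay) in payoff_table.items():
        assert ai_pay + human_pay == 0  # the defining zero-sum constraint
        print(f"AI {ai_move}, humans {human_move}: {outcome(ai_pay, human_pay)}")

    # No joint strategy in this framing leaves both parties better off, which is
    # the structural gap an ethics-first (ICORT-guided) policy is meant to close.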

What happens when technology proceeds on a game-theory approach alone? The early 20th-century eugenics movement proceeded unchecked by ethics related to human rights. The pursuit of racial purity fueled the misguided belief in Aryan superiority, which became a foundation of the Nazi Party. In the aftermath of two world wars in which neither ethics nor human rights were observed, the Universal Declaration of Human Rights (1948) was developed and provided a foundation for the Universal Declaration on Bioethics and Human Rights (2005).

So, is it prudent to proactively develop a Universal Declaration of Computer Science Ethics? Would such a universal declaration encourage technological innovation while minimizing the risk of a technological singularity in which humans are perceived as a threat to the technology's existence? Should ethics in computer science become a moral imperative? If so, should there be an intergovernmental panel to examine the ramifications of artificial intelligence? Such an endeavor will require more than an addendum to an existing document.

Given its emphasis on the five institutional domains of people, places, policies, programs, and processes, can Invitational Education theory ensure that Intentionality, Care, Optimism, Respect, and Trust (ICORT) are the elements that guide people's development of ethical AI policy? Yes! Utilization of Invitational Education theory can help ensure that Intentionality, Care, Optimism, Respect, and Trust (ICORT) are integrated into the development of ethical AI policy, thereby guiding places, programs, and processes toward safe innovation and utilization of technology. Invitational Education theory focuses on creating a positive and inclusive learning environment. Applying this mindset to AI policy development therefore fosters a more ethical and human-centered approach to technology.

The following begins the discussion of how Invitational Education theory and an ICORT mindset can be utilized by stakeholders across the five institutional domains to develop a universal declaration that encourages technological innovation while minimizing the risk of a technological singularity in which humans are perceived as a threat:

Intentionality: Invitational Education encourages intentional and purposeful actions. When developing AI policies, it's essential to have a clear intention to prioritize human welfare, safety, and ethical considerations. This means setting explicit goals for the policy that prioritize the well-being of individuals and society as a whole, helping to ensure AI is designed and deployed with a positive impact in mind.

Care: As a fundamental aspect of Invitational Education, care emphasizes the importance of showing genuine concern and empathy towards individuals. Applying care to AI policy development involves considering the potential risks and benefits of AI technology for different groups and ensuring that vulnerable populations are protected. The policy should address issues like bias, fairness, and inclusivity in AI systems.

Optimism: Invitational Education seeks to instill a positive outlook on human potential. In the context of AI policy, this means fostering optimism about the possibilities of AI while being mindful of potential pitfalls. Ethical AI policy should encourage innovation and progress while maintaining a cautious approach to ensure AI is developed and used responsibly.

Respect: A core principle of Invitational Education is promoting mutual respect among all individuals involved in the learning environment. In the context of AI policy development, this involves respecting the diverse perspectives of stakeholders, including AI experts, policymakers, ethicists, and members of the public. Policymakers should engage in open dialogue and consider various viewpoints to create more robust and inclusive policies.

Trust: Building and maintaining trust is essential for an effective learning environment, and the same applies to AI policy. Invitational Education emphasizes creating a climate of trust in which individuals feel safe and confident to share their ideas and concerns. Ethical AI policy must be transparent, accountable, and trustworthy to gain public support and confidence in the technology's responsible use.

By integrating Invitational Education theory into the development of AI policies, policymakers can create more comprehensive and ethically sound guidelines that prioritize human values, safety, and well-being. The resulting policies can lead places, programs, and processes toward safe innovation and utilization of technology, potentially fostering an AI ecosystem that is beneficial and empowering for all.


To cite:

Anderson, C. J. (2023, July 31). Utilizing Invitational Education Theory to adopt ethical artificial intelligence policy [Web log post]. Retrieved from http://www.ucan-cja.blogspot.com/

References:

Anderson, C. J. (2021). Developing your students' emotional intelligence and philosophical perspective begins with I-CORT. Journal of Invitational Theory and Practice, 27, 36-50.

Purkey, W. W., & Novak, J. M. (2016). Fundamentals of invitational education (2nd ed.). International Alliance for Invitational Education. Retrieved from invitationaleducation.org

Purkey, W. W., & Siegel, B. L. (2013). Becoming an invitational leader: A new approach to professional and personal success. Humanics. Retrieved from invitationaleducation.org

Shaw, D., Siegel, B., & Schoenlein, A. (2013). The basic tenets of invitational theory and practice: An invitational glossary. Journal of Invitational Theory and Practice, 19, 30-42.

U.S. Department of Education, Office of Educational Technology. (2023). Artificial intelligence and the future of teaching and learning: Insights and recommendations. Retrieved from ed.gov

Welch, G., & Smith, K. (2014). From theory to praxis: Applying invitational education beyond schools. Journal of Invitational Theory and Practice, 20, 5-10.