By Maina Waruru
African universities need to invest in technical capacity for artificial intelligence (AI) if they are to avoid the risk of ‘data colonialism’ – whereby they are reduced to being mere consumers of data they do not own or control.
Universities need to invest in the capacity of their staff so that they can use generative AI to generate the data needed for teaching, research and administrative processes, and to ensure the continent is not left behind in the AI revolution, experts agree.
While AI brings numerous opportunities and risks for higher education, including ethical and integrity issues, embracing the technology is not negotiable. One way of doing so is by developing authentic data sets so that universities do not become net consumers of foreign-owned content.
Even more important is that universities develop their own codes of conduct for the use of AI, tailored to local realities, attendees at a webinar heard.
The event, ‘The Ethics of AI and Data in Higher Education’, took place on 10 November 2023 and was hosted by the Association of African Universities (AAU), the University of Nottingham in the United Kingdom, and the Ethical Data Initiative (EDI). It was held as part of celebrations to mark Africa Universities’ Day and the AAU’s 56th anniversary.
Code of conduct is urgent
Although AI as an industry is still in its infancy in Africa, developing a code of conduct for the design and use of AI applications is urgent because the industry is growing fast, speakers unanimously agreed.
There is also a need to address disparities in access to the technologies, for example, by first supporting the professional development of lecturers through training that enhances their skills, said Damian Eke, transitional assistant professor in the School of Computer Science at the University of Nottingham.
“African universities need to develop policies for AI but ensure that the policies also encourage collaborations with the industry and other higher education actors. This will ensure that we do not operate in silos,” Eke said.
There should also be policies on training and on data protection, he added, noting that, while data is part of AI, AI itself is not part of data. He said it is important to make lecturers and researchers understand that compromising data ownership would be risky.
AI will affect student assessment
“One area in which AI will have a big effect is in the way students are assessed, which also calls for urgent upgrading of AI infrastructure in Africa. This includes computers, internet infrastructure and electricity,” he said.
“Universities in Africa should also build their own internal repositories to preserve, protect, and control their own data. They should not allow external funding for critical AI infrastructure from the Global North, for doing so would allow the donors to control your data,” he told attendees.
With the coming of AI, students are becoming increasingly confused about the importance of integrity in education, said Professor Emma Ruttkamp-Bloem, leader of the Ethics of Artificial Intelligence Research Group at the Centre for Artificial Intelligence Research, or CAIR, in South Africa.
It was, however, the role of universities to teach them how to acknowledge generative AI sources in their work, so as not to compromise academic integrity, she said. Much is already happening in AI in Africa, but it is important for continental bodies such as the African Union (AU) to come up with policy guidelines on the technology, she urged.
It was possible to localise AI and ‘simplify’ ethical issues for students using local situations and examples, Ruttkamp-Bloem said, adding that it was important to look at AI beyond just generating knowledge. While data is the ‘power behind AI’, the ethics of AI are much wider than data ethics, and AI and data are not synonymous.
Learn to use AI responsibly
UNESCO has already paved the way for higher education by developing a policy for generative AI as well as a curriculum to guide universities, something the AU also needs to adopt as it develops the technologies. “Not many African countries have the computing power, but we must not give up, we must soldier on,” she said. Overall, Africans have to learn to use AI responsibly even as they learn the tricks of the technology.
According to Deborah Kanubala, co-organiser of the Women in Machine Learning & Data Science (WiMLDS) Accra-Ghana chapter and a machine learning researcher at Saarland University in Saarbrücken, Germany, one of the major challenges of AI in higher education lies in integrating data ethics into the curriculum.
Data ethics is also relatively new, which calls for training and awareness creation, despite funding and resource constraints that make it difficult to introduce new courses in universities, she added.
Challenging, too, is the need to rethink the evaluation of students with the arrival of AI applications such as ChatGPT, which raise the risk of plagiarism in academia. The fact that the tool can be used to perpetuate dishonesty in exams and to steal other people’s data is a big concern that makes it “become very dicey”, she said. Nevertheless, she noted that universities had no choice but to continue teaching AI tools to students.
AI will, however, come in handy for assessment in large, common-course classrooms, where students struggle even to get seats during lectures and where teaching is possible but assessment is very difficult, she noted. In traditional classrooms it is not possible to personalise learning but, with AI, this will become possible, and it will make grading students easier, she explained.
Trust is an issue
“However, AI poses challenges in learning in terms of trust where machine learning is concerned and in teaching, especially with regard to automatic grading models,” she added. In addition, many researchers are unable to trust decisions made by AI-driven personalised learning tools, a problem compounded by teaching staff’s fear of losing their jobs.
“The solution to all this lies in upscaling and upgrading tutors to become more researchers than lecturers. Finding ways of mitigation and overcoming trust as a barrier to AI technologies is an important step forward,” she recommended.
Also of concern is the fact that there is no single or ideal approach to teaching AI ethics, despite its importance in both research and learning.
Integrate data ethics
Dr Suchith Anand, programmes and outreach director at the EDI, said the AAU, the University of Nottingham and the EDI launched a campaign for data ethics in education in the summer of 2023. The campaign aims to highlight the importance of educating researchers and aspiring data practitioners about the ethical considerations involved in collecting, using, reusing, and storing data during their training.
He explained: “The campaign advocates for the integration of data ethics in all higher education courses focused on data science and research. It aims to educate the next generation of data and research professionals about their legal and ethical obligations when it comes to using, reusing, and sharing data.”
Some big tech companies are becoming more powerful than nation-states. “The world’s biggest tech companies are now richer and more powerful than most countries and the rise of AI looks set to increase their influence,” Anand remarked.
Although the use of AI in higher education holds promise for research and development, its application raises ethical challenges, including privacy, bias and academic integrity, which could leave institutions in disarray. These are the issues institutions of higher learning must consider if they wish to harness the potential of AI while mitigating the risks.