04 December 2023 | Story André Damons | Photo SUPPLIED
AI and Machine Learning Webinar 2023
The University of the Free State (UFS) held an online public lecture presented by Prof Hussein Solomon, Senior Professor in the Centre for Gender and Africa Studies at the UFS. Prof Emma Ruttkamp-Bloem, Head of the Department of Philosophy at the University of Pretoria (UP), was the respondent.

Artificial intelligence (AI) is accelerating science and has, in certain cases, made giant leaps in science possible. Together with machine learning, AI is transforming the scientific method and has already left a mark on every stage of the scientific process.

This was the response from Prof Emma Ruttkamp-Bloem, Head of the Department of Philosophy at the University of Pretoria (UP), to the question: Is AI the future of research? Prof Ruttkamp-Bloem was responding at a University of the Free State (UFS) public lecture. The online lecture, titled Is AI the future of research? Experiences of co-authoring a book with machine-generated summaries, was presented by Prof Hussein Solomon, Senior Professor in the Centre for Gender and Africa Studies at the UFS.

Prof Solomon, an expert on terrorism and the author of several books on the topic, shared his experiences of working on a machine-generated book.

Still rudimentary

In his presentation, Prof Solomon said his hybrid book, The Spectre of Islamic Terrorism: Comparative Insights: A Machine-Generated Literature Overview, was a combination of man and machine, and was most useful in providing an overview of existing literature, with hyperlinks that allowed one to dive deeper.

“It saved time going through huge volumes of literature, and was also useful since one was exposed to literature one normally does not read. But it’s still rudimentary, and the publications were only from SpringerLink, so I do not think there is a chance that it would replace human researchers for now,” said Prof Solomon.

The book came about after he was approached by Palgrave Macmillan in June 2023 to co-author or edit a book, as they regard him as an expert on terrorism. The deadline to complete the book was the end of July 2023.

“I was immediately freaked out because I still recall using a typewriter for my thesis, and to write a book in a month was just something I didn’t think was feasible. Still, I thought, why not, and immediately saw the advantage given the vast amount of literature on terrorism.”

Having settled on the title, he provided the algorithm with the designated terrorist groups: Al-Qaeda, Boko Haram, Islamic State, the Taliban, and Hezbollah. These would all later become chapters in the book.

According to Prof Solomon, he needed to convey to the algorithm that there are different groupings within the larger terrorist groups, different groups in different regions, and different spellings. “In order to ensure it was comparative, I further asked the algorithm to look at each group under the following headings: its regions, its ideology, its areas of operations, its modes of attack, the personalities and leaders involved, who funds them, and counter-terrorism responses. In this order. Now I have to admit that at this level the material generated was not as tight and as focused as I would have liked it to be.”

Got what wasn’t asked for 

In the first draft, Prof Solomon explains, the algorithm produced some chapters that were exceptionally long and others that were incredibly short. This needed to be rectified. Some articles were much weaker than others and needed to be removed, while others needed to be added. Within the chapters, articles needed to be moved around to ensure there was a common thread.

Prof Solomon says the most interesting aspect of this experience was that the algorithm gave him articles he did not ask for. For example, he never asked the algorithm for anything about the Kurdistan Workers' Party (PKK), yet it produced an article on the PKK which was so important that he decided to make it a centrepiece. He also had not asked the algorithm about issues of sexual violence, but still got articles on this topic as well.

“What I also learned is that the algorithm provided me with articles from journals I would not have looked at. I would never have looked at an article from a biology journal while researching terrorism, but it actually seemed quite profound. For example, Al-Qaeda and the Islamic State have adopted a less centralised, more dispersed organisational structure, which has been labelled a network; this has allowed both terrorist groups to spread with ease across borders and has made it considerably harder for governments to take action against them.”

The good, the bad and the ugly

Prof Ruttkamp-Bloem said in her response: “In recent decades there have been some concerns about scientific progress slowing down. But with AI and machine learning, we can expect to see orders-of-magnitude improvements in what can be accomplished, because there is a certain set of ideas that humans can computationally explore, and there is a broader set of ideas that humans with computers can address. But obviously there is a much bigger set of ideas that humans with computers plus AI can successfully tackle.”

She also talked about the good, the bad, and the ugly of AI. As for the good, she said, AI is poised to transform every industry, and there is an endless list of opportunities and advantages in using and engaging with these kinds of technologies.

“Just think of sustainable development and climate change. Think of the role AI played in the development of the COVID-19 vaccine. Think of the role AI would likely play in the future in helping to create personalised medicine, making food production and distribution more efficient, supporting law enforcement, and identifying bias.

“But because of this good, AI technologies are becoming more integrated into every layer of our lives. We also need to understand that the data AI models are trained on is human data, and humans are not perfect beings, so obviously this will have some kind of impact on what is generated,” she said.

According to Prof Ruttkamp-Bloem, the bad includes concerns about the amplification of inequality, threats to social justice and political stability, the quality and integrity of information, the transgression of privacy rights, and the right to mental integrity – a very old right that speaks to the right not to be manipulated.

“The ugly has to do with the fact that AI technology is used by large and powerful corporations – five main ones in the entire world – to support an unethical business model centred on the commodification of personal data with the core purpose of profit-making; this has been identified as surveillance capitalism.

“As for the way forward, on the one hand, consciousness and care should be taken to ensure that the text generated is reliable, accurate, and ethical. But at the same time, writers and researchers should explore, experiment with, and learn from AI writing tools in order to enhance their own writing skills and goals. These two activities must be two sides of the same coin to ensure credible and original research.”

AI poised to play significant role in the future of research

UFS Vice-Chancellor and Principal, Prof Francis Petersen, said that in partnering with AI as a co-author, Prof Solomon has forced people to consider an important and perhaps unsettling question: is AI the future of research? “While we recognise that AI is poised to play a significant role in the future of research, enhancing our ability to analyse data, to identify patterns, and to problem-solve, the full scope of AI’s potential for conducting research – in particular, research that rises to the standard of scholarly or scientific research – remains unclear.

“The future of AI holds the potential for continued advancements in various fields: science, health care, robotics, finance, and other industries. However, the impact of AI on the arts and humanities, and on fields of inquiry that are more interpretive and typically incorporate qualitative methodologies, is less clear, which understandably leaves some scholars feeling uneasy about its potential influence,” said Prof Petersen.

He went on to say that ethical and responsible AI development is of crucial importance. AI does not possess inherent ethics, but rather reflects the values embedded by its creators. As such, it is vital for AI developers, policymakers, and society to work together to address the ethical challenges raised by AI.


