Artificial intelligence (AI) regulation is a crucial topic as AI technology continues to roll out in its many forms, writes Mosa Thekiso, Acting ME (Regulatory) at Vodacom Group, who also oversees AI Regulation. We use AI every day. So, what does AI regulation mean for Vodacom, our customers and the future of innovation in Africa?

Video: Mosa Thekiso chatted to Khensani Mthombeni from External Affairs about the regulation of AI.

Research indicates that by 2030, AI could contribute up to US$15.7 trillion to the global economy, with US$1.2 trillion potentially generated in Africa alone, marking a 5.6% increase in the continent’s gross domestic product. As AI’s economic impact grows, the importance of regulating its use becomes increasingly evident. 

Essentially, AI regulation refers to the rules and frameworks governing the responsible and ethical use and deployment of AI tools and systems, particularly by companies like Vodacom. It involves weighing various factors such as socio-economic issues, consumer rights and human rights, as well as AI's role in stimulating innovation.

Striking a balance

The core idea behind AI regulation is balancing these factors to ensure that AI deployment is not harmful to users and does not compromise their autonomy. This means deploying and using AI responsibly and ethically.

The European Union (EU) has made significant strides in this area with the introduction of the new EU AI Act, which takes a risk-based approach to regulating AI. The EU is also consumer-centric in its approach, but at the same time, it aims to stimulate innovation and keep pace with advancements in the US and China.

The African continent will look to the EU and other jurisdictions for best practice, but AI regulatory frameworks here are still in development, and countries in Africa will ultimately need to take their own approaches locally. This presents a challenge for Vodacom: navigating these differing regulatory landscapes.

Impact on Vodacom and our customers

For Vodacom, responsible AI deployment is paramount. As we operate across eight markets, compliance with existing laws, even those not specifically designed for AI, is essential. 

When deploying AI-based products and services, we need to think about our engagement with the regulators. For example, when implementing big data solutions in Lesotho, we had to consider unconscious biases as raised by the country’s Central Bank.

Internally, Vodacom needs robust processes to protect consumer rights and ensure ethical AI use. 

Innovating for Africa

It is always exciting for us to go to market with new products and services, because we want to see Africans innovating for Africans. We aim to leverage AI to improve the quality of life for Africans, ensuring that AI-driven applications are adapted to local needs and contexts. This is particularly important as AI technologies offer exciting opportunities for innovation in sectors such as healthcare and education.

Even as we look to other countries for best practices, we must come back home, consider our own needs, and localise them in a meaningful way. 

Defining AI

AI encompasses a broad range of technologies, including the Internet of Things (IoT), big data, machine learning, robotics and Generative AI. However, defining AI as a legal concept has been challenging due to inconsistent scientific definitions. 

The EU AI Act describes AI as a machine-based system that operates with varying levels of autonomy and adaptiveness, capable of generating outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments. 

What is the European Union’s AI Act?

In regulating artificial intelligence, the EU AI Act aims to protect EU citizens, foster trust, promote global innovation and ensure AI safety. It applies to AI system providers marketing or using AI systems within the EU, including those outside the EU if their outputs are used within the EU.

The Act takes a staggered implementation approach and will be fully applicable 24 months after its entry into force, in August 2026, with some provisions applying earlier and others later.

Different rules for different risk levels

The EU AI Act classifies AI products and practices based on their level of risk: 

  • Prohibited practices: These include the use of subliminal techniques or techniques exploiting vulnerabilities based on age, disability or socio-economic status; biometric categorisation; and untargeted scraping of facial images for facial recognition databases. An example would be voice-activated toys that could encourage dangerous behaviour in children. 
  • High risk: This category is the main focus of the Act. AI systems that are products or safety components of products, such as medical devices or self-driving cars, are classified as high risk. They require rigorous risk management and compliance documentation. 
  • Limited risk: AI systems where transparency is crucial, such as chatbots. Users must be informed that they are interacting with AI, unless this is obvious from the context. 
  • Minimal or no risk: AI systems not falling into the above categories have minimal compliance requirements. 

Penalties and monitoring

Penalties for contravening the EU AI Act range from €7.5 million or 1.5% of global annual revenue to €35 million or 7% of global annual revenue, whichever is higher in each case, depending on the infringement and company size. 

The EU’s AI Office oversees the implementation of, and monitors compliance with, the AI Act. 
