Public Gaming International January/February 2025

The relationship between AI and cybersecurity has three dimensions, and the lotteries, betting and gambling industry is no exception: cybersecurity of AI, which covers AI standardisation and cyber tools in AI; AI in support of cybersecurity, which empowers cybersecurity defenders; and the use of AI for malicious purposes, which explores AI's potential to create new threats.

On cybersecurity of AI, the EU AI Act provides for so-called ‘conformity assessments’ to determine whether high-risk AI systems comply with the EU Regulation on horizontal cybersecurity requirements (the Cyber Resilience Act, Regulation (EU) 2024/2847). These assessments involve considering “risks to the cyber resilience of an AI system as regards attempts by unauthorised third parties to alter its use, behaviour or performance, including AI-specific vulnerabilities such as data poisoning or adversarial attacks, and risks to fundamental rights”. Companies may also choose voluntarily to comply with the cybersecurity scheme of the AI Act, provided for in Article 15 of the Act. In any case, companies must ensure that, for high-risk AI systems, the instructions for use state the level of cybersecurity provided for in the EU AI Act.

Furthermore, EU standards on AI security requirements will become crucial for companies. In the EU, the European standardisation bodies CEN and CENELEC were assigned to develop standards in support of the AI Act, with a deadline set for April 2025. In the meantime, the EU Agency for Cybersecurity (ENISA) has published a multilayer security framework for good AI cybersecurity practices (FAICP) with a step-by-step approach. It consists of three layers: the groundwork of cybersecurity, focusing on the ICT infrastructure used; AI-specific aspects, focusing on the specificities of the AI components deployed; and sectorial AI, which is specific to the sector in which AI is being used.
On AI in support of cybersecurity, a number of companies have started to implement and showcase ways in which AI can enhance cybersecurity, in four main ways: DETECTION, PREDICTION, ANALYSIS AND THREAT MITIGATION. In particular, generative AI models will be critical to enhancing security and risk management for lotteries, betting and gambling firms. AI models are rapidly transforming cybersecurity and fortifying IT defences against sophisticated attacks. Gambling-specific recommendations when using AI in security include implementing the OWASP Application Security Verification Standard (ASVS), conducting regular security testing with AI, developing educational materials with AI, implementing AI-supported multifactor authentication, and deploying DDoS (distributed denial-of-service) protection solutions. These measures enhance security, mitigate cyber risks, and safeguard user privacy and experiences, especially in online gambling. In the lotteries, betting and gambling sectors, the key now is to understand what cyber-attacks do, how to protect and defend both the customers and the organisations, and how to use AI in the digital arena.
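As an illustration of the DETECTION dimension, the sketch below shows statistical anomaly detection on platform traffic in its most minimal form. The figures and the `is_anomalous` helper are hypothetical, chosen only for illustration; a production system would use trained models over far richer features.

```python
from statistics import mean, stdev

# Hypothetical per-minute login counts for a gambling platform (illustrative data).
baseline = [52, 48, 55, 50, 49, 53, 51, 47, 54, 50]

def is_anomalous(count, history, threshold=3.0):
    """Flag a traffic sample whose z-score against recent history exceeds threshold."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return count != mu
    return abs(count - mu) / sigma > threshold

print(is_anomalous(51, baseline))   # normal load -> False
print(is_anomalous(400, baseline))  # sudden spike, e.g. a credential-stuffing burst -> True
```

The same z-score idea extends to any per-customer or per-endpoint metric; the AI-based versions referenced above replace the fixed threshold with a learned model of normal behaviour.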
Specifically, there are a number of ways to counter cyber-attacks through AI, namely:
- Network security and traffic mitigation: protect against harmful cyber activities (data trafficking, malware and phishing attacks, illegal content) via AI models specifically designed for gambling platforms
- Software security: pinpoint vulnerabilities and enhance software code
- Managed security services: enhance education, keep internal knowledge of cyber risks up to date, and implement a risk management strategy
- Human intervention, even when using AI, with penetration testing: use anticipative models to predict what attackers will target next

As such, while the AI Act sets some unnecessary burdens (for instance via the issuance of conformity certificates for high-risk AI by notified bodies) and reduces the potential for innovation in the EU, there is a positive side in providing tools, bodies and processes to defend consumers and organisations against cyber threats.

Finally, lotteries, betting and gambling operators must be aware of the growing risk of malicious use of AI. AI can itself be used for cyber-attacks, malware attacks, personal data attacks and deep-learning attacks, and riskier behaviours have emerged with the use of AI. Generative AI can also supercharge dark patterns. The lottery community, already strongly involved in the usage of AI, must of course continue to reflect on the new risks created by AI and continue its process of learning and exchanging best practices jointly with suppliers, as was done recently during the WLA/EL Cybersecurity Seminar in Marseille. Lotteries perform a valuable service to society by channelling players to legal, safe and responsible gaming, and this requires them also to stay abreast of new digital developments and incorporate them into their customer offer. The role of AI and its impact on security and responsible gaming is only at the beginning stage.
The key to effective application of AI in all these spheres is follow-up. We look forward to working together with you and the community of lottery leaders to ensure AI is integrated into our businesses for optimal positive impact!

On the use of AI for cybersecurity purposes, the EU AI Act specifically provides a risk-based approach to combating cybersecurity threats: “Cybersecurity plays a crucial role in ensuring that AI systems are resilient against attempts to alter their use, behaviour, performance or compromise their security properties by malicious third parties exploiting the system’s vulnerabilities. CYBERATTACKS AGAINST AI SYSTEMS can leverage AI-specific assets, such as training data sets (e.g. data poisoning) or trained models (e.g. adversarial attacks or membership inference), or exploit vulnerabilities in the AI system’s digital assets or the underlying ICT infrastructure. To ensure a level of cybersecurity appropriate to the risks, suitable measures, such as security controls, should therefore be taken by the providers of high-risk AI systems, also taking into account as appropriate the underlying ICT infrastructure.” (extracts from the AI Act)
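The data-poisoning risk named in the extract above can be made concrete with a deliberately simplified sketch. The scores, the toy midpoint rule and the `learn_cutoff` helper are all hypothetical, invented for illustration; real fraud models are far more complex, but the failure mode is the same.

```python
from statistics import mean

# Toy illustration of data poisoning: a fraud filter "learns" its cutoff as
# the midpoint between the mean legitimate and mean fraudulent transaction
# scores in its training data (hypothetical numbers, not a real model).
def learn_cutoff(legit_scores, fraud_scores):
    return (mean(legit_scores) + mean(fraud_scores)) / 2

legit = [10, 12, 11, 13, 9]
fraud = [80, 85, 90, 78, 82]
clean_cutoff = learn_cutoff(legit, fraud)  # midpoint of means: 47.0

# An attacker who can inject mislabelled "legitimate" samples into the
# training set drags the cutoff upward, so mid-range fraud stops being flagged.
poisoned_legit = legit + [70, 75, 72, 74, 71]
poisoned_cutoff = learn_cutoff(poisoned_legit, fraud)

suspicious_score = 60
print(suspicious_score > clean_cutoff)     # True: flagged before poisoning
print(suspicious_score > poisoned_cutoff)  # False: poisoning hid the fraud
```

This is why the Act's extract singles out the integrity of training data sets: controls on who may write to them are as much a security measure as any firewall.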
