AI governance refers to the frameworks, policies, and processes that guide the development and deployment of artificial intelligence (AI) systems to ensure they are ethical, transparent, and aligned with societal
values. It encompasses establishing rules and standards to manage AI’s impact on society, addressing issues such as bias, accountability, and safety. For instance, the European Union’s AI Act aims to ensure that AI systems are developed and used responsibly,
imposing obligations on providers and deployers of AI technologies and regulating how those systems are authorized for the EU single market. (European Council)