Artificial Intelligence (AI) is rapidly transforming economies, institutions, and everyday life, shaping how we work, make decisions, and interact with the world. While AI systems are often presented as neutral and objective, they are in fact shaped by human choices, social norms, and historical data, including deeply rooted gender biases and structural inequalities.
The Foundation for Management and Industrial Research, within the framework of the project POLICY ANSWERS: R&I Policy making, implementation and Support in the Western Balkans, invites you to a peer-learning webinar that critically explores whether AI can be a force for positive change or whether it risks reinforcing stereotypes and discrimination at unprecedented scale. Bringing together voices from industry, international development, and AI governance, the session will examine how bias enters AI systems, why invisible and scalable bias matters for both business and society, and what inclusive and responsible AI looks like in practice.
Rather than one-directional presentations, the webinar is designed as a space for shared learning and cross-sector dialogue. Through short expert inputs and a moderated discussion, participants will reflect on key challenges and opportunities related to gender equality, ethics, and power in the age of automation.
Key discussion topics include:
- How societal biases are embedded in AI systems
- The economic and social costs of biased and non-inclusive AI
- The role of policy, industry, and international organisations in shaping inclusive AI ecosystems
- Leadership responsibility and the importance of AI literacy in decision-making
- Opportunities to design fair and ethical AI, particularly in emerging and developing contexts
Who should attend?
This webinar is relevant for:
- Policymakers and public administration professionals
- Business leaders and technology practitioners
- Researchers, educators, and students
- Development professionals and civil society actors
- Anyone interested in ethical, inclusive, and responsible AI
Save your spot – register now: https://us06web.zoom.us/webinar/register/WN_Y9VQb9GzQc6feHp7_fvYow
Moderator
Gabriela Kostovska Bogoeska
POLICY ANSWERS project, Foundation for Management and Industrial Research
Speakers
- Sofche Jovanovska, General Manager, Scalefocus North Macedonia
- Aleksandar Filiposki, Project Associate, UNIDO
- Ivana Bartoletti, Global Chief Privacy and AI Governance Officer, Wipro
Agenda

11:00-11:05 – Welcome & setting the scene
Moderator: Gabriela Kostovska Bogoeska, MIR Foundation
- Welcome participants
- Purpose of the webinar and peer-learning framing
- Why gender, bias, and AI matter now
- Brief overview of the agenda and speakers

11:05-11:20 – AI is not neutral: Why this matters for business, policy, and society
Speaker: Sofche Jovanovska, General Manager, Scalefocus North Macedonia
Focus:
- AI as a reflection of societal values, not objective truth
- How gender bias and stereotypes enter AI systems
- Why invisible and scalable bias poses economic and social risks
- The Western Balkans as a strategic opportunity for “bias by design” reduction
- AI literacy as a leadership responsibility

11:20-11:35 – Inclusive AI and industrial development: A global perspective
Speaker: Aleksandar Filiposki, Project Associate, UNIDO
Focus:
- How international organisations approach AI, inclusion, and gender equality
- AI governance, capacity building, and responsible adoption
- The role of public–private collaboration in shaping fair AI ecosystems
- Translating global frameworks into practical development action

11:35-11:50 – Governing AI responsibly: Ethics, power, and gender in the age of automation
Speaker: Ivana Bartoletti, Global Chief Privacy and AI Governance Officer, Wipro
Focus:
- Power dynamics behind AI development and deployment
- Gender, democracy, and human rights in AI governance
- Why responsible AI goes beyond technical solutions
- Embedding ethics, accountability, and inclusion into decision-making

11:50-12:00 – Q&A and closing words
