Aglae is a responsible AI.
Aglae.ai adheres to Title VII of the Civil Rights Act of 1964, which prohibits discriminatory employment actions based on race, color, religion, sex, or national origin.
Specifically, it prohibits:
Hiring and Firing:
Discrimination in hiring practices or termination of employees based on any of the protected categories.
Compensation and Benefits:
Disparities in pay, benefits, or other forms of compensation due to discriminatory reasons.
Promotion and Advancement:
Denial of promotion or advancement opportunities based on protected characteristics.
Job Assignments and Work Conditions:
Discrimination in job assignments, work duties, or workplace conditions based on the protected categories.
Training and Development:
Unequal access to training programs or professional development opportunities.
Harassment:
Allowing a hostile work environment through harassment based on protected characteristics, including sexual harassment.
Retaliation:
Retaliating against an employee for filing a complaint, participating in an investigation, or opposing discriminatory practices.
AI FOR HUMANS
Energy Balance:
Every interaction counts. We promise that the energy candidates invest will be matched by real placement opportunities. Every conversation with Alpha is a meaningful step toward a professional future.
Complete Transparency:
Candidates know at all times that they are interacting with Alpha. They have control over the content and intensity of the exchanges. If Alpha isn’t enough, a human immediately takes over.
Unmatched Efficiency:
We use technology to make recruitment more efficient, faster, and more transparent than ever. Alpha guides candidates toward the most crucial conversations, maximizing every effort invested.
Diversity and Inclusion:
We believe in the power of diversity. We are committed to offering equal opportunities to everyone, without discrimination. Every candidate is welcomed with respect and consideration, regardless of their background or origins.
ETHICS COUNCIL
Our Ethics Committee brings together members from different fields of expertise, which allows us to evaluate our work on AI through a range of perspectives and professional specialties.
- Data Scientist: Loukman Eltarr. He has written numerous articles on AI and machine learning and took part in the Fairwares international workshop on equitable data and technology.
- Lead Product Manager: Julie Crauet. She has written numerous articles on user feedback and products and has taken part in meetups.
- Head of Staffing: Taina Teheiura. She has spoken about the benefits of AI in the recruitment field.
- Care & Impact Manager: Anais Turpin. She organized a customer-first hackathon at the Gojob office.
At Aglae.ai, we are committed to using AI ethically. We oppose all forms of discrimination, and our ethics committee ensures that our products and systems respect these values. Together, we’re working for responsible technology that benefits everyone.
The Ethics Committee meets once a month to review the KPIs outlined in the monitoring section. Its role is:
- To monitor the reliability and ethicality of our models and systems
- To analyze user feedback (internal use within Gojob and external use by our clients)
- To evaluate each project against the 7 identified pillars
ETHICS RULES
PILLARS | APPLICATION TO AGLAE.AI |
Human Factor and Human Control: AI systems should promote equitable societies by serving humanity and fundamental rights, without restricting or undermining human autonomy. | Focus on user feedback and create systems that enhance rather than replace human judgment (cf. the monitoring and governance & monitoring chapters). |
Robustness and Safety: Trustworthy AI requires algorithms that are safe, reliable, and robust enough to handle errors or inconsistencies throughout the lifecycle of AI systems. | Regularly test and update systems to mitigate risks and improve performance. |
Privacy and Data Governance: Citizens must have full control over their personal data, and their data should not be used against them for harmful or discriminatory purposes. | Implement transparent data policies and allow users to manage their data preferences. |
Transparency: The traceability of AI systems must be ensured. | Provide accessible information about algorithms and decision-making processes. |
Diversity, Non-Discrimination, and Fairness: AI systems should take into account the full range of human capabilities, skills, and needs, and their accessibility should be guaranteed. | Regularly review algorithms for bias and adapt them to promote fairness in job matching. |
Societal and Environmental Well-Being: AI systems should be used to support positive social developments and enhance sustainability and ecological responsibility. | Engage in partnerships that foster community development and eco-friendly practices. |
Accountability: Mechanisms should be established to ensure responsibility and accountability for AI systems and their outcomes. | Create channels for feedback and hold regular audits to assess accountability. |
FAIRNESS
Our algorithm is designed to challenge traditional approaches to the job-search process, prioritizing equity over discrimination. It evaluates candidates not on their origin, education, or career path, but on their unique potential. That is Aglae.ai's revolution in the recruitment process.
Our unique job repository has also been built to broaden candidates' possibilities, blending online repositories with our own rich database drawn from more than 400,000 contracts. With this system, our workers can access unexpected opportunities and exciting career paths without limiting themselves to their usual field.
Our algorithm also acts as a coach. After each assignment, it suggests other suitable jobs and provides training courses to help workers progress in their careers. It is a personal coach that makes sure their learning continues.
Ultimately, what makes our algorithm so impactful is its alignment with our philosophy: to have a positive impact on the world, create a fair and inclusive working environment, broaden professional horizons, and encourage constant skill development. Our algorithm is a faithful reflection of these aspirations. When AI meets human values, the dream of building a better world with technology really is within reach.
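To make this concrete, here is a minimal, illustrative sketch of a skills-based match score that deliberately ignores background attributes such as origin or education. It is not Aglae.ai's production model; the field names, weights, and coaching logic are assumptions for illustration only.

```python
# Illustrative only: a hypothetical skills-based match score that never uses
# background attributes (origin, education). Not Aglae.ai's production model.
from dataclasses import dataclass, field

@dataclass
class Candidate:
    skills: set[str]          # skills demonstrated on past assignments
    origin: str = ""          # present in the profile but never scored
    education: str = ""       # present in the profile but never scored

@dataclass
class Job:
    required_skills: set[str]
    nice_to_have: set[str] = field(default_factory=set)

def match_score(candidate: Candidate, job: Job) -> float:
    """Score in [0, 1] based only on skills overlap, never on background."""
    if not job.required_skills:
        return 0.0
    required = len(candidate.skills & job.required_skills) / len(job.required_skills)
    bonus = 0.0
    if job.nice_to_have:
        bonus = 0.2 * len(candidate.skills & job.nice_to_have) / len(job.nice_to_have)
    return min(1.0, 0.8 * required + bonus)

def suggest_next_steps(candidate: Candidate, jobs: list[Job]) -> list[tuple[Job, set[str]]]:
    """After an assignment, rank jobs and list the missing skills to train for."""
    ranked = sorted(jobs, key=lambda j: match_score(candidate, j), reverse=True)
    return [(job, job.required_skills - candidate.skills) for job in ranked]
```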
More details on the methodology can be found in the scientific paper we have produced.
TRANSPARENCY AND CITIZENSHIP
Aglae provides information to help the public understand the matching model that produces a match score, regardless of whether the model is subject to New York City law.
https://www.sciencedirect.com/science/article/pii/S266665962200018X
DATA SECURITY
Data is one of our customers' core assets. Aglae has built a strong and safe architecture in order to guarantee four essential elements of data: privacy, integrity, security, and sovereignty.
Data privacy: no one besides our customer can access their data.
Data security: we have built a separate data hosting environment (Aglae data is hosted on Microsoft Azure, whereas all Gojob data is on GCP).
Aglae is building a safe and secure data environment together with Microsoft Azure.
Strong data security processes, external audits, and penetration tests guarantee that your data is safe.
Data sovereignty: data is hosted in your country (US cloud only for US customers; European cloud only for European customers).
Data integrity: data read and write rules are set together with our customers.
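As an illustration of how such residency and access rules can be enforced, the sketch below shows a hypothetical, simplified check; the region names, roles, and rule structure are assumptions, not Aglae's actual implementation.

```python
# Illustrative only: a hypothetical enforcement of residency and read/write
# rules like those described above. Not Aglae's actual implementation.
from dataclasses import dataclass

# Each customer's data stays in its own region-specific environment (sovereignty).
RESIDENCY = {
    "us-customer-001": "azure-us",   # US customers: US cloud only
    "eu-customer-042": "azure-eu",   # European customers: European cloud only
}

@dataclass(frozen=True)
class AccessRule:
    customer_id: str
    principal: str       # who is asking (customer user, service, auditor)
    can_read: bool
    can_write: bool

# Read/write rules agreed with each customer (integrity).
RULES = [
    AccessRule("eu-customer-042", "customer-analyst", can_read=True, can_write=False),
    AccessRule("eu-customer-042", "customer-admin", can_read=True, can_write=True),
]

def authorize(customer_id: str, principal: str, write: bool, region: str) -> bool:
    """Allow access only in the customer's home region and per agreed rules."""
    if RESIDENCY.get(customer_id) != region:              # sovereignty check
        return False
    for rule in RULES:
        if rule.customer_id == customer_id and rule.principal == principal:
            return rule.can_write if write else rule.can_read
    return False                                           # default deny: privacy
```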
REGULATION
Aglae’s definition of ethics takes a crucial aspect into account: respecting regulations and laws. Therefore, we commit to complying with current laws and continuously ensure that we remain in compliance over time.
- Our legal texts of reference
- GDPR
Aglae is compliant with European Union General Data Protection Regulations (GDPR) as applied to Aglae and Gojob, and supports customers’ own compliance programs through product features, integration, and configuration options, as required by our customers.
- EU-U.S. DPF
Aglae complies with the EU-U.S. Data Privacy Framework (EU-U.S. DPF), the UK Extension to the EU-U.S. DPF, and the Swiss-U.S. Data Privacy Framework (Swiss-U.S. DPF) as set forth by the U.S. Department of Commerce.
- CCPA
Aglae is compliant with the California Consumer Privacy Act (CCPA).
- OFCCP
Aglae supports record-keeping standards established by the Office of Federal Contract Compliance Programs (OFCCP).
- New York City Bias Audit
New York City Local Law 144 of 2022 regulates “automated employment decision tools.” Aglae is compliant with Law 144 (an illustrative sketch of the impact-ratio calculation such audits report appears at the end of this section).
- Title VII and Responsible AI in Employment
Aglae uses AI techniques and other methods that mitigate bias against any individual based on their protected class: race, color, religion, sex, or national origin.
- Our certifications
At Aglae, we use external bias audits to build trust with stakeholders, clients, and the public, while demonstrating our commitment to transparency and fairness.
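For context, bias audits of score-based tools under Local Law 144 typically report impact ratios comparing scoring rates across demographic categories. The sketch below shows that calculation in a minimal form; the category labels and scores are hypothetical placeholders, not audit data or Aglae's audit code.

```python
# Illustrative only: the kind of impact-ratio calculation a Law 144 bias audit
# reports for a score-based tool. Categories and scores are hypothetical.
from statistics import median

def impact_ratios(scores_by_category: dict[str, list[float]]) -> dict[str, float]:
    """Scoring rate = share of candidates scoring above the overall median;
    impact ratio = category scoring rate / highest category scoring rate."""
    all_scores = [s for scores in scores_by_category.values() for s in scores]
    cutoff = median(all_scores)
    scoring_rates = {
        category: sum(s > cutoff for s in scores) / len(scores)
        for category, scores in scores_by_category.items()
    }
    best = max(scoring_rates.values())
    return {c: (rate / best if best else 0.0) for c, rate in scoring_rates.items()}

# Hypothetical usage with placeholder scores:
ratios = impact_ratios({
    "group_a": [0.72, 0.65, 0.81, 0.58],
    "group_b": [0.69, 0.74, 0.60, 0.63],
})
print(ratios)  # ratios close to 1.0 indicate comparable scoring rates
```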
RELATED RESOURCES
- AI
AI in service of recruitment by Henri Bouxin (Head of Data Aglae.ai & Gojob)
AI in service of humanity by Nicolas Boutin (CTO Aglae.ai & Gojob)
AI, a subject of reflection by Pascal Lorne (CEO & founder of Aglae.ai & Gojob)
No-code to augment the employee, not replace them, by Thibaut Watrigant
Webinar: Generative AI in Service of Temporary Employment Agencies and Recruitment Firms
- Fairness
Ethics research & papers