PhD on Fairness of AI Software Systems (0,8 – 1,0 fte)

Short Description

The next generation of enterprise applications is quickly becoming AI-enabled, providing novel functionality with unprecedented levels of automation and intelligence. As we recover, reopen, and rebuild, it is time to rethink the importance of trust: it has never been more tested, or more valued, in leaders and in each other. Trust is the basis for connection, and it is all-encompassing: physical, emotional, digital, financial, and ethical. What used to be a nice-to-have is now a must-have; a principle is now a catalyst; a value is now invaluable.
Are you an enthusiastic and ambitious researcher with a completed master’s degree in a field related to machine learning (Computer Science, AI, or Data Science) or in Electrical Engineering with an affinity for AI and deep learning? Does the idea of working on real-world problems and with industry partners excite you? Are you passionate about using trustworthy AI methods for the next generation of auditing processes, which are increasingly AI-enabled and data-driven? And are you interested in delivering new tools to ascertain the fairness of the next generation of AI software?

We are recruiting a Ph.D. candidate who will develop and validate novel concepts, methods, and tools for monitoring, auditing, and fostering fairness of AI software systems and trial them with industrial partners who work with Deloitte.

Job Description

This vacancy falls under the auspices of the JADE lab, the data/AI engineering and governance research unit of the Jheronimus Academy of Data Science (JADS), and Deloitte. In particular, this position is associated with JADE’s ROBUST program on Auditing for Responsible AI Software Systems (SAFE-GUARD), which is financed under the NWO LTP funding scheme with Deloitte as the key industry partner.

The overall objective of SAFE-GUARD is the auditing of AI software, which can be refined into the following more elaborate goal: “Explore, develop, and validate novel auditing theories, tools, and methodologies that can monitor and audit whether AI applications adhere to fairness (no bias), explainability and transparency (easy to explain), robustness and reliability (delivering the same results under various execution environments), respect for privacy (respecting the GDPR), and safety and security (no vulnerabilities).”

Deloitte’s deep industrial involvement will balance rigour with relevance, ascertain fit with societal requirements and trends, and support validation through industrial case studies.

Scientific Challenge
Application software developers cannot train, test, and deploy AI models independent of socio-political, ethical, cultural, and personal context. At the same time, data is not objective: it inherently reflects pre-existing social and cultural biases, implying that AI (and AI-induced applications) may lead to unintended negative consequences and inequitable outcomes in practical settings. This project will develop novel methodologies, including techniques and tools, that can be exploited during audit activity to detect AI software bias, predict the conditions under which it arises, and recommend ways to triage it. These capabilities are essential when developing trustworthy AI software.

Job Requirements

Candidates should:
●    Have an MSc in Mathematics, Statistics, Computer Science, Computer Engineering, AI, or a related discipline;
●    Have a strong interest in data engineering and governance, machine-learning and deep-learning;
●    Have excellent programming skills; be highly motivated, rigorous, and disciplined when developing algorithms and software to high quality standards;
●    Have good technical understanding of the statistical models used in data science and machine learning;
●    Have knowledge of, or a willingness to familiarize themselves with, current research into machine learning for software engineering trustworthiness evaluation;
●    Have a commitment to develop algorithms that analyze Big Data from software-defined infrastructures as well as AI application code;
●    Be a fast learner, autonomous and creative; show dedication and be hard-working;
●    Possess good communication capabilities and be an efficient team worker;
●    Be fluent in English, both spoken and written.

Full job description

Please find the full job description (in Dutch) on the website of TiU.

Information and application

Applications can only be submitted through the online portal of TiU (see button below). Applications via regular email will not be considered.

Apply via TiU

Do you want to do cool stuff that matters?

We do cool stuff that matters, with data. The Jheronimus Academy of Data Science (JADS) is a unique cooperation between Eindhoven University of Technology (TU/e) and Tilburg University (TiU). At JADS, we believe that data science can provide answers to society’s complex issues. We provide innovative educational programs, data science research, and support for business and society. With a team of lecturers, students, scientists, and entrepreneurs from a wide range of sectors and disciplines, we work on creating impact with data science. We do this by connecting people, sectors, and industries: over the past five years we have worked with 300+ organizations on data-related projects. Our main drivers? Doing cool stuff that matters with data. Our location at the former monastery Mariënburg in Den Bosch houses a vibrant campus fully dedicated to data science.

At JADS, you work in an ambitious team of professionals to meet the challenges of tomorrow together. We do cool stuff that matters. We provide talent opportunities for both scientific staff and support staff. JADS is a human-centered organization; we pay attention to your personal development and value a good work-life balance.

More about working at JADS
