IBM announced the fourth (2019) edition of the Science for Social Good program, designed to address social and humanitarian challenges with data and AI.
In 2016, we launched the Science for Social Good initiative at IBM Research as a way of addressing social and humanitarian challenges with data and artificial intelligence (AI). A collaboration between our top scientists and engineers, social change organizations, and fellows from universities, our program seeks to incubate novel solutions to some of the most pressing issues facing humanity.
Since then, we have executed 28 projects, from understanding disease epidemics, to creating antimicrobial peptides, to modeling hate speech, to developing cognitive counselors that guide people out of poverty. We’ve done so by relying on more than 110 of our researchers who have volunteered their unique skills, expertise and passions to these projects. We’ve also contributed 47 scientific papers and awarded 36 Social Good student fellowships. It has been quite a journey.
We are pleased to announce today the fourth (2019) edition of our program. This year’s projects include:
Partner: IBM Watson Health. Opioid abuse is among the deadliest population health crises in the United States. In most cases, the addiction stems from a prescription. Understanding the patterns of addiction, learning evidence-based guidelines for responsible prescribing and creating early warning systems are instrumental in battling the epidemic. The team will couple advanced machine learning and causal inference methods with the wealth of IBM Watson Health data to develop insights and make them available to providers, payers and public health officials.
Partner: CityLink Center. The team will aim to model paths out of poverty for clients of CityLink Center, a non-profit integrated social service provider in Cincinnati, Ohio. In particular, the team will explore modeling social services such as one-on-one counseling sessions and group classes as time-stamped events. Using a unique longitudinal dataset, the team will develop a causal model of events and outcomes such as employment, wellness, education and housing that reveals potential transition probabilities and expected times to transition – measures that are meaningful to CityLink Center for operational planning and for providing insights to counselors.
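To make the idea concrete, here is a minimal sketch (not CityLink Center’s actual model or data) of estimating transition probabilities and expected times to an outcome from time-stamped event sequences, treating the sequences as an absorbing Markov chain. The state labels and client records below are invented for illustration.

```python
import numpy as np

# Hypothetical service/outcome states; labels are illustrative only,
# not CityLink Center's actual taxonomy or data.
states = ["intake", "counseling", "job_training", "employed"]
idx = {s: i for i, s in enumerate(states)}
n = len(states)

# Synthetic, time-ordered event sequences for three clients.
sequences = [
    ["intake", "counseling", "job_training", "employed"],
    ["intake", "counseling", "counseling", "job_training", "employed"],
    ["intake", "job_training", "employed"],
]

# Count observed transitions between consecutive events.
counts = np.zeros((n, n))
for seq in sequences:
    for a, b in zip(seq, seq[1:]):
        counts[idx[a], idx[b]] += 1

# Treat "employed" as an absorbing outcome; the other states are transient.
# Normalize the transient rows into transition probabilities.
transient = [idx[s] for s in states if s != "employed"]
P = counts[np.ix_(transient, range(n))]
P = P / P.sum(axis=1, keepdims=True)

# Expected number of steps to reach "employed" from each transient state,
# via the fundamental matrix N = (I - Q)^-1 of an absorbing Markov chain.
Q = P[:, transient]
N = np.linalg.inv(np.eye(len(transient)) - Q)
expected_steps = N @ np.ones(len(transient))
print(dict(zip([states[i] for i in transient], expected_steps)))
```

On this toy data the model estimates three expected steps from intake to employment; a real model would also use the time stamps to estimate expected durations, not just step counts.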
Partner: Cures Within Reach for Cancer. A substantial body of evidence suggests that hundreds of off-patent drugs well known for treating non-cancer indications could also be useful for treating cancer. The team will take a systematic approach to finding and evaluating all of the evidence on these non-cancer generic drugs. Using natural language processing techniques, the team will analyze the scientific literature to uncover the preclinical and early clinical research on these drugs being tested as cancer treatments. Since there are thousands of relevant publications, and this number is continuously growing, the team will go beyond just unearthing the publications. They will also develop models to automatically capture key information about each paper, such as the type of cancer studied, the type of study conducted, and the nature of the evidence reported. The eventual goal is to identify the most promising drug repurposing candidates, which can then be tested in large-scale randomized controlled clinical trials.
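As a toy illustration of the kind of information capture described above, and not the team’s actual method, the sketch below tags a synthetic abstract with a cancer type and a study type using hand-written keyword lexicons; a real system would use trained NLP models and curated ontologies.

```python
# Toy keyword lexicons; real lexicons would come from curated ontologies,
# and both lists here are invented for illustration.
CANCER_TYPES = ["melanoma", "glioblastoma", "breast cancer", "colorectal cancer"]
STUDY_TYPES = {
    "in vitro": "preclinical (in vitro)",
    "mouse": "preclinical (in vivo)",
    "xenograft": "preclinical (in vivo)",
    "phase i": "early clinical",
    "phase ii": "early clinical",
    "case report": "clinical observation",
}

def annotate(abstract: str) -> dict:
    """Tag an abstract with the cancer types and study types it mentions."""
    text = abstract.lower()
    cancers = [c for c in CANCER_TYPES if c in text]
    studies = sorted({label for kw, label in STUDY_TYPES.items() if kw in text})
    return {"cancer_types": cancers, "study_types": studies}

abstract = ("Metformin suppressed tumor growth in a mouse xenograft model "
            "of colorectal cancer.")
print(annotate(abstract))
```

Even this crude tagger shows why structured annotations matter: once each paper carries fields like cancer type and evidence level, candidates can be ranked and filtered at scale.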
Lighter-skinned individuals have the highest risk of developing skin cancer, but the mortality rate for African-Americans in the United States is much higher, primarily due to misdiagnosis. In a recent international melanoma detection challenge, machine-learning methods achieved superhuman performance in melanoma detection, and it is important that past disparities not be propagated into learned models. The team will develop new methods for making AI-based skin cancer diagnosis models relevant for all populations of the world.
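One simple diagnostic behind such work is to slice a model’s accuracy by skin-tone group and look for gaps. The sketch below does this on invented data; the labels, predictions, and two-group attribute are illustrative only.

```python
import numpy as np

# Synthetic evaluation data: 1 = melanoma, 0 = benign. The "group" array
# is a stand-in for a skin-tone attribute (e.g. a binned Fitzpatrick type).
y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 0, 0])
group  = np.array(["light", "light", "light", "light",
                   "dark", "dark", "dark", "dark"])

def accuracy_by_group(y_true, y_pred, group):
    """Per-group accuracy; a gap between groups flags a disparity to address."""
    return {g: float((y_pred[group == g] == y_true[group == g]).mean())
            for g in np.unique(group)}

print(accuracy_by_group(y_true, y_pred, group))
```

In this contrived example the model is perfect on one group and poor on the other, exactly the kind of gap that aggregate accuracy hides and that fairness-aware training aims to close.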
It is well known that machine-learning models can achieve high accuracy on various tasks, but accuracy alone is not a strong enough criterion to earn users’ trust, especially for high-stakes decision making. Several other criteria are also important, including explainability, fairness, robustness to dataset shift, and robustness to adversarial examples. The team will aim to develop benchmarking datasets, baseline models, and a contest for machine-learning researchers to evaluate their models on all five of these criteria. The project may utilize the Python open-source Adversarial Robustness Toolbox and AI Fairness 360 toolkit.
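As one concrete example of the adversarial-robustness criterion, the sketch below applies the fast gradient sign method (FGSM) to a hand-picked toy logistic-regression model in plain NumPy; the weights and inputs are invented, and a project like this would more likely rely on a library such as the Adversarial Robustness Toolbox rather than hand-rolled attacks.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A toy, fixed logistic-regression "model" (weights chosen by hand,
# not trained); real benchmarks would evaluate full submitted models.
w = np.array([2.0, -1.0])
b = 0.0

def predict(x):
    return int(sigmoid(w @ x + b) >= 0.5)

# Fast gradient sign method: perturb x in the direction that increases
# the cross-entropy loss, whose input gradient is (sigmoid(w@x+b) - y) * w.
def fgsm(x, y, eps):
    grad = (sigmoid(w @ x + b) - y) * w
    return x + eps * np.sign(grad)

x = np.array([0.3, 0.1])        # w @ x = 0.5 > 0, so predicted class is 1
x_adv = fgsm(x, y=1, eps=0.4)   # a small perturbation flips the prediction
print(predict(x), predict(x_adv))
```

The point of the benchmark is exactly this: a model can score well on clean accuracy yet change its answer under a tiny, targeted perturbation, so robustness must be measured alongside accuracy, explainability, and fairness.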