About us

At IBM Research Europe in Dublin, Ireland, our open approach to research, coupled with the deep curiosity of our researchers, creates breakthrough, client-focused outcomes in domains such as IoT/Digital Twin, AI Security, Privacy, Healthcare, and Cloud. Together, we define and test new technologies on real business problems, discovering new growth opportunities and contributing to the success of IBM platforms and solutions.

IBM Research Europe - Dublin 10 Year Anniversary

IBM is celebrating the 10th anniversary of its research laboratory in Dublin, Ireland.
Join Dublin Lab Director Dr. Ruoyi Zhou and many other distinguished guests from IBM, the Irish government, industry and academia on 3 November 2021 for the virtual anniversary celebration.

Artificial Intelligence

1. AI Applications

Research in AI Applications brings AI at scale to customers in the sectors of the Internet of Things, asset management, portfolio management, supply chain, and sustainability. The team works closely with the AI Applications business unit to innovate traditional business solutions with new AI approaches that help customers work more efficiently, predict problems early, optimize their operations, and prepare for the future of work, Industry 4.0, and 5G.

Automating AI for IoT and Digital Twins
Automating AI techniques for IoT applications is key to scaling with the exponential growth of data created by the widespread adoption of IoT to monitor our factories, buildings, and cities. Automating AI for IoT requires a good understanding of both the AI approaches and the respective IoT domain in order to explain, predict and optimize causalities with Digital Twins. We are working on neuro-symbolic AI approaches that combine domain knowledge in the form of semantic knowledge graphs with machine learning and optimization to enable data scientists and domain experts to automate the creation of Digital Twins. This solution is at the core of products like the Maximo Data Dictionary, which integrates the data from the various services in Maximo for asset management, monitoring and anomaly prediction to automate analytics across applications and support humans in operating large IoT systems more efficiently.
News article: Robots for small-batch production: digital twins create opportunities
Video: A neuro-symbolic AI approach to anomaly diagnosis for robot-based manufacturing settings
Publication: Materializing the Promises of Cognitive IoT: How Cognitive Buildings Are Shaping the Way.
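To make the neuro-symbolic pattern concrete, here is a minimal sketch, not the Maximo Data Dictionary itself: a small semantic knowledge graph records which sensors monitor an asset, and that domain knowledge drives the automatic creation of per-sensor anomaly detectors. The tiny ontology, the names, and the synthetic data are all illustrative assumptions.

```python
import numpy as np
from rdflib import Graph, Namespace, RDF
from sklearn.ensemble import IsolationForest

EX = Namespace("http://example.org/twin#")
g = Graph()
# Domain knowledge: pump1 is an asset monitored by two sensors.
g.add((EX.pump1, RDF.type, EX.Asset))
g.add((EX.temp1, EX.monitors, EX.pump1))
g.add((EX.vib1, EX.monitors, EX.pump1))

# Query the graph for every sensor attached to the asset and give each one
# its own learned anomaly detector, automating part of the twin's setup.
rows = g.query(
    "SELECT ?s WHERE { ?s <http://example.org/twin#monitors> "
    "<http://example.org/twin#pump1> }")
detectors = {}
for (sensor,) in rows:
    history = np.random.normal(size=(500, 1))      # stand-in for real telemetry
    detectors[sensor] = IsolationForest().fit(history)

reading = np.array([[4.2]])                        # an unusually large reading
print({str(s): int(d.predict(reading)[0]) for s, d in detectors.items()})  # -1 = anomaly
```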

Space and Lease Optimization for Future of Work
COVID-19 has drastically changed our work environment in a very short time, with lasting changes for the future of work. Companies had to plan their response to social distancing constraints and assess how work-from-home policies impact their building portfolios. The team is working on space and lease optimization solutions that tackle these problems for IBM TRIRIGA. One big challenge is to account for the many customer-specific constraints, e.g. particular organizational structures, which teams should sit close together, various space type requirements, and flexible workplace arrangements like work from home or hot desking. The team is using knowledge graph technologies, building on the Data Dictionary, to automatically derive optimization models that are solved in CPLEX.
Publication: Optimal Seat Allocation Under Social Distancing Constraints
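As a hedged illustration of the kind of model such a pipeline generates, the sketch below formulates a toy seat-allocation problem with docplex, the Python API for CPLEX. The layout, adjacency pairs, and objective are invented for the example; the real system derives its constraints from the knowledge graph.

```python
from docplex.mp.model import Model

seats = range(6)
adjacent = [(0, 1), (1, 2), (3, 4), (4, 5)]   # seat pairs closer than the distancing limit

m = Model(name="social_distance_seating")
use = m.binary_var_dict(seats, name="use")    # 1 if a seat is occupied

# Social distancing: two adjacent seats may not both be occupied.
for a, b in adjacent:
    m.add_constraint(use[a] + use[b] <= 1)

# Objective: seat as many people as possible.
m.maximize(m.sum(use[s] for s in seats))

sol = m.solve()   # requires a local CPLEX installation
if sol:
    print("occupied seats:", [s for s in seats if sol.get_value(use[s]) > 0.5])
```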

AI for Geospatial Applications
Albert Einstein introduced the concept of the space-time continuum in his famous theory of special relativity. At IBM Research, we develop AI-backed solutions for processes that contain both spatial and temporal dependencies. Examples include forecasting air pollution in a city, managing food production systems, and supply chain optimization. We develop a variety of machine learning solutions that respond to the specifics of given industry requirements and the complex interplay of data at different scales. The methods we use include: developing machine learning models that simultaneously consider spatial and temporal dependencies; using physics models to train AI models; and applying transfer learning techniques based on the unique geospatial attributes of a region.
We applied these solutions to the aquaculture industry to help farmers make better decisions and improve productivity. This management approach, termed precision aquaculture, is founded on a set of disparate, interconnected sensors deployed within the marine environment to monitor, analyze, interpret and provide decision support for farm operations. Realizing precision aquaculture depends on IoT technologies to empower management in a chaotic environment subject to the vagaries of oceans and weather. The fundamental goal is to transition the aquaculture industry from ad hoc decision making based on heuristics and intuition to real-time informed decisions backed by AI insights and IoT connectivity.
News article: Sustainable fish farming? Prove it
Video: GAIN Project: Precision Salmon
Publication: Using deep learning to extend the range of air pollution monitoring and forecasting
Publication: Precision aquaculture
Publication: Data Driven Insight into Fish Behaviour and their use for Precision Aquaculture
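A minimal sketch of the first method family named above, a model that treats spatial and temporal dependencies jointly: a small ConvLSTM in Keras maps a short history of 2-D fields to the field at the next step. The shapes, layer sizes, and random data are illustrative, not those of any deployed model.

```python
import numpy as np
import tensorflow as tf

T, H, W = 8, 16, 16   # history length and grid size
x = np.random.rand(32, T, H, W, 1).astype("float32")   # toy past pollution fields
y = np.random.rand(32, H, W, 1).astype("float32")      # field at the next time step

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(T, H, W, 1)),
    tf.keras.layers.ConvLSTM2D(16, kernel_size=3, padding="same"),  # joint space-time features
    tf.keras.layers.Conv2D(1, kernel_size=1),                       # per-cell forecast
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=1, verbose=0)
print(model.predict(x[:1]).shape)   # (1, 16, 16, 1)
```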

2. Scaling & Automating AI

We aim to leverage AI to automate complex parts of the AI pipeline while dealing with large-scale streaming data from disparate sources, developing robust AI systems. The team pushes the boundaries of these algorithms further by developing AutoML algorithms and pipelines that go directly from data to decision.

Automated Decision Optimization
Automated Decision Optimization (AutoDO), i.e. going from raw data to a decision model for computing the optimal decision policy, is a challenging and important problem. It has received considerable attention in recent years, especially in the context of games (e.g., Atari games, AlphaGo). Despite the progress achieved, the problem remains difficult in many instances. We build on recent advances in Automated Machine Learning (AutoML) and focus our work on generating an optimized workflow that produces a decision model from input tabular data (as well as time series data) representing particular instantiations of the target decision problem.
More specifically, we consider offline reinforcement learning (RL) based approaches and seek to develop a novel optimization framework for RL (also known as AutoRL) that automatically selects optimal decision pipelines and their corresponding hyperparameters. Our proposed framework combines ideas from limited discrepancy search, multi-fidelity optimization as well as Bayesian optimization.
Publication: Searching for Machine Learning Pipelines Using a Context-Free Grammar.
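The sketch below illustrates the AutoRL idea in its simplest possible form: a joint search over discrete pipeline choices and their hyperparameters, scored by an offline evaluation routine. Plain random sampling stands in for the limited discrepancy search, multi-fidelity and Bayesian optimization described above, and evaluate_offline is a hypothetical placeholder.

```python
import random

SEARCH_SPACE = {
    "algorithm": ["fitted-q", "cql", "behavior-cloning"],
    "discount":  [0.9, 0.95, 0.99],
    "hidden":    [32, 64, 128],
}

def evaluate_offline(pipeline, dataset):
    # Hypothetical placeholder: a real system would train on the logged data
    # and score the resulting policy with an off-policy estimator.
    random.seed(str(sorted(pipeline.items())) + str(len(dataset)))
    return random.random()

def random_search(dataset, budget=20):
    best, best_score = None, float("-inf")
    for _ in range(budget):
        candidate = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
        score = evaluate_offline(candidate, dataset)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

print(random_search(dataset=[0] * 1000))
```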

Incremental Learning
The widespread use of sensor and IoT devices is generating huge volumes of time series data in industries such as finance, energy, medicine and manufacturing. However, high-frequency sensor data is vastly untapped, and analytics are usually performed on aggregated data. While many applications in these domains can be studied with aggregated data, certain use cases like predictive maintenance or very short term forecasting require high-frequency data. In this project, we look at the technical challenges in time series and stream management for extreme-scale computing, which includes high-frequency (e.g. second or sub-second) data and/or high-volume data (in the order of petabytes).
As part of this effort, aspects of time series summarization techniques, federated learning and edge computing, motif extraction, and incremental machine learning algorithms will be explored in depth. For more information, please visit: https://www.more2020.eu/.
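As a small illustration of the incremental-learning aspect, the sketch below updates a model batch by batch with scikit-learn's partial_fit API instead of retraining on the full history; the data and model choice are assumptions made for the example.

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

model = SGDRegressor()
rng = np.random.default_rng(0)

for step in range(100):                # 100 incoming mini-batches of sensor data
    X = rng.normal(size=(64, 4))       # 64 new readings, 4 features each
    y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + rng.normal(scale=0.1, size=64)
    model.partial_fit(X, y)            # update in place, old batches never revisited

print(model.coef_.round(2))            # converges towards the true weights
```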

3. Human Centered AI

While our work on automating AI seeks to leverage the power of AI to handle computationally expensive tasks and explore potential solutions, our work in Human Centered AI acknowledges that automation can only go so far in real-world tasks, where human input and domain knowledge become vital. We aim to enable humans and automated solutions to work together to solve real-world problems.

Interactive AI
Interactive AI aims to elicit and consume feedback, enabling interactive and adaptive learning. This work focuses on developing novel algorithms that enable users to negotiate with AI systems to reach a common objective through a combination of explainability, preference elicitation, adaptation and control. The novelty of our approach lies in engaging the human not just as a user of the AI system, but as an active contributor to the solution. It requires exploring new learning and influence paradigms, where the user can inject input into the system. We have focused on developing solutions for recommender systems supporting recommendation critiquing and personalization. We recently extended these efforts to support user modifications to machine learning models, along with the ability to compare models using interpretability mechanisms that go beyond accuracy.
Publication: User Driven Model Adjustment via Boolean Rule Explanations.
Publication: What Changed? Interpretable Model Comparison.
Publication: IRF: A Framework for Enabling Users to Interact with Recommenders through Dialogue
Publication: Where can my career take me? Harnessing dialogue for interactive career goal recommendations.
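A toy sketch of recommendation critiquing, one of the interaction styles described above: the user reacts to a suggestion and the system folds that feedback into its preference model before recommending again. The items, features, and update rule are invented for illustration.

```python
import numpy as np

items = {                       # feature vectors: [spiciness, price, rating]
    "A": np.array([0.9, 0.8, 0.7]),
    "B": np.array([0.2, 0.3, 0.6]),
    "C": np.array([0.5, 0.9, 0.9]),
}
prefs = np.array([0.5, 0.5, 0.5])   # current estimate of the user's preferences

def recommend():
    return max(items, key=lambda name: items[name] @ prefs)

print("first suggestion:", recommend())

# Critique: "too expensive" lowers the weight on the price feature, and the
# next recommendation reflects the updated preference model.
prefs[1] -= 0.4
print("after critique:  ", recommend())
```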

Future of Computing

The Dublin location is a driver for realizing the future of computing. Below you can find a sample of our projects towards building the next generation of computing by combining bits, neurons, and qubits.

European Funded Programs

  • EVOLVE: As data becomes the centre of innovation in the modern economy and society, we face new challenges and limitations. Although tremendous progress has been made over the past several years on increasing productivity for data processing over commodity systems and providing new services with Big Data and Cloud technologies, the projected data deluge brings businesses, consumers, and society in general to a new frontier. At the centre of EVOLVE lies an advanced HPC-enabled testbed that can process unprecedented dataset sizes and deal with heavy computation, while allowing shared, secure, and easy deployment, access, and use.
  • The dReDBox project aspires to innovate the way we build datacenters today, shifting to pooled, disaggregated components instead of monolithic, tightly integrated ones. By doing so, dReDBox has the ambition to deliver significantly improved levels of utilization, scalability, reliability and power efficiency, in both conventional cloud and edge datacenters.
  • UniServer aims to develop a universal system architecture and software ecosystem for servers targeting cloud data-centers as well as upcoming edge-computing markets. UniServer will realize its goal by greatly improving the energy efficiency, performance and dependability of the current state-of-the-art micro-servers, while enhancing the corresponding system software. This will be achieved by exposing the intrinsic hardware-heterogeneity, caused by process variations, to the system-software and enhancing it with new margin/fault-aware runtime and resource management policies.

Open-Source Projects

  • ThymesisFlow is a hardware/software co-designed prototype for hardware disaggregation of compute resources on POWER9 processor systems. It supports disaggregation of memory, where compute nodes borrow memory from other nodes in the network, and is based on OpenCAPI. The disaggregated memory that a node borrows is mapped to a specific range of addresses in the node's physical address space and is dynamically hotplugged into its Linux operating system. Application developers need not modify their software: disaggregated memory is automatically visible to the entire system and usable as if it were local memory.
  • Datashim is a Kubernetes framework that is popular with machine learning researchers. The European Bioinformatics Institute, for example, is evaluating Datashim in conjunction with Kubeflow, an open-source framework for computational workflows. Datashim enables Kubernetes pods to transparently access remotely hosted storage (for example, COS buckets) as if it were present locally on the Kubernetes cluster. It also comes with a built-in cache to speed up read/write operations on frequently accessed files.

Other Projects

  • Project Photoresist is about accelerating the discovery of new materials to create products that address global sustainability challenges. Semiconductors are core to much of the technology we use today, and in 2020 they became the subject of regulatory scrutiny. With our end-to-end AI-powered workflow, we were able to scale and handle problems in a way human scientists simply cannot, dramatically accelerating the discovery process. Typically, the discovery of a new molecule takes up to 10 years and $10–100 million. During Project Photoresist, however, we synthesized three novel Photoacid Generator candidates by the end of 2020, meeting an environmental challenge in record time. DRL develops the workflow technology that drives the simulations for the Photoresist project, utilizing OpenShift instances on IBM Cloud and on-prem clusters alike.

Media

Blogs

Quantum Computing

The quantum computing research at our lab in Dublin focuses on the intersection of quantum computing and optimization. On the one hand, this entails work on quantum algorithms for challenging classes of mathematical optimization problems, e.g. those arising in supply chain and logistics, where we are exploring the potential advantages of leveraging quantum computing resources. On the other hand, it entails work on optimization approaches for challenging problems arising across the quantum computing stack, in particular approaches for mapping quantum programs as efficiently as possible to circuits that can be executed on quantum hardware.
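As a hedged illustration of the optimization-on-quantum direction, the sketch below builds a tiny quadratic binary problem with Qiskit's optimization module and solves it through a minimum-eigensolver wrapper. The import paths match Qiskit around 2021 and may differ in newer releases, and the problem itself is invented.

```python
from qiskit.algorithms import NumPyMinimumEigensolver   # classical reference solver
from qiskit_optimization import QuadraticProgram
from qiskit_optimization.algorithms import MinimumEigenOptimizer

qp = QuadraticProgram("toy_routing")
qp.binary_var("x")
qp.binary_var("y")
# Reward using either route, penalize using two conflicting routes at once.
qp.maximize(linear={"x": 1, "y": 1}, quadratic={("x", "y"): -2})

# On quantum hardware, QAOA or VQE would take the classical solver's place.
result = MinimumEigenOptimizer(NumPyMinimumEigensolver()).solve(qp)
print(result)
```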


Quantum Algorithms for Optimization

Algorithmic Work

Applications in Routing and Logistics

Applications in Finance

Contributions to Qiskit ML


Optimization for Quantum Compilation

Publication: Best Approximate Quantum Compiling Problems


Irish Quantum Ecosystem / Quantum Computing SW Platform for Ireland

Prior news coverage on the Irish quantum computing consortium

Quantum Consortium
In October 2020, the IBM Quantum team in Dublin started a collaborative project, funded by the Irish government under the Disruptive Technologies Innovation Fund, with six partners in Ireland: University College Dublin, Equal1.Labs, Tyndall National Institute, Rockley Photonics, Maynooth University and Mastercard Labs. The goal of this three-year project is to develop a quantum computing software platform, built on and extending Qiskit, which integrates multiple qubit technologies being developed in Ireland, to explore the potential of quantum technologies for industry applications and to grow the Irish quantum computing ecosystem.


2020 Qiskit Summer Jam - University College Dublin

Pre-Doc Fellowship
In October 2021, IBM Research Europe - Dublin and Trinity College Dublin (TCD) established a new PhD Fellowship program. IBM-TCD Pre-Doc Fellows under this program conduct research at IBM's Dublin research laboratory as part of the IBM Research community, while also being part of the TCD graduate community, jointly supervised by a researcher at IBM and a professor at TCD. The first Pre-Doc Fellowship in quantum computing is in the area of quantum compilation and quantum simulation.

AI Security & Data Privacy

AI Security

Machine learning (ML), and in particular deep learning, has been phenomenally successful in achieving close-to-human or better performance on a large number of Artificial Intelligence (AI) tasks, including image classification and segmentation, speech recognition, and language translation. Of particular interest is the deployment of such models in critical applications such as healthcare, autonomous cars, and security, where far-reaching questions around the trustworthiness, robustness, security, and privacy of AI remain unanswered. Our team focuses on research and the development of the open-source tools required to answer these questions.

Current Projects

Adversarial Robustness Toolbox (ART)

The Adversarial Robustness Toolbox (ART) is an open-source project for machine learning security, started by IBM and recently donated to the Linux Foundation AI (LFAI) as part of its Trustworthy AI tools. ART focuses on the threats of Evasion (changing model behavior through input modifications), Poisoning (controlling a model through training data modifications), Extraction (stealing a model through queries) and Inference (attacking the privacy of the training data). ART aims to support all popular ML frameworks, tasks, and data types, and is under continuous development, led by our team, to support both internal and external researchers and developers in defending AI against adversarial attacks and making AI systems more secure.
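A short example of how ART is typically used for evasion testing, sketched here with a scikit-learn classifier and the Fast Gradient Method; the dataset and parameters are illustrative, and exact APIs may vary across ART versions.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import SklearnClassifier

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

classifier = SklearnClassifier(model=model)      # wrap the model for ART
attack = FastGradientMethod(estimator=classifier, eps=0.5)
X_adv = attack.generate(x=X)                     # craft adversarial inputs

print("clean accuracy:      ", model.score(X, y))
print("adversarial accuracy:", model.score(X_adv, y))
```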

Learn about ART:
Available on GitHub
See the latest enhancements here

Meet the developers of ART:
Slack
GitHub Discussions
Monthly Trusted AI LFAI meetings

DARPA GARD

We are participating in the DARPA GARD (Guaranteeing AI Robustness against Deception) program, which seeks to establish theoretical ML system foundations to identify system vulnerabilities, characterize properties that will enhance system robustness, and encourage the creation of effective defenses. We are developing ART as the central tool for evaluating defenses against deception (evasion and poisoning) as part of the GARD program.

Federated Learning

In a traditional machine learning pipeline, all the relevant data is stored centrally in a single location to be accessed for training a machine learning model. However, this is not always possible: data may be gathered in a decentralised manner by users, and communicating it to a central server can be infeasible due to privacy restrictions and the cost of transmitting large files. Federated learning can offer effective solutions to this problem. In a federated learning scenario, users collaboratively learn a common model while keeping their respective data private. Data privacy can be maintained more easily because the data never leaves the user's device, and the model that is shared is typically much smaller than the dataset. Additionally, from a server's perspective, this distributes the majority of the computation across the participating devices.
At our lab we engage with federated learning specifically from a security and privacy perspective, participating in EU Horizon projects and contributing to internal products.
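The sketch below shows the core federated step in its most stripped-down, FedAvg-style form, assuming a toy linear-regression task: each client computes an update on its private data, and only weights, never raw data, travel to the server for averaging.

```python
import numpy as np

def client_update(weights, X, y, lr=0.1):
    # One local gradient step of linear regression on this client's own data.
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
clients = []
for _ in range(5):                                  # five clients, private datasets
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + rng.normal(scale=0.1, size=50)))

weights = np.zeros(2)
for round_ in range(50):
    local = [client_update(weights, X, y) for X, y in clients]  # computed on-device
    weights = np.mean(local, axis=0)                            # server only averages

print(weights.round(2))   # close to the true weights, data never pooled
```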

IBM Federated Learning
IBMFL is a Python framework developed to enable federated learning in an enterprise environment. It provides a basic fabric for FL on which advanced features can be added. It is not dependent on any specific machine learning framework and supports different learning topologies (e.g., a shared aggregator) and protocols. It is meant to provide a solid basis for federated learning that enables a large variety of federated learning models and topologies, in particular in enterprise and hybrid-cloud settings.

MUSKETEER
MUSKETEER is an EU Horizon project for federated learning with an emphasis on privacy-preserving scenarios. The massive increase in data collected and stored worldwide calls for new ways to preserve privacy while still allowing data sharing among multiple data owners. Today, the lack of trusted and secure environments for data sharing inhibits the data economy, while legality, privacy, trustworthiness, data value and confidentiality hamper the free flow of data. MUSKETEER aims to create a validated, federated, privacy-preserving machine learning platform, tested on industrial data, that is interoperable, scalable and efficient enough to be deployed in real use cases. It aims to alleviate data sharing barriers by providing secure, scalable and privacy-preserving analytics over decentralised datasets using machine learning: data can continue to be stored in different locations with different privacy constraints, yet shared securely. The MUSKETEER cross-domain platform is validated in the industrial scenarios of smart manufacturing and health, with outcomes assessed in an operational setting. A data economy is fostered by creating a rewarding model capable of fairly monetising datasets according to their real data value.

Robustness of Federated Learning
While FL is an elegant framework for learning models across a variety of clients without explicitly sharing data, the vanilla form of FL has significant shortcomings in disruptive scenarios. These include scenarios where some participating clients send corrupted updates owing to accidental malfunction, or deliberately supply malicious updates to undermine the learning process. FL systems are also vulnerable to backdoor attacks, where the compromised model exhibits unexpected behaviour for inputs containing specific triggers, and membership inference attacks, where the attacker tries to determine whether a data point was used to train the learning algorithm.

Our team addresses these challenges and risks by devising methods that analyse and help mitigate the threats against Federated Learning systems. We formally analyse the different threats from the point of view of attack surface, attacker's capabilities and attacker's goals which we leverage to build tools that can help investigate the robustness of Federated Learning applications.
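One common mitigation, sketched below with invented numbers: replacing the server's mean with a coordinate-wise median, which bounds the influence of a minority of corrupted or malicious client updates.

```python
import numpy as np

rng = np.random.default_rng(0)
honest = [np.array([1.0, -2.0]) + rng.normal(scale=0.05, size=2) for _ in range(8)]
malicious = [np.array([100.0, 100.0])] * 2     # poisoned updates from two clients

updates = honest + malicious
print("mean aggregate:  ", np.mean(updates, axis=0).round(2))    # dragged far off
print("median aggregate:", np.median(updates, axis=0).round(2))  # stays near honest
```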

Future of Healthcare

Computational Maths for Digital Health
Our team develops algorithms and applies them to healthcare data. For example, we solve inverse problems based on spatio-temporal fluorescence data, obtained from videos of endoscopies during which a fluorescent tracer is administered, and we build a machine learning pipeline on what we learn from their solution. This allows us even to look beneath the surface of the imaged tissue, and to answer questions about the pathology and extent of growths. We also apply control theory and reinforcement learning to the management of chronic pain: based on patients' behavioral patterns (such as mobility, sleep, and past stimulator settings), new settings for a Spinal Cord Stimulator (SCS) are suggested to reduce future pain. Our exploratory science projects include the development of novel filtering algorithms (which estimate values from noisy or unavailable measurements by leveraging applied functional analysis, probability, optimal control and numerics) and system identification (the estimation of the aforementioned mathematical models from measurements of a system's behavior).
Publication: Artificial intelligence indocyanine green (ICG) perfusion for colorectal cancer intra-operative tissue classification
Publication: Perfusion Quantification from Endoscopic Videos: Learning to Read Tumor Signatures
Publication: Recovering Markov models from closed-loop data
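As an illustrative fragment of the perfusion-quantification idea, the sketch below fits a simple one-exponential uptake model to a synthetic fluorescence time series and extracts the rate as a downstream feature. The model form and numbers are stand-ins for the richer inverse-problem formulation in the publications above.

```python
import numpy as np
from scipy.optimize import curve_fit

def uptake(t, amplitude, rate):
    return amplitude * (1 - np.exp(-rate * t))   # simple tracer-inflow model

t = np.linspace(0, 60, 120)                      # 60 seconds of video frames
signal = uptake(t, 1.8, 0.15) + np.random.normal(scale=0.05, size=t.size)

(amplitude, rate), _ = curve_fit(uptake, t, signal, p0=(1.0, 0.1))
print(f"fitted uptake rate: {rate:.3f} 1/s")     # a feature for the ML pipeline
```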

Protecting and Automating Vital Health and Social Programs Delivery with AI
To deliver health and social care programs at population scale, governments require automation, e.g. to safeguard the integrity of health and social care services, to ensure that those in need receive vital resources, and to check citizens' eligibility to determine what specific benefits they may be entitled to and in what quantities. While a program's policy intent is set out in legislation, what citizens actually experience in their everyday lives is the automated execution of such legislation through software and code.
AI has enormous potential to assist government agencies and policy experts in scaling the production of both human-readable and machine-executable policy rules. As part of broader disruptive initiatives towards digital government, our research aims to design new paradigms that automate aspects of policy rules processing, where humans collaborate with AI to correct and validate extracted actionable rules and ensure that they faithfully represent the original policy intent. The combination of deep learning, NLP, ontologies, knowledge graphs and standards can empower collaboration between stakeholders to model complex, interpretable and executable policy decisions, and to validate policy and its digital expression, which in turn will facilitate the production of better-quality, fairer policies.
Publication: Towards protecting vital healthcare programs by extracting actionable knowledge from policy
Publication: Learning Insurance Benefit Rules from Policy Texts with Small Labeled Data
Publication: Benefit Graph Extraction from Healthcare Policies

AI & Analytics for Coordinated Health and Social Care Outcomes
Our teams are addressing the complex challenges facing health and social care systems globally. By developing new AI & analytics capabilities at the intersection of Health and Human Services we aim to find the levers that drive positive health and well-being outcomes and relieve resource burdens. There are two major areas of work: 

  1. Social Determinants of Health (SDoH): SDoH are the conditions and environments in which people are born, live, learn, work, play, and worship, and they affect a wide range of health and quality-of-life outcomes and risks. Examples include housing, transportation, nutrition and financial security. In recent years, health and social care systems have increasingly turned their attention to SDoH and whole-person care, particularly as examples emerge of the value of nutrition, transportation or socialization support in avoiding preventable hospital readmissions and the associated downstream costs. SDoH information is complex; it may be held across multiple disparate sources and be structured in a variety of ways (e.g. natural language or geographical coordinates). SDoH information also lacks standardization and may include biases, which makes it difficult to analyse and derive insights from. The downstream impacts of SDoH and social programs are currently not well understood across the health continuum. Bringing data sources together, from healthcare, social programs and claims data, can help unlock new knowledge that allows providers to tailor services to clients (e.g. identify those that would benefit the most), improve health outcomes, or reduce costs. Our research has focused first on identifying, extracting and organising SDoH information through terminologies, knowledge graphs, embeddings and AI techniques applied to different sources of data and published evidence; and secondly on developing new machine learning techniques and Bayesian analytics to create the next generation of predictive models in this complex domain.
    Publication: Social Determinant Trends of COVID-19: An Analysis Using Knowledge Graphs from Published Evidence and Online Trends
    Publication: Discovering New Social Determinants of Health Concepts from Unstructured Data: Framework and Evaluation 
    Publication: ProACT - A Digital Platform to Support Self-Management of Multiple Chronic Conditions: Findings in Relation to Engagement during a one-year Proof-of-Concept Trial 
  2. Risk prediction and stratification for Digital Integrated Care Solutions: Traditionally, the approach to digital-health-based integrated care has been fragmented. The World Health Organization describes integrated healthcare systems as designed to manage and deliver health services so that 'clients' receive and perceive a continuum of health promotion, protection and disease prevention services, as well as diagnosis, treatment, long-term care, rehabilitation and palliative care services, through different levels and sites of care within the health system, according to their needs. The team works on developing AI models to advance person-centric risk stratification, condition management and understanding of the impact of digital solutions, using variables describing the clinical/medical and social/behavioural aspects of the individual. The goal is to provide an explainable outcome for risk assessment and decision support to inform key stakeholders of the impact of any implemented digital solutions and interventions on the overall healthcare system.
    Publication: Building a Risk Model for the Patient-centred Care of Multiple Chronic Diseases
    Publication: The Human Behaviour-Change Project: harnessing the power of artificial intelligence and machine learning for evidence synthesis and interpretation - Implementation Science
    Publication: Knowledge Extraction and Prediction from Behavior Science Randomized Controlled Trials: A Case Study in Smoking Cessation

Exploratory Science

Physics-informed AI

In 2021, Gartner added physics-informed AI (PIAI) to its list of emerging technologies that promise to bring disruptive innovation. At IBM Research, we have been working on physics-informed AI for many years. While traditional AI approaches promise to identify the correct solution after being trained on data, PIAI extends this by using physically consistent knowledge to generate scientifically correct representations. This allows the creation of AI models that can extend beyond the data on which they are trained and are more robust to extreme disruptions to business processes (such as the COVID-19 pandemic).
In Dublin, we developed physics-informed AI models to forecast ocean conditions such as wave height and water temperature. These lightweight AI models are combined with traditional physics models that run on supercomputers to generate more realistic forecasts that are reliable across a wider range of scenarios. We extended this paradigm further when developing an air pollution forecasting model for Dublin city. In this study, individual models from different parts of the city were concatenated by using machine learning and sophisticated domain decomposition techniques that enforce boundary conditions at the interfaces of different domains. While this simplifies the training of machine learning models, it also allows an AI model's deployment to be extended beyond the domain(s) on which it was trained by introducing external boundary information.
News article and video: IBM's AI shrinks wave forecasting system to run on a Raspberry Pi
Publication: A machine learning framework to forecast wave conditions
Publication: Statistical and machine learning ensemble modelling to forecast sea surface temperature
Publication: Ensemble model aggregation using a computationally lightweight machine-learning model to forecast ocean waves
Publication: Using deep learning to extend the range of air pollution monitoring and forecasting
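A compact sketch of the physics-informed loss structure, on a deliberately toy problem: the network below is trained against the residual of the ODE du/dx = -u with u(0) = 1, so it recovers exp(-x) without labelled data away from the boundary. The real ocean and air-quality models use far richer physics, but the idea is the same.

```python
import tensorflow as tf

# Small fully connected network u(x) approximating the ODE solution.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="tanh", input_shape=(1,)),
    tf.keras.layers.Dense(1),
])
opt = tf.keras.optimizers.Adam(1e-2)
x = tf.random.uniform((128, 1), 0.0, 2.0)   # collocation points, no labels

for step in range(500):
    with tf.GradientTape() as tape:
        with tf.GradientTape() as inner:
            inner.watch(x)
            u = model(x)
        du_dx = inner.gradient(u, x)
        physics = tf.reduce_mean((du_dx + u) ** 2)                       # residual of du/dx = -u
        boundary = tf.reduce_mean((model(tf.zeros((1, 1))) - 1.0) ** 2)  # u(0) = 1
        loss = physics + boundary
    grads = tape.gradient(loss, model.trainable_variables)
    opt.apply_gradients(zip(grads, model.trainable_variables))

print(float(model(tf.constant([[1.0]]))))   # approaches exp(-1), about 0.37
```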

Data Assimilation

Data Assimilation (DA) is the backbone of modern cyber-physical systems, linking real-world data with mathematical models. It improves the accuracy of forecasts provided by mathematical models and evaluates their reliability by optimally combining a priori knowledge, encoded in equations of mathematical physics, with a posteriori information in the form of sensor data. Mathematically, many DA methods rely upon results from stochastic and deterministic filtering theory and optimal control. For generic nonlinear models, both the optimal deterministic and stochastic filters are infinite-dimensional. Hence, if the state space of the model is of high dimension, both filters become computationally intractable due to the curse of dimensionality.
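To make the filtering mechanic concrete, here is the simplest member of that family, a one-dimensional Kalman filter on an invented scalar system: propagate the model's a priori estimate, then correct it optimally with a noisy observation.

```python
import numpy as np

a, q, r = 0.9, 0.05, 0.2    # model dynamics, model-noise and sensor-noise variances
x_est, p = 0.0, 1.0         # initial state estimate and its variance
truth = 1.0
rng = np.random.default_rng(1)

for t in range(30):
    truth = a * truth + rng.normal(scale=np.sqrt(q))   # the unknown real system
    z = truth + rng.normal(scale=np.sqrt(r))           # noisy sensor observation
    # Predict: propagate the estimate through the mathematical model.
    x_est, p = a * x_est, a * a * p + q
    # Update: blend in the observation, weighted by the Kalman gain.
    k = p / (p + r)
    x_est, p = x_est + k * (z - x_est), (1 - k) * p

print(round(x_est, 3), "vs truth", round(truth, 3))
```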
Our team is working in close collaboration with academic partners on designing nonlinear approximations of the infinite-dimensional minimax filter (smoother, estimator) with the key property that the approximation is not affected by the curse of dimensionality, and that it is suitable for chaotic (e.g. turbulent) dynamical systems. These approximations are then used to design new global and scalable methods for system identification / inversion and for system behaviour prediction using the identified model. These methods are verified in close collaboration with IBM Research teams from Yorktown in the following practical use-cases:

  1. Digital surgery, namely tumour boundary delineation, details in this blog.
  2. Computational fluid dynamics, namely prediction of turbulent flows from incomplete low-resolution observations, details in this blog.
  3. Physically constrained design of deep ANNs with focus on super-resolution and prediction of spatio-temporal data.
  4. Systems biology, namely mathematical modelling of massive embeddings for brain phenotypes.

Partner Organizations

University Partners

  • Aberdeen University
  • Cambridge University
  • Imperial College London
  • Maynooth University
  • MIT
  • Royal College of Surgeons in Ireland (RCSI)
  • University College Dublin (UCD, including Mater University Hospital)
  • University College London (UCL)
  • UIUC
  • Utrecht University
  • Trinity College Dublin (TCD)

Other Partners

  • TYNDALL Research Centre
  • Rockley
  • EQUAL1
  • MasterCard
  • Deciphex

EU-Funded Consortia

Visiting IBM Research Europe in Dublin, Ireland

Address:

IBM Dublin Technology Campus, Building 3, Damastown Industrial Estate, Mulhuddart Dublin 15, Ireland


Directions to IBM Research Europe in Dublin, Ireland:

Take Exit 6 off the M50 (signposted Cavan / Blanchardstown), then take the exit off the N3 for Clonee / Damastown Industrial Estate. Turn right at the top of the slip road and proceed straight for approximately 500 m. Upon entry to the IBM Technology Campus, IBM Research Europe, Ireland, Building 3 will be signposted on the right-hand side of the campus, just past the campus roundabout.


Upon Arrival to Building 3 Main Entrance:

Building 3 is the building with blue trim on the windows. Park and enter by the main entrance. Press the buzzer and ask the security officer to release the door for you. Check in at our reception by advising the security officer whom you are visiting and/or which event you are attending, and they will provide a visitor security badge.


By public transport

Dublin Bus 38B serves the campus from Dublin City Centre. Dublin Bus 38D (which has a limited service) also runs non-stop from O'Connell Street to the IBM campus.
There is also an Express Bus, Route 870; see the schedule.
