What are we offering?
Visual operation guidance using augmented reality, in two modes of operation: Peer Guidance and Self Guidance. In Peer Guidance mode, a remotely located expert instructs the field technician. In Self Guidance mode, instructions are superimposed in 3D based on recognition and tracking of the object under maintenance. Content for Self Guidance mode can be created automatically from Peer Guidance sessions.
How is this research project going to be offered to clients?
Any technician field app can be upgraded with a new "Launch Peer Guidance" button that launches the AR app in Peer Guidance mode. The app displays the technician's name, the object under maintenance, and a list of relevant experts. At the end of the guidance session, a visual summary is created and attached to the work order.
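As a rough illustration of how a host field app could hand off to the AR app, the sketch below uses a custom URL scheme deep link. The "arguidance" scheme, the host name, and the query parameter names are assumptions made for this sketch only; the product's actual launch interface may differ.

```swift
import UIKit

/// Builds a hypothetical deep link into the AR app's Peer Guidance mode.
/// The "arguidance" scheme and parameter names are illustrative only.
func peerGuidanceURL(technician: String, assetID: String) -> URL? {
    var components = URLComponents()
    components.scheme = "arguidance"
    components.host = "peer-guidance"
    components.queryItems = [
        URLQueryItem(name: "technician", value: technician),
        URLQueryItem(name: "asset", value: assetID),
    ]
    return components.url
}

final class WorkOrderViewController: UIViewController {
    // Target of the host app's "Launch Peer Guidance" button.
    @objc func launchPeerGuidanceTapped() {
        guard let url = peerGuidanceURL(technician: "J. Doe", assetID: "pump-42"),
              UIApplication.shared.canOpenURL(url) else { return }
        UIApplication.shared.open(url, options: [:], completionHandler: nil)
    }
}
```

A URL-scheme handoff keeps the host app decoupled from the AR app; the same pattern would let any technician field app add the button with a few lines of code.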
What operating system/devices will it support?
iOS 11.0 and above, on iPhone 6s and later devices.
Does it support voice to text?
Yes, both the technician and the expert can use voice to annotate elements and send instructions.
What are the main use cases?
- Train & Instruct - An enhanced, interactive training experience for field technicians, capturing and replaying an expert's knowledge
- Fix & Guide - Interactive, step-by-step scenes that guide the field technician through a repair by augmenting each step
- Improved Interaction - Real-time IoT data visualized and augmented on top of the device, enhancing the field technician's interaction with the object and its environment
Can I generate Self Guidance content without having to have a Peer Guidance session?
Yes. The 3D Modeling and Content Authoring tool is a web application running on IBM Cloud. It can be used to create Self Guidance content manually, as an alternative to automatic creation from Peer Guidance sessions.