PAL director Dr Fridolin Wild gave a keynote presentation at the European Space Agency’s ESTEC premises in Noordwijk near Amsterdam, at a workshop on ‘AR/VR for European Space Programmes’ on December 2. The workshop provided a platform for experts working in the AR/VR technology area. Dr Wild presented on AR training for space maintenance, operations, and repairs, speaking about the much-needed upgrade of training technology for space, reporting results from the ESA AR technology roadmap for product assurance and safety (ARPASS) and the WEKIT project, and giving an outlook on our new ARETE project. Here are some of his notes from the workshop’s expert exchange, summarising the input of all the experts.
The space sector motivates and can help build the next generation of immersive tech: Advanced Human Machine Interaction, bringing together Artificial Intelligence (AI) and Machine Learning (ML), Augmented Reality (AR) and Virtual Reality (VR), and User Experience design and development (UX) in an explosive mix to: a) support crew training and live guidance, effectively making astronauts more autonomous and less dependent on ground control; b) plan and facilitate planetary exploration missions; c) support the construction, running, and maintenance of sustainable scientific labs in space.
Advanced Human Machine Interaction for space thereby requires inventing and maturing five new technologies. Firstly, outside-in (e.g., CubeSat) and inside-out (wearable) sensors with space-enabled Internet of Things capabilities provide real-time monitoring of, and feedback on, operations performance. Low-hanging fruit here includes using computer vision for deviation analysis in assembly, or integrating AR/VR e-textile interaction capabilities into the suits for IVAs/EVAs (e.g. forearm gestures, glove control, tracking with collider-based triggers), possibly extended by IoT communication with instruments, tools, or vehicles. Secondly, AR-executable activity descriptions of standard operations enable the creation of repositories covering all eventualities required for the next moon landing (or ultimately the moon village) or other, later deep-space missions such as to Mars. Long-term sustainability through accepted standards will thereby be key. Thirdly, and connected to the previous point, are means for semi-automated content creation using sensor recordings/IoT, AI, and semantics. Tools for user-generated AR experiences turn everything into digital assets. Fourthly, XR-enabled human-robotic interfaces will enhance the capabilities of human spaceflight (e.g. a human operator and a robotic arm collaborating in the same operation), as well as of remote-operated, unmanned missions, such as using tele-robotics in installation or maintenance. Last but not least, collaborative immersive experiences will connect person with person, vision with haptics, and possibly VR on earth with AR in space.
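To make the ‘tracking with collider-based triggers’ idea concrete, here is a minimal sketch of how such a trigger works: an invisible volume is placed around a point of interest, and an action fires when a tracked position (e.g. the astronaut’s glove) enters it. The `SphereCollider` and `fire_triggers` names, and the valve-handle scenario, are hypothetical illustrations, not code from any of the projects mentioned.

```python
import math
from dataclasses import dataclass

@dataclass
class SphereCollider:
    """Spherical trigger volume around a point of interest (metres)."""
    cx: float
    cy: float
    cz: float
    radius: float

    def contains(self, x: float, y: float, z: float) -> bool:
        # Inside the trigger volume when within `radius` of the centre
        return math.dist((self.cx, self.cy, self.cz), (x, y, z)) <= self.radius

def fire_triggers(colliders: dict, hand_pos: tuple) -> list:
    """Return the labels of all trigger volumes the tracked hand is inside."""
    return [label for label, c in colliders.items() if c.contains(*hand_pos)]

# Hypothetical setup: a 10 cm trigger volume around a valve handle
colliders = {"valve": SphereCollider(0.0, 1.2, 0.5, 0.10)}
print(fire_triggers(colliders, (0.02, 1.25, 0.48)))  # hand near the valve -> ['valve']
```

In a real AR engine the collider check runs every frame against the headset’s hand-tracking data; the principle, though, is just this point-in-volume test.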
At the same time, the following limitations apply, some of which we can work towards actually overcoming (or at least remediating). The lack of digital twins, i.e. data not being available in digital formats, front-loads digitisation costs at the moment, severely impeding the ability to churn out content. Moreover, interoperability is not (yet) a given, and legacy content poses compatibility issues with current-generation devices. The content creation process needs to be redesigned, setting more focus on (sensor-informed) capture and using manual authoring as a post-processing step rather than as the starting point. Presentation to the user needs to be more hands-free. Delivery and interaction devices currently still suffer from limitations. For example, the field of view of current AR/VR smart glasses still limits human perception, processing power of wearable and mobile solutions is rather restricted, and some hardware is not space qualified. Interaction technology that works on earth in normal circumstances is not necessarily appropriate for gloves, for gestures in confined spaces, or where voice is already used elsewhere. Better space-suitable, usable UX is needed. Metric measurement accuracy on AR/VR glasses, for example, poses a challenge. High-performance computing is too restricted both on the ground and in space, and this becomes an issue particularly where transmission times for exchange (e.g. Earth to Mars) are no longer real-time or near-real-time, but solidly delayed.
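The scale of the Earth–Mars delay is easy to quantify: even at the speed of light, a one-way signal takes minutes, so any interaction loop through ground control is ruled out. A back-of-the-envelope calculation (using approximate minimum and maximum Earth–Mars separations) illustrates this:

```python
C = 299_792_458        # speed of light, m/s
MARS_MIN_M = 54.6e9    # Earth-Mars distance at closest approach, metres (approx.)
MARS_MAX_M = 401e9     # Earth-Mars distance at maximum separation, metres (approx.)

def one_way_delay_s(distance_m: float) -> float:
    """One-way light-travel time over the given distance, in seconds."""
    return distance_m / C

print(f"{one_way_delay_s(MARS_MIN_M) / 60:.1f} min")  # ~3 minutes at closest approach
print(f"{one_way_delay_s(MARS_MAX_M) / 60:.1f} min")  # ~22 minutes at maximum separation
```

A round trip therefore ranges from roughly 6 to 44 minutes, which is why on-board guidance (rather than live support from ground control) becomes essential for deep-space crews.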
AR/VR for space has the strength to shorten mission planning time, to yield superior performance through augmentation, and to provide context-aware information representation, with a hedonic quality that could keep a deep-space crew safe and sane, and with excellent visualisation capability that enables cheap but realistic simulations. AR/VR has reached a level of maturity that enables practical demos and real applications, regardless of the prevalent limitations.
The space sector has the opportunity, through AR/VR, for enormous cost savings, improved learning, better training applications, and help in developing better hardware. Through the largely publicly-funded nature of its procurement, space has a strategic advantage over other industries: it can implement change top-down quickly through incentives in grant proposal calls and calls for tender, potentially impacting manufacturing at large.
Weaknesses are the need to model an extra 3D environment (VR), hardware durability, not enough projects at higher TRLs, the small ROI for space, hardware not being fit for space use, lacking methodology, and lacking support of use cases. Threats are posed with regard to cyber security, the sector’s resistance to change, the lack of proof of concepts for particular use cases, and the need for better visualisation. Time from development to use can be decades, and that requires product life-cycle thinking sometimes rather uncommon in other industry sectors. Most importantly, the lack of standardisation is the biggest expressed threat: ‘standards’ were named by the biggest group as the most important area for ESA to invest in to leverage the use of AR/VR for space missions (supported by 64% of all participants).
Splash photo: ESA.