Creating tactile 3D displays with machine learning
“The idea of a fully-programmable tactile surface has been proposed as far back as the 1960s,” said Jue Wang, a Ph.D. student under Alex Chortos, assistant professor of mechanical engineering. “The technology is now getting good enough to make it a reality, but there are still many problems to solve.”
Consider Braille terminals, which electromechanically raise or lower dots to allow blind users to read words. The dots have a discrete state (either on or off), and the raising or lowering of one specific dot doesn’t affect any of the other dots. This is the equivalent of a 2D black-and-white dot-matrix printer. What if you wanted to represent gentle gradients in a 3D tactile space, the way you currently can in modern 2D displays and printers? You would need either a lot more pixels (which creates much more complex actuation), or a surface that is continuously variable.
That’s the problem being tackled by Wang and Chortos. They have just published research in IEEE Robotics and Automation Letters, detailing a new control strategy for continuous programmable surfaces that utilizes machine learning.
“Imagine one of those Braille dots comes up underneath a flexible surface,” said Chortos. “You would get gentle gradients on the surface, but you’d also introduce a complex combination of forces that make it difficult to precisely control the actuators underneath. Every dot’s movement would affect every other dot.”
The traditional way to calculate these forces is through finite element analysis, which uses computer simulation to create a 3D model. But this is time-consuming, and every pixel you add makes the computing time exponentially longer – too long to be practical for a real-time device. Chortos and Wang found a better way.
They used machine learning to train a computer to predict the force values. Using random actuation patterns on a 9x9 grid, they fed the results of their finite element analysis into a machine learning regression model, then used the trained model to predict the surface deformation for other shapes. In just milliseconds, the machine learning model matched the accuracy of finite element analysis to within 1 percent, while running roughly 15,000 times faster.
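In broad strokes, this forward step works like any supervised regression problem: the finite element simulations supply input-output pairs, and a model learns to map actuation signals to the surface shape they produce. The sketch below illustrates the idea with a generic scikit-learn regressor and stand-in arrays; the article does not specify the model type, data format, or library the researchers used, so those details are assumptions for illustration only.

    # Sketch of the forward-control idea: learn a fast surrogate for FEA.
    # Model choice, data shapes, and values here are assumptions, not the
    # paper's actual pipeline.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    N_PIXELS = 81          # 9x9 actuation grid, as in the paper
    N_SURFACE_POINTS = 81  # assumed: deformation sampled at one point per pixel

    # Placeholder for the FEA dataset: random actuation patterns (inputs)
    # and the simulated surface heights they produce (targets).
    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, size=(5000, N_PIXELS))  # normalized actuation signals
    y = rng.normal(size=(5000, N_SURFACE_POINTS))      # stand-in for FEA deformations

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    # A small multilayer perceptron acting as the surrogate forward model.
    forward_model = MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=500)
    forward_model.fit(X_train, y_train)

    # Once trained, a prediction is a single forward pass: milliseconds,
    # rather than a full finite element solve.
    predicted_surface = forward_model.predict(X_test[:1]).reshape(9, 9)
    print(predicted_surface.shape)  # (9, 9) height map

The point of the surrogate is speed: once training is done, evaluating the model is cheap enough to run inside a real-time control loop.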
More importantly, the approach also works in reverse: when a specific target shape was supplied (in this case, the letters in the word PURDUE), the model predicted the actuator inputs needed to reproduce that shape on the surface. “This is the practical use-case,” said Chortos. “We want a device that will reproduce a specific pattern provided by a computer, and this control system is fast enough and smart enough to know exactly how the actuators will affect each other, and thus how to accurately reproduce the shape in 3D.”
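The inverse step can be sketched the same way, by training a second model in the opposite direction, from a desired height map to the actuation pattern that should produce it. The letter stencil, the model type, and the reuse of X_train, y_train, and forward_model from the sketch above are illustrative assumptions, not the paper's actual inverse-control implementation.

    # Sketch of inverse control, continuing from the forward-model example:
    # given a target 9x9 height map (e.g., a letter), estimate the actuation
    # pattern that produces it.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Train the mapping in the opposite direction, surface -> actuation,
    # reusing the FEA-derived arrays X_train (actuation) and y_train
    # (deformation) from the previous sketch.
    inverse_model = MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=500)
    inverse_model.fit(y_train, X_train)

    # A hypothetical target: a crude letter stencil raised to unit height.
    target = np.zeros((9, 9))
    target[1:8, 2] = 1.0   # vertical stroke
    target[1, 2:6] = 1.0   # top bar
    target[4, 2:6] = 1.0   # middle bar of a rough "P"
    target[2:4, 5] = 1.0   # right side of the loop

    # Predict actuator commands for the target, then sanity-check them by
    # pushing the commands back through the forward surrogate.
    commands = inverse_model.predict(target.reshape(1, -1))
    reproduced = forward_model.predict(commands).reshape(9, 9)
    print(np.abs(reproduced - target).max())  # reconstruction error on stand-in data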
Wang undertook this project remotely out of necessity, when the 2020 lockdown prevented him from travelling to the United States. Now that he’s here on Purdue’s campus, his next step is physically building a 9x9 device to experimentally verify their findings. “Doing robotics work remotely is never ideal,” said Wang. “But it actually gave us a chance to really focus on this control system. Now we’ve established a strong foundation that will help us build the physical prototype.”
Writer: Jared Pike, jaredpike@purdue.edu, 765-496-0374
Source: Alex Chortos, achortos@purdue.edu
Design of Fully Controllable and Continuous Programmable Surface Based on Machine Learning
Jue Wang, Jiaqi Suo, and Alex Chortos
https://doi.org/10.1109/LRA.2021.3129542
ABSTRACT: Programmable surfaces (PSs) consist of a 2D array of actuators that can deform in the third dimension, providing the ability to create continuous 3D profiles. Discrete PSs can be realized using an array of independent solid linear actuators. Continuous PSs consist of actuators that are mechanically coupled, providing deformation states that are more similar to real surfaces while reducing the complexity of the control electronics. However, continuous PSs have been limited in size by the lack of control systems that account for the complex internal coupling between actuators in the array. In this work, we computationally explore the deformation of a fully continuous PS with 81 independent actuation pixels based on ionic bending actuators. We establish a control strategy using machine learning (ML) regression models. Both forward and inverse control are achieved using training datasets derived from finite element analysis (FEA) of our PS. Forward control predicts the surface deformation with error under 1% and is 15,000 times faster than FEA. Real-time inverse control of continuous PSs, which reproduces arbitrary pre-defined surfaces and has high practical value for tactile displays and human-machine interactive devices, is proposed for the first time in this letter.