An integrated, computer vision-based system was developed to operate a commercial wheelchair-mounted robotic manipulator (WMRM). In this paper, a gesture recognition interface system developed specifically for individuals with upper-level spinal cord injuries (SCIs) was combined with object tracking and face recognition systems to form an efficient, hands-free WMRM controller. In this test system, two Kinect cameras were used synergistically to perform a variety of simple object retrieval tasks. One camera was used to interpret hand gestures as commands to control the WMRM and to locate the operator's face for object positioning. The other sensor was used to automatically recognize different daily living objects for test subjects to select. The gesture recognition interface incorporated hand detection, tracking, and recognition algorithms to achieve a high recognition accuracy of 97.5% for an eight-gesture lexicon. An object recognition module employing the Speeded Up Robust Features (SURF) algorithm was used, and recognition results were sent as commands for "coarse positioning" of the robotic arm near the selected daily living object. Automatic face detection was also provided as a shortcut for the subjects to position objects at the face using the WMRM. Task completion times were measured to compare manual (gestures only) and semi-manual (gestures, automatic face detection, and object recognition) WMRM control modes. The use of automatic face and object detection significantly reduced the completion times for retrieving a variety of daily living objects.
The architecture of the proposed system is illustrated in Figure 1. Two Kinect® video cameras were employed and served as inputs for the gesture recognition and object detection modules, respectively. The results of these two modules were then passed as commands to the execution module to control the JACO robotic arm (Kinova, Inc., Montréal, Canada). Briefly, these modules are described as follows:
A. Gesture Recognition Module
The video input from the Kinect camera was processed in four stages for gesture-based WMRM control: foreground segmentation, hand detection, hand tracking, and hand trajectory recognition. Foreground segmentation increased computational efficiency by reducing the search range for hand detection and the later stages. The face and hands were detected within the foreground, which provided an initialization region for the hand tracking stage. The tracked trajectories were then segmented, compared to pre-constructed motion models, and classified into gesture classes. The recognized gesture was then encoded and passed as a command to control the WMRM.
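The paper does not detail the trajectory classifier. As an illustration only, the final stage (matching a tracked hand trajectory against pre-constructed motion models) can be sketched as arc-length resampling followed by a nearest-template comparison; the function names, the toy gesture models, and the mean point-to-point distance measure below are all assumptions, not the authors' implementation:

```python
import math

def resample(traj, n=16):
    """Resample a 2-D trajectory [(x, y), ...] to n evenly spaced points
    along its arc length, so trajectories of different speeds align."""
    d = [0.0]  # cumulative arc length at each input point
    for (x0, y0), (x1, y1) in zip(traj, traj[1:]):
        d.append(d[-1] + math.hypot(x1 - x0, y1 - y0))
    total = d[-1] or 1.0
    out, j = [], 0
    for i in range(n):
        t = total * i / (n - 1)
        while j < len(d) - 2 and d[j + 1] < t:
            j += 1
        seg = d[j + 1] - d[j] or 1.0
        a = (t - d[j]) / seg  # linear interpolation within segment j
        (x0, y0), (x1, y1) = traj[j], traj[j + 1]
        out.append((x0 + a * (x1 - x0), y0 + a * (y1 - y0)))
    return out

def classify(traj, models):
    """Return the gesture label whose model trajectory is closest to the
    observed one (mean point-to-point distance after resampling)."""
    p = resample(traj)
    def dist(model):
        q = resample(model)
        return sum(math.hypot(px - qx, py - qy)
                   for (px, py), (qx, qy) in zip(p, q)) / len(p)
    return min(models, key=lambda label: dist(models[label]))

# Toy motion models: a horizontal swipe and a vertical swipe.
models = {
    "swipe_right": [(0, 0), (1, 0), (2, 0)],
    "swipe_up":    [(0, 0), (0, 1), (0, 2)],
}
print(classify([(0, 0), (0.9, 0.1), (2.1, 0.0)], models))  # swipe_right
```

In a real system the winning label would then be encoded and passed to the execution module, as described above.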
B. Object Recognition Module
The goal of the object recognition module is to detect the different daily living objects and assign a unique identifier to each of these objects. A template was created for each object being recognized. These templates were compared to each frame in the video sequence to obtain the best matching object. The results were then encoded and passed as commands to position the robotic manipulator.
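The authors match templates with SURF keypoints (available in OpenCV's contrib modules); as a dependency-free stand-in that is *not* their method but shows the same template-per-object, best-match-wins structure, one template can be slid over each frame with normalized cross-correlation. The object names and toy images below are hypothetical:

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation between two equal-sized grayscale patches."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.linalg.norm(p) * np.linalg.norm(t)
    return float((p * t).sum() / denom) if denom else 0.0

def best_match(frame, templates):
    """Slide each template over the frame; return (object_id, score) of the
    best-scoring window, mirroring 'best matching object' selection."""
    best = (None, -1.0)
    for obj_id, tpl in templates.items():
        th, tw = tpl.shape
        for y in range(frame.shape[0] - th + 1):
            for x in range(frame.shape[1] - tw + 1):
                s = ncc(frame[y:y + th, x:x + tw], tpl)
                if s > best[1]:
                    best = (obj_id, s)
    return best

# Toy 8x8 frame containing a diagonal 'cup' pattern at rows 2-4, cols 3-5.
frame = np.zeros((8, 8))
frame[2:5, 3:6] = np.eye(3)
templates = {"cup": np.eye(3), "phone": np.fliplr(np.eye(3))}
obj, score = best_match(frame, templates)
print(obj)  # cup
```

SURF would replace this exhaustive window scan with scale- and rotation-invariant keypoint matching, which is why the authors chose it for cluttered daily-living scenes.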
C. Automatic Face Detection Module
A face detector was employed in this module to perform automatic face detection. The goal was to provide a shortcut for the subjects to position objects in front of the face by controlling the robotic arm.
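The paper does not specify how a detected face is converted into an arm target. One plausible sketch back-projects the face centroid through the Kinect's pinhole model and stops a fixed standoff short of the face; the intrinsic values (nominal Kinect v1) and the 20 cm standoff are assumptions for illustration:

```python
def face_to_target(face_px, depth_m, fx=525.0, fy=525.0,
                   cx=319.5, cy=239.5, standoff_m=0.20):
    """Back-project the face centroid (u, v) at depth `depth_m` into
    camera-frame metres, then pull the target toward the camera by
    `standoff_m` so the gripper stops in front of the face, not at it.
    Intrinsics default to nominal Kinect v1 values (assumption)."""
    u, v = face_px
    x = (u - cx) * depth_m / fx   # pinhole back-projection
    y = (v - cy) * depth_m / fy
    z = depth_m - standoff_m      # stop 20 cm short of the face
    return (x, y, z)

# Face centred in the image, 0.8 m away -> target on the optical axis.
print(face_to_target((319.5, 239.5), 0.8))
```

A real system would additionally transform this camera-frame point into the arm's base frame before commanding a move.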
D. Execution Module
The robotic arm was controlled through a wrapper around the JACO API, written in C# and called by the main program. The JACO robotic arm was mounted to the seat frame of a motorized wheelchair and was controlled by the encoded commands from the gesture recognition, automatic face detection, and object recognition modules.
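The wrapper itself is not shown in the paper. A minimal sketch of the execution module's dispatch (encoded commands from the three recognition modules routed to arm actions) might look like the following; it is written in Python rather than the authors' C#, and the command tuples, standoff values, and the `arm` interface are all hypothetical:

```python
class ExecutionModule:
    """Routes encoded commands from the recognition modules to the arm.
    `arm` is any object exposing move_to(x, y, z) and move_by(dx, dy, dz)."""
    def __init__(self, arm, object_positions, face_standoff=(0.0, 0.0, 0.25)):
        self.arm = arm
        self.object_positions = object_positions  # object_id -> (x, y, z)
        self.face_standoff = face_standoff        # offset in front of the face

    def handle(self, command):
        kind, payload = command
        if kind == "gesture":            # fine, direct control
            dx, dy, dz = payload         # incremental move from a gesture
            self.arm.move_by(dx, dy, dz)
        elif kind == "object":           # coarse positioning near an object
            self.arm.move_to(*self.object_positions[payload])
        elif kind == "face":             # shortcut: bring object to the face
            fx, fy, fz = payload
            sx, sy, sz = self.face_standoff
            self.arm.move_to(fx + sx, fy + sy, fz - sz)
        else:
            raise ValueError(f"unknown command type: {kind!r}")

# Minimal mock arm standing in for the JACO API wrapper.
class MockArm:
    def __init__(self): self.log = []
    def move_to(self, x, y, z): self.log.append(("to", x, y, z))
    def move_by(self, x, y, z): self.log.append(("by", x, y, z))

arm = MockArm()
ex = ExecutionModule(arm, {"cup": (0.4, 0.1, 0.2)})
ex.handle(("object", "cup"))
ex.handle(("face", (0.0, 0.3, 1.0)))
print(arm.log)
```

Keeping the arm behind a narrow interface like this is one way the gesture, face, and object modules could stay decoupled from the vendor API.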
Fig. 1. System Architecture