<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Intelligent Systems and Assistive Technologies Lab &#187; Research</title>
	<atom:link href="https://Engineering.Purdue.Edu/isat/category/research/feed/" rel="self" type="application/rss+xml" />
	<link>https://Engineering.Purdue.Edu/isat</link>
	<description>Bridging the gap between humans and robots</description>
	<lastBuildDate>Fri, 07 Feb 2020 17:16:19 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=4.1.42</generator>
	<item>
		<title>Telerobotic Surgery with Free Hand Gestures</title>
		<link>https://Engineering.Purdue.Edu/isat/telerobotic-surgery-with-free-hand-gestures/</link>
		<comments>https://Engineering.Purdue.Edu/isat/telerobotic-surgery-with-free-hand-gestures/#comments</comments>
		<pubDate>Thu, 26 Apr 2018 19:53:54 +0000</pubDate>
		<dc:creator><![CDATA[ISAT]]></dc:creator>
				<category><![CDATA[Research]]></category>

		<guid isPermaLink="false">https://Engineering.Purdue.Edu/isat/?p=291</guid>
		<description><![CDATA[Description:  Current teleoperated robot-assisted surgery requires surgeons to manipulate joystick-like controllers in a master console, and robotic arms will mimic those motions on the patient&#8217;s side. It is becoming more popular compared to traditional minimally invasive surgery due to its dexterity, precision and accurate motion planning capabilities. However, one major…<p class="continue-reading-button"> <a class="continue-reading-link" href="https://Engineering.Purdue.Edu/isat/telerobotic-surgery-with-free-hand-gestures/">Continue reading<i class="crycon-right-dir"></i></a></p>]]></description>
				<content:encoded><![CDATA[<p style="text-align: justify;"><span style="color: #000000;"><strong><span style="font-size: x-large;">Description: </span></strong><br />
Current teleoperated robot-assisted surgery requires surgeons to manipulate joystick-like controllers in a master console, and robotic arms mimic those motions on the patient&#8217;s side. It is becoming more popular than traditional minimally invasive surgery due to its dexterity, precision, and accurate motion planning capabilities. However, one major drawback of such systems is the user experience, since the surgeon must retrain extensively to learn how to operate cumbersome interfaces.</span></p>
<p style="text-align: justify;"><span style="color: #000000;">To address this problem, we have developed an innovative system to involve touchless interfaces for telesurgery. This type of solution, when applied to robotic surgery, has the potential to allow surgeons to operate as if they were physically engaged with the surgery in-situ (as standard in traditional surgery). By relying on touchless interfaces, the system can incorporate more natural gestures that are similar to instinctive movements performed by surgeons when operating, thus enhancing the user experience and overall system performance. Sensory substitution methods are used as well to deliver force feedback to the user during teleoperation.</span></p>
<p><span style="color: #000000;"><strong><span style="font-size: x-large;">Publications:</span></strong></span><br />
<span style="color: #000000;">Zhou, Tian, Cabrera, Maria Eugenia, Low, Thomas, Sundaram, Chandru &amp; Wachs, Juan (2016). </span><a href="http://humanrobotinteraction.org/journal/index.php/HRI/article/view/269" target="_blank">A Comparative Study for Telerobotic Surgery Using Free Hand Gestures</a><span style="color: #000000;">. <em>Journal of Human-Robot Interaction, 5</em>, 1-28.</span></p>
<p><span style="color: #000000;">Zhou, Tian<strong>.</strong>, Cabrera, Maria., &amp; Wachs, Juan. (2016).</span> <a href="http://link.springer.com/chapter/10.1007%2F978-3-319-12943-3_17" target="_blank">A Comparative Study for Touchless Telerobotic Surgery</a>. <span style="color: #000000;">In <em>Computer-Assisted Musculoskeletal Surgery</em> (pp. 235-255). Springer International Publishing.</span></p>
<p><span style="color: #000000;">Zhou, Tian., Cabrera, Maria., &amp; Wachs, Juan (2015, January). </span><a href="https://pdfs.semanticscholar.org/61e4/4410b0ddd5a72001b6ed34f941ae963ab73a.pdf" target="_blank">Touchless telerobotic surgery-is it possible at all?</a><span style="color: #000000;">. In <em>AAAI</em> (pp. 4228-4230).</span></p>
<p><span style="color: #000000;">Zhou, Tian., Cabrera, Maria., &amp; Wachs, Juan, (2015) <a href="https://www.dropbox.com/s/jgqmg385es740xi/Book%20of%20abstracts%20online.pdf?dl=0">Communication Modalities for Supervised Teleoperation in Highly Dexterous Tasks &#8211; Does one size fit all?</a>.<span style="color: #000000;"> In <em>2nd Workshop on the role of Human Sensorimotor Control in Surgical Robotics, in Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on.</em> IEEE<em>.</em></span></span></p>
<p><span style="color: #000000;">Zhou, Tian., Cabrera, Maria., &amp; Wachs, Juan, (2014)</span> <a href="http://inrol.snu.ac.kr/Telerobotics-CS.pdf">Touchless telerobotic surgery &#8211; A comparative study</a>.<span style="color: #000000;"> In <em>3rd Workshop on Telerobotics for Real-Life Applications, Opportunities, Challenges and New Developments, in </em><em>Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on.</em> IEEE<em>.</em></span></p>
<p><span style="color: #000000;"><strong><span style="font-size: x-large;">Videos:</span></strong></span></p>
<h2 class="wsite-content-title"><span style="color: #000000; font-size: large;">Incision task with Omega sensor</span></h2>
<p><iframe src="//www.youtube.com/embed/nu2WO7mZEgw" width="425" height="350"></iframe></p>
<h2 class="wsite-content-title"><span style="color: #000000; font-size: large;">Peg transfer with Leap Motion</span></h2>
<p><iframe src="//www.youtube.com/embed/NZseYREHI0U" width="425" height="350"></iframe></p>
<h2 class="wsite-content-title"><span style="color: #000000; font-size: large;">Threading task with Leap Motion</span></h2>
<p><iframe src="//www.youtube.com/embed/acDRttXBSEk" width="425" height="350"></iframe></p>
]]></content:encoded>
			<wfw:commentRss>https://Engineering.Purdue.Edu/isat/telerobotic-surgery-with-free-hand-gestures/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Surgical Instrument Recognition via Vision and Manipulation</title>
		<link>https://Engineering.Purdue.Edu/isat/surgical-instrument-recognition-via-vision-and-manipulation/</link>
		<comments>https://Engineering.Purdue.Edu/isat/surgical-instrument-recognition-via-vision-and-manipulation/#comments</comments>
		<pubDate>Thu, 26 Apr 2018 19:49:00 +0000</pubDate>
		<dc:creator><![CDATA[ISAT]]></dc:creator>
				<category><![CDATA[Research]]></category>

		<guid isPermaLink="false">https://Engineering.Purdue.Edu/isat/?p=285</guid>
		<description><![CDATA[Description:  US hospitals are facing great shortage of registered nurses, which could lead to an increment in mortality rate. One solution to this challenge is to bring Robotic Scrub Nurse (RSN) into the Operating Room (OR) to free nurses from mundane and repetitive tasks such as instrument delivery and retrieval.…<p class="continue-reading-button"> <a class="continue-reading-link" href="https://Engineering.Purdue.Edu/isat/surgical-instrument-recognition-via-vision-and-manipulation/">Continue reading<i class="crycon-right-dir"></i></a></p>]]></description>
				<content:encoded><![CDATA[<p><span style="color: #000000;"><strong><span style="font-size: x-large;">Description: </span></strong></span></p>
<p style="text-align: justify;"><span style="font-size: medium; color: #000000;">US hospitals are facing great shortage of registered nurses, which could lead to an increment in mortality rate. One solution to this challenge is to bring Robotic Scrub Nurse (RSN) into the Operating Room (OR) to free nurses from mundane and repetitive tasks such as instrument delivery and retrieval.</span></p>
<p style="text-align: justify;"><span style="color: #000000;">As an important building block for RSN, this paper presents an accurate and robust surgical instrument recognition algorithm. Surgical instruments are often cluttered, occluded and display specular light, which causes a challenge for conventional recognition algorithms. A learning-through-interaction paradigm was proposed to tackle the challenge, which combines computer vision with robot manipulation and achieves active recognition. The unknown instrument is firstly segmented out as blobs and its poses estimated, then the RSN system picks it up and presents it to an optical sensor in a determined pose. </span>Lastly<span style="color: #000000;"> the unknown instrument is recognized with high confidence.</span></p>
<p style="text-align: justify;"><span style="color: #000000;">Experiments were then conducted to evaluate the performance of the proposed segmentation and recognition algorithms, respectively. It is found out that the proposed patch-based segmentation algorithm and attention-based recognition algorithm greatly outperform their benchmark comparisons, proving the applicability and effectiveness of </span>a RSN<span style="color: #000000;"> to perform accurate and robust surgical instrument recognition tasks.</span></p>
<p><span style="color: #000000;"><strong><span style="font-size: x-large;">Publications:</span></strong></span><br />
<span style="color: #000000;">Zhou, Tian., &amp; Wachs, Juan. </span><a href="https://drive.google.com/file/d/0B1tBbdlHcKS-RWhWX2gzNkQ0Q0k/view?usp=sharing" target="_blank">Finding a Needle in a Haystack: Recognizing Surgical Instruments through Vision and Manipulation</a><span style="color: #2a2a2a;">. <span style="color: #000000;">In </span></span><span style="color: #000000;"><em>SPIE/IS&amp;T Electronic Imaging, </em>no. 9, pp. 37–45, IS&amp;T,</span><span style="color: #2a2a2a;"><span style="color: #000000;"> 2017.</span> </span><em><span style="color: #ff0000;">Best Student Paper<br />
</span></em><br />
<span style="color: #000000;">Zhou, Tian, and Juan P. Wachs.</span> &#8220;<a href="https://www.sciencedirect.com/science/article/pii/S0921889016305310">Needle in a haystack: Interactive surgical instrument recognition through perception and manipulation</a>.&#8221; <span style="color: #000000;"><i>Robotics and Autonomous Systems</i> 97 (2017): 182-192.</span></p>
<p>&nbsp;</p>
<p><span style="color: #000000;"><strong><span style="font-size: x-large;">Videos:</span></strong></span></p>
<p><iframe src="//www.youtube.com/embed/FDCAuYoUP_s" width="555" height="456"></iframe></p>
]]></content:encoded>
			<wfw:commentRss>https://Engineering.Purdue.Edu/isat/surgical-instrument-recognition-via-vision-and-manipulation/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Collaborative Robots in Surgical Research: a Low-Cost Adaptation</title>
		<link>https://Engineering.Purdue.Edu/isat/collaborative-robots-in-surgical-research-a-low-cost-adaptation/</link>
		<comments>https://Engineering.Purdue.Edu/isat/collaborative-robots-in-surgical-research-a-low-cost-adaptation/#comments</comments>
		<pubDate>Tue, 06 Mar 2018 22:07:47 +0000</pubDate>
		<dc:creator><![CDATA[ISAT]]></dc:creator>
				<category><![CDATA[Research]]></category>

		<guid isPermaLink="false">https://Engineering.Purdue.Edu/isat/?p=390</guid>
		<description><![CDATA[This work demonstrates the adaptation of an industrial robotic system to an affordable and accessible open platform for education and research through rapid prototyping techniques. The ABB YuMi collaborative robot is adapted using a low-cost 3D printed gripper extension for surgical tools. The robot is controlled using an intuitive virtual…<p class="continue-reading-button"> <a class="continue-reading-link" href="https://Engineering.Purdue.Edu/isat/collaborative-robots-in-surgical-research-a-low-cost-adaptation/">Continue reading<i class="crycon-right-dir"></i></a></p>]]></description>
				<content:encoded><![CDATA[<p style="text-align: justify;"><span style="color: #000000;">This work demonstrates the adaptation of an industrial robotic system to an affordable and accessible open platform for education and research through rapid prototyping techniques. The ABB YuMi collaborative robot is adapted using a low-cost 3D printed gripper extension for surgical tools. The robot is controlled using an intuitive virtual reality teleoperation system using the HTC VIVE controllers.</span></p>
<p style="text-align: justify;"><span style="color: #000000;">The design and assessment of three surgical tools in two mock surgical procedures are showcased in this work. The surgical tasks involved tissue removal with the designed cutting tools, where their effectiveness and completion time are assessed. We conclude from these results, that the perpendicular scalpel tool is preferred for faster completion time, but the scissors are preferred for small tissue removal in terms of effectiveness.</span></p>
<p style="text-align: justify;"><span style="color: #000000;"><a href="https://Engineering.Purdue.Edu/isat/wp-content/uploads/2018/05/picture2-2.png"><img class="aligncenter size-medium wp-image-391" src="https://Engineering.Purdue.Edu/isat/wp-content/uploads/2018/05/picture2-2-300x149.png" alt="picture2-2" width="300" height="149" /></a></span></p>
<p><strong><span style="color: #000000;">Publication:</span></strong></p>
<p style="text-align: justify;"><span style="color: #000000;">Sanchez-Tamayo, N., &amp; Wachs, J. P. (2018, March). </span><a title="Collaborative Robots in Surgical Research: a Low-Cost Adaptation" href="https://dl.acm.org/citation.cfm?id=3176978" target="_blank">Collaborative Robots in Surgical Research: a Low-Cost Adaptation</a>.<span style="color: #000000;"> In Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction (pp. 231-232). ACM.</span></p>
]]></content:encoded>
			<wfw:commentRss>https://Engineering.Purdue.Edu/isat/collaborative-robots-in-surgical-research-a-low-cost-adaptation/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Learning Gestures for the First Time: One-Shot Gesture Recognition</title>
		<link>https://Engineering.Purdue.Edu/isat/learning-gestures-for-the-first-time-one-shot-gesture-recognition/</link>
		<comments>https://Engineering.Purdue.Edu/isat/learning-gestures-for-the-first-time-one-shot-gesture-recognition/#comments</comments>
		<pubDate>Mon, 26 Feb 2018 23:19:55 +0000</pubDate>
		<dc:creator><![CDATA[ISAT]]></dc:creator>
				<category><![CDATA[Research]]></category>

		<guid isPermaLink="false">https://Engineering.Purdue.Edu/isat/?p=315</guid>
		<description><![CDATA[Humans are able to understand meaning intuitively and generalize from a single observation, as opposed to machines which require several examples to learn and recognize a new physical expression. This trait is one of the main roadblocks in natural human-machine interaction.  Particularly, in the area of gestures which are an…<p class="continue-reading-button"> <a class="continue-reading-link" href="https://Engineering.Purdue.Edu/isat/learning-gestures-for-the-first-time-one-shot-gesture-recognition/">Continue reading<i class="crycon-right-dir"></i></a></p>]]></description>
				<content:encoded><![CDATA[<p class="font_9"><span class="color_11">Humans are able to understand meaning intuitively and generalize from a single observation, as opposed to machines which require several examples to learn and recognize a new physical expression. This trait is one of the main roadblocks in natural human-machine interaction.  Particularly, in the area of gestures which are an intrinsic part of human communication. In the aim of natural interaction with machines, a framework must be developed to include the adaptability humans portray to understand gestures from a single observation.</span></p>
<p class="font_9"><span class="color_11">This framework includes the human processes associated with gesture perception and production. From the single gesture example, key points in the hands&#8217; trajectories are extracted which have found to be correlated to spikes in visual and motor cortex activation. Those are also used to find inverse kinematic solutions to the human arm model, thus including the biomechanical and kinematic aspects of human production to artificially enlarge the number of gesture examples.</span></p>
<p class="font_9"><span class="color_11">Leveraging these artificial examples, traditional state-of-the-art classification algorithms can be trained and used to recognize future instances of the same gesture class.</span></p>
<p class="font_9"><a href="https://Engineering.Purdue.Edu/isat/wp-content/uploads/2018/04/overview3.png"><img class="aligncenter wp-image-316 size-large" src="https://Engineering.Purdue.Edu/isat/wp-content/uploads/2018/04/overview3-1024x577.png" alt="overview3" width="900" height="507" /></a></p>
]]></content:encoded>
			<wfw:commentRss>https://Engineering.Purdue.Edu/isat/learning-gestures-for-the-first-time-one-shot-gesture-recognition/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Paper accepted at ICMI 2017</title>
		<link>https://Engineering.Purdue.Edu/isat/paper-accepted-at-icmi-2017/</link>
		<comments>https://Engineering.Purdue.Edu/isat/paper-accepted-at-icmi-2017/#comments</comments>
		<pubDate>Fri, 17 Nov 2017 21:21:55 +0000</pubDate>
		<dc:creator><![CDATA[ISAT]]></dc:creator>
				<category><![CDATA[Conferences]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Research]]></category>

		<guid isPermaLink="false">https://Engineering.Purdue.Edu/isat/?p=421</guid>
		<description><![CDATA[A paper related to zero shot learning for gesture recognition was accepted for poster presentation at 19th ACM International Conference on Multimodal Interaction held at Glasgow, Scotland, UK. ISAT&#8217;s member, Mr. Naveen Madapana presented the poster at the conference. Brief Description: Humans tend to create the gestures on the fly and…<p class="continue-reading-button"> <a class="continue-reading-link" href="https://Engineering.Purdue.Edu/isat/paper-accepted-at-icmi-2017/">Continue reading<i class="crycon-right-dir"></i></a></p>]]></description>
				<content:encoded><![CDATA[<p>A <a href="https://dl.acm.org/citation.cfm?id=3136774">paper</a> related to zero-shot learning for gesture recognition was accepted for poster presentation at the 19th ACM International Conference on Multimodal Interaction, held in Glasgow, Scotland, UK. ISAT member Mr. Naveen Madapana presented the poster at the conference.</p>
<p>Brief Description: Humans tend to create gestures on the fly, and conventional machine learning systems lack the adaptability to learn new gestures beyond the training stage. This problem can be best addressed using Zero Shot Learning (ZSL), a machine learning paradigm that aims to recognize unseen objects from just a description of them. ZSL for gestures has hardly been addressed in computer vision research due to the inherent ambiguity and contextual dependency associated with gestures. This work proposes an approach for Zero Shot Gestural Learning (ZSGL) by leveraging the semantic information that is embedded in gestures.</p>
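<p>As a toy illustration of the zero-shot idea (not the ZSGL model itself), the sketch below labels a gesture by comparing predicted semantic attributes against per-class attribute descriptions, including a class never seen in training. The attribute names and values are made up for the example.</p>
<pre><code>
import numpy as np

# Toy zero-shot classification sketch: gestures are described by semantic
# attributes (e.g., "one hand", "circular motion"); an unseen gesture is
# labeled by the class whose attribute vector is closest to the attributes
# predicted from its features. All attribute values here are illustrative.

CLASS_ATTRIBUTES = {
    "swipe_left": np.array([1, 0, 1, 0]),
    "rotate":     np.array([1, 1, 0, 0]),
    "zoom_in":    np.array([0, 1, 0, 1]),   # class with no training examples
}

def zero_shot_classify(predicted_attributes):
    """Pick the class whose semantic description best matches the prediction."""
    return min(CLASS_ATTRIBUTES,
               key=lambda name: np.linalg.norm(CLASS_ATTRIBUTES[name] - predicted_attributes))

if __name__ == "__main__":
    print(zero_shot_classify(np.array([0.1, 0.9, 0.2, 0.8])))  # -> "zoom_in"
</code></pre>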
<p><a href="https://Engineering.Purdue.Edu/isat/wp-content/uploads/2018/06/IMG_7104.jpg"><img class="aligncenter wp-image-420 size-large" src="https://Engineering.Purdue.Edu/isat/wp-content/uploads/2018/06/IMG_7104-853x1024.jpg" alt="IMG_7104" width="853" height="1024" /></a></p>
]]></content:encoded>
			<wfw:commentRss>https://Engineering.Purdue.Edu/isat/paper-accepted-at-icmi-2017/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>System for Telementoring with Augmented Reality Project</title>
		<link>https://Engineering.Purdue.Edu/isat/system-for-telementoring-with-augmented-reality-project/</link>
		<comments>https://Engineering.Purdue.Edu/isat/system-for-telementoring-with-augmented-reality-project/#comments</comments>
		<pubDate>Sun, 31 May 2015 21:27:13 +0000</pubDate>
		<dc:creator><![CDATA[ISAT]]></dc:creator>
				<category><![CDATA[Research]]></category>

		<guid isPermaLink="false">https://Engineering.Purdue.Edu/isat/?p=219</guid>
		<description><![CDATA[The System for Telementoring with Augmented Reality (STAR) project is a multi-institution project sponsored by the Office of the Assistant Secretary of Defense for Health Affairs. Link to STAR project website. &#160; The primary objective of this project is to research, develop, and validate an augmented reality system that would improve the…<p class="continue-reading-button"> <a class="continue-reading-link" href="https://Engineering.Purdue.Edu/isat/system-for-telementoring-with-augmented-reality-project/">Continue reading<i class="crycon-right-dir"></i></a></p>]]></description>
				<content:encoded><![CDATA[<p><strong>The System for Telementoring with Augmented Reality (STAR) project is a multi-institution project sponsored by the Office of the Assistant Secretary of Defense for Health Affairs. <a href="https://engineering.purdue.edu/starproj/" target="_blank">Link to STAR project website</a>.</strong></p>
<p>&nbsp;</p>
<p>The primary objective of this project is to research, develop, and validate an augmented reality system that would improve the effectiveness of telementoring between surgeons. Telementoring involves procedural guidance of a trainee surgeon by an expert surgeon from afar using telecommunication.</p>
<p>&nbsp;</p>
<p>The STAR project is an innovative platform that relies on table and touchscreen displays, transparent screens, tablets, and color and depth sensors to increase the quality of the communication between mentor and trainee.</p>
<p>&nbsp;</p>
<p><strong>Institutions involved:</strong></p>
<ul>
<li>Purdue University, School of Industrial Engineering</li>
<li>Purdue University, School of Computer Science</li>
<li>Indiana University, School of Medicine</li>
</ul>
<p>&nbsp;</p>
<p><a href="https://Engineering.Purdue.Edu/isat/wp-content/uploads/2015/05/Ipad-Surgery.jpg"><img class="aligncenter wp-image-222 size-medium" src="https://Engineering.Purdue.Edu/isat/wp-content/uploads/2015/05/Ipad-Surgery-300x225.jpg" alt="Ipad Surgery" width="300" height="225" /></a></p>
<p>With STAR, we want to increase the mentor&#8217;s and trainee&#8217;s sense of co-presence through an augmented visual channel that will lead to measurable improvements in the trainee&#8217;s surgical performance.</p>
<p>&nbsp;</p>
<p>Our project has four specific aims. First, we will research, develop, and assess a transparent-display augmented-reality system that allows the seamless augmentation of a trainee surgeon&#8217;s natural view of the surgical field with annotations and illustrations of the current and next steps of the surgical procedure. Second, we will research, develop, and assess a patient-size interaction platform where the mentor can mark, annotate, and zoom in on anatomic regions using gestures performed over a projected image or on a multi-point touch screen. Third, we will validate and refine the proposed STAR platform in the context of practice cricothyroidotomy procedures on a human-patient simulator in a controlled environment. Fourth, we will validate the refined STAR platform in a simulated austere environment. The environment will match, for example, a military echelon Role 2 medical facility, consistent with a forward surgical team with limited resources. A hemorrhage-control procedure will be done within a damage-control laparotomy on a porcine model.</p>
<p>&nbsp;</p>
<p>The STAR project also has military significance. There are three further areas where the proposed system has the potential to improve quantitative and qualitative outcomes in the military healthcare setting. First, telementoring can prevent or diminish the loss of surgical skills by supporting retraining and maintaining competence. Second, it can provide instructional material in a portable and dynamic simulation form to support doctors serving in Iraq and Afghanistan with trauma care on the battlefield. Finally, it will allow recent combat medic graduates to reinforce surgical techniques that were not fully covered during their training curricula.</p>
]]></content:encoded>
			<wfw:commentRss>https://Engineering.Purdue.Edu/isat/system-for-telementoring-with-augmented-reality-project/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>3D Joystick for Robotic Arm Control by Individuals with High Level Spinal Cord Injuries</title>
		<link>https://Engineering.Purdue.Edu/isat/3d-joystick-for-robotic-arm-control-by-individuals-with-high-level-spinal-cord-injuries/</link>
		<comments>https://Engineering.Purdue.Edu/isat/3d-joystick-for-robotic-arm-control-by-individuals-with-high-level-spinal-cord-injuries/#comments</comments>
		<pubDate>Fri, 16 Jan 2015 21:11:49 +0000</pubDate>
		<dc:creator><![CDATA[ISAT]]></dc:creator>
				<category><![CDATA[Research]]></category>

		<guid isPermaLink="false">https://Engineering.Purdue.Edu/isat/?p=139</guid>
		<description><![CDATA[Abstract An innovative 3D joystick was developed to enable quadriplegics due to spinal cord injuries (SCIs) to more independently and efficiently operate a robotic arm as an assistive device. The 3D joystick was compared to two different manual input modalities, a keyboard control and a traditional joystick, in performing experimental…<p class="continue-reading-button"> <a class="continue-reading-link" href="https://Engineering.Purdue.Edu/isat/3d-joystick-for-robotic-arm-control-by-individuals-with-high-level-spinal-cord-injuries/">Continue reading<i class="crycon-right-dir"></i></a></p>]]></description>
				<content:encoded><![CDATA[<h2><strong>Abstract</strong></h2>
<p style="text-align: justify;">An innovative 3D joystick was developed to enable quadriplegics due to spinal cord injuries (SCIs) to more independently and efficiently operate a robotic arm as an assistive device. The 3D joystick was compared to two different manual input modalities, a keyboard control and a traditional joystick, in performing experimental robotic arm tasks by both subjects without disabilities and those with upper extremity mobility impairments. Fittsâ€™s Law targeting and practical pouring tests were conducted to compare the performance andÂ accuracy of the proposed 3D joystick. The Fittsâ€™s law measurements showed that the 3D joystick had the best index of performance (IP), though it required an equivalent number ofÂ operations and errors as the standard robotic arm joystick. The pouring task demonstrated that the 3D joystick took significantly less task completion time and was more accurate than keyboard control. The 3D joystick also showed a decreased learning curve to the other modalities.</p>
<p><a href="https://Engineering.Purdue.Edu/isat/wp-content/uploads/2015/01/3D-Joystick.png"><img class="alignnone size-full wp-image-146" src="https://Engineering.Purdue.Edu/isat/wp-content/uploads/2015/01/3D-Joystick.png" alt="3D Joystick" width="299" height="206" /></a></p>
<h2><strong>Methodology</strong></h2>
<p style="text-align: justify;">The multimodal robotic arm user control systems consisted of three parts: a PC workstation, the different controller types (default joystick, keyboard or 3D joystick), and the actuated robotic arm (JACO)Â TMÂ Robot Manipulator from Kinova Technology as shown in Fig. 1). The default controller for the JACO arm is a traditional joystick to control the movement of certain elements (i.e. arm, wrist) in two dimensions (see top of Fig. 2). Movement of the robotic arm in the 3rdÂ dimension requires rotation of the joystick knob. This motion is extremely difficult or even impossible for individuals to perform with complete high-level (Cervical levels 1-8) SCIs.</p>
<p style="text-align: justify;">
<p style="text-align: justify;">Fig. 1.Â  JACO robotic arm ready to grasp a water bottle. It can also be<br />
mounted to a wheelchair.</p>
<p style="text-align: justify;">Two alternative modalities were developed in this project to serve as superior user controllers for this robotic arm for quadriplegic users. The first alternative input methodÂ developed was through keyboard control (top of Fig. 2). Keyboards are widely used as a direct selection device for efficient and naturally intuitive operation. For keyboard operation, all the functions for robotic control were mapped to specific keystrokes (i.e up, down, left, right, forward, backward, change mode). Three keyboard input control modes were programmed: discrete, continuous and hybrid (a combination of discrete and continuous) modes. During discrete mode, the robotic arm moved in small increments<br />
every time a key was pressed. During continuous mode, theÂ arm would move continuously until stopped or another key to change directions was pressed. During hybrid mode, subjects could toggle between discrete and continuous modes at their discretion.</p>
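<p style="text-align: justify;">The sketch below illustrates the three keyboard modes; the key bindings and step size are assumptions for the example, not the bindings used in the study.</p>
<pre><code>
# Keyboard control mode sketch (assumed key bindings). Discrete mode returns a
# small fixed step per key press; continuous mode returns a velocity direction
# that holds until stopped; hybrid mode lets the user toggle between the two.

KEY_TO_DIRECTION = {"w": (0, 1, 0), "s": (0, -1, 0), "a": (-1, 0, 0),
                    "d": (1, 0, 0), "r": (0, 0, 1), "f": (0, 0, -1)}

def keyboard_command(key, mode, step=0.01):
    """Translate a key press into a robotic arm command for the given mode."""
    if key == "m":                                     # hybrid: toggle discrete <-> continuous
        return ("toggle_mode", None)
    direction = KEY_TO_DIRECTION.get(key)
    if direction is None:
        return ("noop", None)
    if mode == "discrete":
        return ("step", tuple(step * d for d in direction))
    return ("set_velocity", direction)                 # continuous: move until stopped
</code></pre>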
<p style="text-align: justify;"><a href="https://Engineering.Purdue.Edu/isat/wp-content/uploads/2015/01/Picture4.png"><img class="alignnone  wp-image-151" src="https://Engineering.Purdue.Edu/isat/wp-content/uploads/2015/01/Picture4.png" alt="Picture4" width="638" height="477" /></a></p>
<p style="text-align: justify;">Fig.2.Â Subject with a SCI using the 3D joystick to perform the pouring task.</p>
<p style="text-align: justify;">The other alternative control modality was a 3D joystick (Fig. 3) which was originally designed for haptic video game playing by Falcon TechnologyÂ®. It was reprogrammed andÂ adapted as a 3D joystick controller for the robotic arm. A handle developed for users with no finger gripping ability was positioned in the center of the joystick. The 3D joystick provides users a method of directed selection to control the robotic arm elements to move in 3D Euclidean space. The handle of 3D joystick was positioned at the center of the joystick as a home (or rest) position if not used by the user.</p>
<p style="text-align: justify;"><a href="https://Engineering.Purdue.Edu/isat/wp-content/uploads/2015/01/3D-Joystick.png"><img class="alignnone size-full wp-image-142" src="https://Engineering.Purdue.Edu/isat/wp-content/uploads/2015/01/3D-Joystick.png" alt="3D Joystick" width="399" height="274" /></a></p>
<p style="text-align: justify;">Fig.3.Â Â 3D joystick with adapted handle for quadriplegic users.</p>
<p style="text-align: justify;">A force feedback control with a proportional and differential (PD) controller force the handle back to the center after each manipulation. The control diagram for 3D haptic joystick is shown in Fig. 4. A JACO API was used to fictionalize the haptic joystick to achieve 3D control of the robotic arm.</p>
<p style="text-align: justify;"><img class="alignnone size-full wp-image-153" src="https://Engineering.Purdue.Edu/isat/wp-content/uploads/2015/01/Picture5.png" alt="Picture5" width="506" height="306" /></p>
<p style="text-align: justify;">Fig.4. Â 3D joystick control diagram.</p>
]]></content:encoded>
			<wfw:commentRss>https://Engineering.Purdue.Edu/isat/3d-joystick-for-robotic-arm-control-by-individuals-with-high-level-spinal-cord-injuries/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Integrated Vision-Based Robotic Arm Interface for Operators with Upper Limb Mobility Impairments</title>
		<link>https://Engineering.Purdue.Edu/isat/integrated-vision-based-robotic-arm-interface-for-operators-with-upper-limb-mobility-impairments/</link>
		<comments>https://Engineering.Purdue.Edu/isat/integrated-vision-based-robotic-arm-interface-for-operators-with-upper-limb-mobility-impairments/#comments</comments>
		<pubDate>Fri, 16 Jan 2015 18:54:35 +0000</pubDate>
		<dc:creator><![CDATA[ISAT]]></dc:creator>
				<category><![CDATA[Research]]></category>

		<guid isPermaLink="false">https://Engineering.Purdue.Edu/isat/?p=116</guid>
		<description><![CDATA[Abstract An integrated, computer vision-based system was developed to operate a commercial wheelchair-mounted robotic manipulator (WMRM). In this paper, a gesture recognition interface system developed specifically for individuals with upper-level spinal cord injuries (SCIs) was combined with object tracking and face recognition systems to be an efficient, hands-free WMRM controller. In this test system,…<p class="continue-reading-button"> <a class="continue-reading-link" href="https://Engineering.Purdue.Edu/isat/integrated-vision-based-robotic-arm-interface-for-operators-with-upper-limb-mobility-impairments/">Continue reading<i class="crycon-right-dir"></i></a></p>]]></description>
				<content:encoded><![CDATA[<h2 style="text-align: left;"><strong>Abstract</strong></h2>
<p style="text-align: justify;">An integrated, computer vision-based system wasÂ developed to operate a commercial wheelchair-mounted roboticÂ manipulator (WMRM). In this paper, a gesture recognitionÂ interface system developed specifically for individuals withÂ upper-level spinal cord injuries (SCIs) was combined with objectÂ tracking and face recognition systems to be an efficient, hands free WMRM controller. In this test system, two Kinect camerasÂ wereÂ used synergistically to perform a variety of simple objectÂ retrieval tasks. One camera was used to interpret the handÂ gestures to send as commands to control the WMRM and locateÂ the operatorâ€™s face for object positioning. The other sensor wasÂ used to automatically recognize different daily living objects forÂ test subjects to select. The gesture recognitionÂ interfaceÂ incorporated hand detection, tracking and recognitionÂ algorithms to obtain a high recognition accuracy of 97.5% for anÂ eight-gesture lexicon. An object recognition module employingÂ Speeded Up Robust Features (SURF) algorithm was performedÂ and recognition results were sent as a command for â€œcoarseÂ positioningâ€ of the robotic arm near the selected daily livingÂ object. Automatic face detection was also provided as a shortcutÂ for the subjects to position the objects to the face by using aÂ WMRM. Completion time tasks were conducted to compareÂ manual (gestures only) and semi manual (gestures, automaticÂ face detection and object recognition) WMRM control modes.Â The use of automatic face and object detection significantlyÂ increased the completion times for retrieving a variety of dailyÂ living objects.</p>
<p><a href="https://Engineering.Purdue.Edu/isat/wp-content/uploads/2015/01/img2.png"><img class="alignnone size-full wp-image-133" src="https://Engineering.Purdue.Edu/isat/wp-content/uploads/2015/01/img2.png" alt="img2" width="306" height="230" /></a></p>
<h2 style="text-align: left;"><b>System Architecture</b></h2>
<p style="text-align: justify;">The architecture of the proposed system is illustrated in Figure 1. Two KinectÂ® video cameras were employed and served as inputs for the gesture recognition and object detectionÂ modules respectively. The results of these two modules were then passed as commands to the execution modules to control the JACO robotic arm (Kinova, Inc., MontrÃ©al, Canada). Briefly, these modules are described as follows:</p>
<h4 style="text-align: justify;"><em>A. Gesture Recognition Module</em></h4>
<p style="text-align: justify;">The video input from Kinect camera was processed in four stages using for gesture recognition based WMRM system control; foreground segmentation, hand detection, tracking, and hand trajectory recognition stage. Foreground segmentation was used to increase computational efficiency by reducing search range for hand detection and later stage process. The face and hands were detected from the foreground which provided an initialization region for hand tracking stage. The tracked trajectories were then segmented and compared to the pre-constructed motion models and classified them as certain gesture groups. The recognized gesture was then encoded and passed as command to control the WMRM.</p>
<h4><em>B. Object Recognition Module</em></h4>
<p>The goal of the object recognition module is to detect the different daily living objects and assign a unique identifier to each of these objects. A template was created for each object being recognized. These templates were compared to each frame in the video sequence to obtain the best matching object. The results were then encoded and passed as commands to position the robotic manipulator.</p>
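<p>The sketch below shows feature-based template matching in the spirit of this module. The system itself uses SURF; the sketch substitutes ORB, which ships with standard OpenCV builds, and simply counts feature matches per template.</p>
<pre><code>
import cv2

# Feature-based template matching sketch. The module above uses SURF; ORB is
# used here instead because SURF requires a non-free OpenCV build. Each stored
# template is matched against the current frame, and the object identifier
# with the most matches wins.

def best_matching_object(frame_gray, templates):
    """templates: dict mapping object id -> grayscale template image."""
    orb = cv2.ORB_create()
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    _, frame_desc = orb.detectAndCompute(frame_gray, None)
    if frame_desc is None:
        return None
    scores = {}
    for obj_id, template in templates.items():
        _, tmpl_desc = orb.detectAndCompute(template, None)
        if tmpl_desc is not None:
            scores[obj_id] = len(matcher.match(tmpl_desc, frame_desc))
    return max(scores, key=scores.get) if scores else None
</code></pre>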
<h4><em>C. Automatic Face Detection Module</em></h4>
<p>A face detector was employed in this module to perform automatic face detection. The goal was to provide a shortcut for the subjects to position objects in front of the face by controlling the robotic arm.</p>
<h4 style="text-align: justify;"><em>D. Execution Module</em></h4>
<p style="text-align: justify;">The robotic arm was programmed as a wrapper using JACO API under C# environment which was then called by the main program. The JACO robotic arm was mounted to the seat frame of a motorized wheelchair. The robotic arm wascontrolled by the encoded commands from gesture recognition,Â automatic face detection and object recognition module.</p>
<p><img class="  wp-image-121 aligncenter" src="https://Engineering.Purdue.Edu/isat/wp-content/uploads/2015/01/Img.png" alt="Img" width="623" height="428" /></p>
<p>Fig. 1. System Architecture</p>
]]></content:encoded>
			<wfw:commentRss>https://Engineering.Purdue.Edu/isat/integrated-vision-based-robotic-arm-interface-for-operators-with-upper-limb-mobility-impairments/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Multimodal Image Perception System for Blind or Visually Impaired People</title>
		<link>https://Engineering.Purdue.Edu/isat/multimodal-image-perception-system-for-blind-or-visually-impaired-people/</link>
		<comments>https://Engineering.Purdue.Edu/isat/multimodal-image-perception-system-for-blind-or-visually-impaired-people/#comments</comments>
		<pubDate>Thu, 20 Nov 2014 20:08:17 +0000</pubDate>
		<dc:creator><![CDATA[ISAT]]></dc:creator>
				<category><![CDATA[Research]]></category>

		<guid isPermaLink="false">https://Engineering.Purdue.Edu/isat/?p=93</guid>
		<description><![CDATA[Currently there is no suitable substitute technology to enable blind or visually impaired (BVI) people to interpret visual scientific data commonly generated during lab experimentation in real time, such as performing light microscopy, spectrometry, and observing chemical reactions. This reliance upon visual interpretation of scientific data certainly impedes students and…<p class="continue-reading-button"> <a class="continue-reading-link" href="https://Engineering.Purdue.Edu/isat/multimodal-image-perception-system-for-blind-or-visually-impaired-people/">Continue reading<i class="crycon-right-dir"></i></a></p>]]></description>
				<content:encoded><![CDATA[<p class="p1"><span class="s1">Currently there is no suitable substitute technology to enable blind or visually impaired (BVI) people to interpret visual scientific data commonly generated during lab experimentation in real time, such as performing light microscopy, spectrometry, and observing chemical reactions. This reliance upon visual interpretation of scientific data certainly impedes students and scientists that are BVI from advancing in careers in medicine, biology, chemistry, and other scientific fields. To address this challenge, a real-time multimodal image perception system is developed to transform standard laboratory blood smear images for persons with BVI to perceive, employing a combination of auditory, haptic, and vibro-tactile feedbacks. These sensory feedbacks are used to convey visual information through alternative perceptual channels, thus creating a palette of multimodal, sensorial information.Â </span></p>
<p class="p1"><a href="https://Engineering.Purdue.Edu/isat/wp-content/uploads/2014/11/Photo-Apr-15-2-39-19-PM-e1416354211392.jpg"><img class="alignnone wp-image-94 size-medium" src="https://Engineering.Purdue.Edu/isat/wp-content/uploads/2014/11/Photo-Apr-15-2-39-19-PM-e1416354211392-300x185.jpg" alt="" width="300" height="185" /></a></p>
<p class="p1"><span id="more-93"></span></p>
<h1 class="p1">I. Introduction</h1>
<p>From the 2011 National Health Interview Survey (NHIS) Preliminary Report, it is estimated that 21.2 million adult Americans, namely more than 10% of all adult Americans, have trouble seeing. Among the 6.6 million working-age adults who are BVI, 64% did not finish high school and only approximately 6% earned a Bachelor&#8217;s or higher degree [1]. The lack of proper and effective assistive technologies (AT) can be considered a major roadblock preventing individuals who are BVI from actively participating in science and advanced research activities [2]. It is still a challenge for them to perceive and understand scientific visual data acquired during wet lab experimentation, such as viewing live specimens through a stereo microscope or histological samples through light microscopy (LM). According to Science and Engineering Indicators 2014 published by the NSF, no more than 1% of blind or visually impaired people are involved in advanced science and engineering research and receive doctoral degrees [3].</p>
<p>With current single-modality human-computer interfaces (HCI), only limited visual information can be accessed due to the limitations of each individual sense. Although tactile-vision sensory substitution (TVSS) technologies, such as tongue electrotactile arrays [4] and tactile pictures [5], have been demonstrated to be capable of conveying visual information [6] about spatial phenomenology [7], the low resolution of somatosensory display arrays has always limited the ability of these methods to convey complex image information. Auditory-vision sensory substitution has also been studied for image perception [8], [9]. Trained early-blind participants showed increased performance in localization and object recognition [10] through this substitution. However, auditory-vision substitution always involves the memorization of different audio forms, and training is required to map the different audio stimuli to visual cues. In addition, the focus on auditory feedback can decrease subjects&#8217; ability to get information from the environment [11]. The current gap is that existing solutions cannot convey the richness, complexity, and amount of data available to users without disabilities. In this study, a real-time multimodal image perception approach is investigated that provides feedback through multiple sensory channels, including auditory, haptic, and vibrotactile. Through the integration of multiple sensorial substitutions, participants supported by the proposed platform showed higher analytic performance than when using a standard interface based on only one sensory feedback.</p>
<p>&nbsp;</p>
<p class="p1">
]]></content:encoded>
			<wfw:commentRss>https://Engineering.Purdue.Edu/isat/multimodal-image-perception-system-for-blind-or-visually-impaired-people/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
