Projects of TEJIMA Lab.
The main targets of the Tejima Laboratory are basic studies of assistive
technologies and the development of assistive devices for the aged
and the challenged. The laboratory was established in April 1996.
Our main projects are as follows:
Efficient control method by voice operation
Two-dimensional position control by voice instructions was experimentally investigated as an effective voice operation method for rehabilitation robots and wheelchairs. The ambiguity and delay of voice instructions and the speed of movement are discussed.
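The interaction between command delay and travel speed can be illustrated with a toy calculation (the speeds, delays and function names below are assumptions for illustration, not values from the experiments): a "stop" command takes effect only after the recognition delay, so the chair overshoots in proportion to its speed.

```python
# Toy illustration (assumed numbers): a "stop" uttered at the target
# takes effect only after the recognition delay, so the wheelchair
# overshoots the target by speed * delay.
def overshoot(speed_m_s, delay_s):
    """Distance travelled between uttering 'stop' and actually stopping."""
    return speed_m_s * delay_s

def compensated_stop_point(target_m, speed_m_s, delay_s):
    """Where 'stop' should be uttered so the chair halts at the target."""
    return target_m - overshoot(speed_m_s, delay_s)
```

At 0.5 m/s with a 0.8 s recognition delay, for example, the chair travels 0.4 m after the command, so either the speed must be kept low or the user must anticipate the stop point.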
This project has been performed in collaboration with the National Rehabilitation Center for the Disabled.
- N. Tejima and S. Uchiyama: Effective Control Method for a Voice-Controlled Wheelchair, Med & Biol. Eng. Comput. 37(suppl.), pp.1378-1379 (1999)
- N. Tejima: Factors Influencing Drivability for a Voice-Controlled Wheelchair, World Congress on Medical Physics and Biomedical Engineering, FR-Ba204-03, (2000)
New earring type input device for quadriplegics
A new earring type input device is being developed that enables quadriplegics to operate computers, wheelchairs or robots with great ease. The earring allows a sensor to detect the motions of the head.
Various systems that use ultrasonic waves or magnetic fields to measure the head's position have been developed, but existing systems are unattractive to users and cumbersome to put on and take off. They are connected to their controllers or power sources by electric cables, and users do not want to wear them when not in use because of their strange appearance. Users may be willing to wear an earring type input device even when not using it, if it is smart and cute.
Each earring consists of an ultrasonic transmitter, a coil and an amplifier, works without electric cables, and has been fabricated into a compact unit 8.5 mm in size. The distances between the earring and three ultrasonic receivers mounted on the wheelchair backboard are measured by electromagnetic coupling and the propagation delay of the ultrasonic waves. The position of the earring can be calculated from the three distance measurements, and the head position and posture are calculated from the positions of the earrings on both ears. The plan is to make the device much more compact and to study hair-ornament type devices in addition to the earring type.
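Computing the earring position from the three measured distances is standard sphere-intersection trilateration. A minimal sketch (the receiver coordinates and the choice of the solution in front of the backboard are assumptions):

```python
import numpy as np

def trilaterate(p1, p2, p3, d1, d2, d3):
    """Position of the earring from three receiver positions and the
    three measured distances (sphere-intersection trilateration)."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    # Orthonormal frame with p1 at the origin and p2 on the x-axis.
    ex = (p2 - p1) / np.linalg.norm(p2 - p1)
    i = np.dot(ex, p3 - p1)
    ey = p3 - p1 - i * ex
    ey /= np.linalg.norm(ey)
    ez = np.cross(ex, ey)
    d = np.linalg.norm(p2 - p1)
    j = np.dot(ey, p3 - p1)
    # Solve the pairwise-subtracted sphere equations in the new frame.
    x = (d1**2 - d2**2 + d**2) / (2 * d)
    y = (d1**2 - d3**2 + i**2 + j**2) / (2 * j) - (i / j) * x
    z2 = d1**2 - x**2 - y**2
    z = np.sqrt(max(z2, 0.0))  # assume the earring is in front of the backboard
    return p1 + x * ex + y * ey + z * ez
```

The two-sphere ambiguity (±z) is resolved here by assuming the head is always on the front side of the receiver plane.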
This project has been performed in collaboration with the National Rehabilitation Center for the Disabled and is supported by a fund of the Japan Association for Technical Aids.
Voice to text conversion system for people with hearing impairments
Hearing impaired people have difficulty acquiring information from spoken language. In Japan, communication with a hearing impaired audience is mainly done by translation into sign language and/or by overhead projector. Although sign language is the most important method of communication, complaints that it is not easy to learn are sometimes made, especially by people who were deafened during adulthood or who are hard of hearing. In an effort to achieve better communication, we were asked to develop a device for translating spoken Japanese into written Japanese that could be displayed to the audience.
The central unit of the STENOPCON system is a commercial personal computer used for signal processing. A special typewriter was developed for this purpose, and stenographic codes are fed into the computer from it. The input is processed, converted into written form and then displayed to the audience.
It is necessary to mention some relevant features of written Japanese before STENOPCON can be introduced in detail. Written Japanese consists of about 100 phonographic kana and more than 3000 ideographic kanji. Kana comprises two sets of about 50 letters: hiragana and katakana. Hiragana is normally used for written texts, while katakana is used to express foreign words and names. Text without kanji is difficult to understand, mainly because spoken Japanese has many homonyms. Therefore, real-time conversion to written Japanese with correct selection of kanji remains a difficult task.
In STENOPCON all speech is processed by a stenographer into stenographic codes using a stenographic typewriter. From among the many Japanese stenographic systems we adopted the Soku-type system, because the others are handwritten stenographies. This was the only way to feed a real-time signal of spoken Japanese into a computer when our development started. The Soku-type system uses a special mechanical typewriter, and we considered it easy to convert its mechanical signal into an electronic signal for computer input of the stenographic code.
The stenographic software that processes the codes on the personal computer contains about 70 rules and 10 code dictionaries, with around 3500 words classified and registered in the dictionaries. Many technical terms relating to disability, especially hearing impairment, are registered there.
Because conversion to kanji by the usual conversion-and-selection method reduces both speed and accuracy, we have registered in the STENOPCON dictionaries much of the kanji and compound-word data that have few homonyms. Although the proportion of kanji represented in this way is estimated at about a quarter, the displayed text is sufficiently readable. Moreover, many foreign words used in Japanese are registered in katakana where there is no homonym.
Special terminology may be needed depending upon the organizer and the purpose of the meeting, and Soku-type stenographers often speed up their input by allocating temporary stenographic codes to such terms, which must be registered in the STENOPCON system beforehand. A stenographic dictionary utility was programmed to simplify the registration of these terms. Any special terminology can easily be registered at the meeting, with duplication against existing stenographic codes checked automatically.
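The dictionary-based conversion described above can be sketched as a greedy longest-match over the incoming code stream (the codes and entries below are invented placeholders, not actual Soku-type codes or STENOPCON internals):

```python
# Hypothetical sketch: converting a stream of stenographic codes into
# text by longest-match lookup in registered dictionaries (the real
# STENOPCON uses about 70 rules and 10 code dictionaries).
DICTIONARY = {
    ("ST1", "ST2"): "聴覚障害",   # compound term registered under one code pair
    ("ST1",): "きこえ",
    ("ST3",): "ロボット",
}

def convert(codes, dictionary=DICTIONARY):
    """Greedy longest-match conversion of stenographic codes to text."""
    out, i = [], 0
    max_len = max(len(k) for k in dictionary)
    while i < len(codes):
        # Try the longest possible code sequence first.
        for n in range(min(max_len, len(codes) - i), 0, -1):
            key = tuple(codes[i:i + n])
            if key in dictionary:
                out.append(dictionary[key])
                i += n
                break
        else:
            out.append("?")   # unregistered code: flag it for the editing operator
            i += 1
    return "".join(out)
```

Registering a compound term under its own code sequence, as in the first entry, is what lets temporary codes for meeting-specific terminology bypass homonym selection entirely.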
If mistakes appear in the displayed text, readers will find it difficult to understand. Mistakes are inevitable when using the system, so we have added a correction function operated by an operator other than the stenographer. The sentences converted by the personal computer are displayed on an editing screen, edited by the operator using a standard keyboard, and then sent to the display screen via an output command issued by the operator.
The STENOPCON system has been used experimentally at international, national and small meetings held for hearing impaired audiences, more than 200 times in total. Any stenographer trained in the Soku-type system can operate STENOPCON easily with little additional training. Even very fast speakers can be followed, and their speech is converted and displayed by the system. There is a time delay between input of the speech and display of the text because of the editing operation, but this does not severely hamper understanding of the content.
STENOPCON is in fact very effective when used as an aid to sign language interpretation. Understanding sign language demands sustained attention, and STENOPCON helps when members of the audience miss some of it. Because the STENOPCON text remains on the screen, the audience can easily catch up with the sign language.
We have collected audience evaluations of STENOPCON. As a result, we are now confident that this system significantly aids hearing impaired people's understanding of meetings, particularly if they do not understand sign language. There are great expectations that it will replace hand-writing on an overhead projector in the future.
It used to be impossible to convert spoken Japanese to written Japanese in real time. From this point of view, the STENOPCON system is useful not only for meetings where people with disabilities are present but also for office use. Stenographers who have used STENOPCON even want to use it in their daily work. If it comes into office use, its cost can be reduced.
This project has been performed in collaboration with the National Rehabilitation Center for the Disabled and supported by the STAC fund of the Ministry of Health and Welfare, Japan.
Algorithm for eating noodles with a rehabilitation robot
Noodles are one of the most difficult foods for a robot to handle because of their shape, flexibility and tendency to intertwine. A general method for a robotic aid enabling a quadriplegic to eat Japanese noodles in broth has been investigated. A commercially available 5-DOF robot with no vision sensor is used to eat noodles with a fork. The user operates it with the following commands: "take the noodles out of the bowl"; "feed me the noodles"; and "stir the noodles in the bowl." When giving the command to take the noodles out of the bowl, the user selects one of three given trajectories: passing the top of the bowl, passing the middle of the bowl, or passing the bottom of the bowl. When noodles can be taken from the bowl, the feed command is given to eat them. When noodles cannot be taken because they lie outside the given trajectories, the stir command is given to change their position. In total, five kinds of commands are available. A cone-shaped bowl that is not fixed to the table is used because noodles can be handled more easily when they gather at the center of the bottom, even if only a few remain.
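The five-command interface can be sketched as a small dispatcher (the command strings, class and method names are assumptions for illustration, not the actual interface):

```python
# Sketch of the five available commands: three "take" trajectories,
# "feed", and "stir" (names are assumptions, not the real protocol).
TRAJECTORIES = ("top", "middle", "bottom")

class NoodleRobot:
    """Records which of the five commands were issued."""
    def __init__(self):
        self.log = []

    def take(self, level):
        if level not in TRAJECTORIES:
            raise ValueError(f"unknown trajectory: {level}")
        self.log.append(f"take:{level}")   # fork passes top/middle/bottom of bowl

    def feed(self):
        self.log.append("feed")            # bring the fork to the user's mouth

    def stir(self):
        self.log.append("stir")            # reposition noodles outside the trajectories

def dispatch(robot, command):
    """Map a user command string onto one of the five robot actions."""
    if command.startswith("take "):
        robot.take(command.split(" ", 1)[1])
    elif command == "feed":
        robot.feed()
    elif command == "stir":
        robot.stir()
    else:
        raise ValueError(f"unknown command: {command}")
```

A typical session alternates "take" attempts with "stir" whenever the noodles have drifted outside the three fixed trajectories, which is exactly why no vision sensor is needed.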
A subject without disabilities experimentally ate a bowl of Japanese udon (thick) and soba (thin) noodles aided by the robot within 10 minutes. In this experiment, commands were given by keyboard. The subjective estimate of the volume of noodles taken per scoop is given in the Table. If the subject judged visually that there were too many noodles on the fork, they were returned to the bowl and taken out again in a smaller quantity. Attempts to reduce an excessive amount to an appropriate one by shaking and rolling the fork failed. Even when only a few noodles were taken on the fork, they were eaten. When only a few noodles remained in the bowl, the robot often failed to take them. From 40 to 70 commands were given per bowl. The robot was unable to take 2 to 10 pieces of short noodles, and one to five noodles were spilled per bowl. A 4-DOF robot proved sufficient for eating noodles: 3 DOF for positioning and one for posturing the fork to scoop. More research on a general algorithm for a robotic noodle-eating aid will be carried out in the future.
Table: Noodles handled by a robot
  Too much             3%
  Appropriate or less  76%
Basic study of a safe rehabilitation robot
Many people with disabilities use rehabilitation robots to assist with the practical activities of daily living, such as feeding themselves, shaving, brushing their teeth, handling light goods and scratching. The MANUS is one of the most well-known and widely used rehabilitation robots for such multipurpose applications, and it has assisted many people with disabilities in performing such tasks for several years. It was reported that many MANUS users thought the MANUS was useful, but not useful enough to gain a totally independent lifestyle, for several reasons: it moves too slowly, it cannot handle heavy goods, and its manipulator arm is not long enough. It would be easy simply to develop a faster, more powerful and bigger robot than the MANUS to meet these requests; however, such a high-performance robot would be too dangerous for everyday use. The goal of our project is to establish a safety strategy for designing rehabilitation robots.
The underlying mechanisms of surface injuries are discussed. It is theoretically clarified that restricting the robot's velocity is the most effective method for preventing surface soft-tissue trauma. If restricting the robot's velocity results in too great a loss of efficiency, the magnitude of the robot's force should be restricted instead. We developed a new mechanism for accomplishing this force limitation, and its static and dynamic characteristics were experimentally evaluated.
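The strategy above, restricting velocity first and falling back to force limitation where slow motion is too inefficient, can be sketched as follows (the numeric limits are illustrative assumptions, not thresholds from the papers):

```python
# Sketch of the safety strategy (limit values are assumptions):
# clamp the commanded velocity, and model the passive force-limitation
# mechanism as a release above a contact-force threshold.
V_MAX = 0.25   # m/s, assumed safe end-effector speed
F_MAX = 15.0   # N, assumed force-limiter release threshold

def limit_velocity(v_cmd):
    """Clamp a commanded speed to the safe maximum in either direction."""
    return max(-V_MAX, min(V_MAX, v_cmd))

def force_limiter_released(contact_force):
    """True when the passive force-limitation mechanism should give way."""
    return abs(contact_force) > F_MAX
```

A purely mechanical limiter has the advantage that it bounds the contact force even if the controller or software fails, which is the motivation for the mechanism developed in this project.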
- N. Tejima: Force Limitation with Automatic Return Mechanism for Risk Reduction of Rehabilitation Robots, Proc. ICORR'99, pp.74-78 (1999)
- N. Tejima: A New Force Limitation Mechanism for Risk Reduction in Human/Robot Contact, Proc. ICMA2000, pp.495-500 (2000)
- N. Tejima: Design Method for Safe Rehabilitation Robots, Proc. ICORR2001: Integration of Assistive Technology in the Information Age, IOS Press, Amsterdam, pp.347-351 (2001)
Psychological effects of a rehabilitation robot
Users of rehabilitation robot systems such as feeding robots often have severe disabilities and sit close to the robot arm, which can be dangerous. The arm is not only a mechanical hazard; it can also be uncomfortable and psychologically frightening. We performed an experiment to assess the psychological effect of the robot hand with respect to its speed and its distance from the human face. A robot arm with a metallic hand turned 180 degrees horizontally in front of the subject's face at a constant speed chosen at random from five settings and with a constant arm length chosen from four settings. The experiment was conducted under two conditions: i) the subjects watched the robot hand's movement, and ii) the subjects watched a ball fixed on the robot base. Six subjects without disabilities, including two mechanical engineers and a housewife who was not accustomed to robots, were asked how uneasy they felt about the robot's movement. Movement of the robot hand close to their faces caused more fear than fast movement.
Encouraging system for the aged
As locomotive capability is one of the most fundamental functional requirements for living independently, gait training in physical therapy is especially important for elderly patients. However, these patients are often apathetic about their future and lack hope in their prospects for recovery. For example, due to its monotonous nature, elderly patients rarely show any enthusiasm for parallel-bar-assisted gait training. Our final goal is to develop a gait training system that encourages elderly patients with audio stimulation, such as the cheery voice of a patient's grandchild, the scolding of a therapist, or cheerful music. We experimentally analyzed the verbal communication of physical therapists, music stimulation and the responses of the elderly, and confirmed the effectiveness of verbal and musical encouragement.
This project is supported by the STAC fund of the Japan Ministry of Health, Labour and Welfare.
- N. Tejima, Y. Takahashi and H. Bunki: Verbal-Encouragement Algorithm in Gait Training for the Elderly, Proc. 20th Annual Int. Conf. IEEE EMBS, pp.2724-2725 (1998)
- N. Tejima and H. Bunki: Effect of Music on Gait Training for the Elderly, Proc. 10th ICBME, p.612 (2000)
Amusement for the aged
The final goal of this project is to create an amusement center or theme park for the elderly. Elderly participants have experimentally played several types of games, puzzles and sports, and their play was recorded on video. We are now trying to extract the characteristics of amusements suited to the elderly.
Please mail to firstname.lastname@example.org.
Last modified: Mon July 16 2001