Please let us know if you have any further questions:

Wheelie is the first computer program capable of translating a user's facial expressions into wheelchair commands without placing any sensors on the user's body, and it is efficient enough for the user to rely on it daily.

Wheelie uses a laptop and Intel’s RealSense facial-recognition camera to capture and interpret nearly 80 points on a person’s face. The software was then programmed to recognize facial movements such as a full smile, half smile, wrinkled nose, kissy face, tongue out, or puffed-out cheeks, and to map those expressions to driving the wheelchair forward or backward, turning left or right, or stopping.
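The mapping from expressions to drive commands can be pictured with a minimal sketch. This is an illustration only, not HOOBOX's actual code: the label names, command names, and the `command_for` helper are all assumptions, and in the real system the expression label would come from a classifier running on the ~80 facial points captured by the RealSense camera.

```python
from typing import Optional

# Hypothetical expression-to-command table; labels and commands are
# assumed names, not HOOBOX's real identifiers.
EXPRESSION_TO_COMMAND = {
    "kissy_face":    "forward",
    "tongue_out":    "backward",
    "half_smile":    "turn_right",
    "wrinkled_nose": "turn_left",
    "full_smile":    "stop",
}

def command_for(expression: str) -> Optional[str]:
    """Translate a recognized expression label into a drive command.

    Returns None for expressions that are not assigned to any command,
    so unrelated facial movements are simply ignored.
    """
    return EXPRESSION_TO_COMMAND.get(expression)
```

A table like this also makes the per-user customization described below straightforward: reassigning a gesture is just a matter of editing one entry.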

Yes. People have different needs and abilities. The goal is to find facial cues that are comfortable for, say, stroke patients to perform. Most people prefer a kissy face to drive forward, a half smile to turn right, tongue out to drive backward, a wrinkled nose to turn left, and a full smile to stop the wheelchair.

We have been working hard on this. Our current classifier allows the user to hold a normal conversation while driving: the system recognizes that the user is talking, so conversational expressions do not interfere with the controls. You can also disable and re-enable the interface at any time by voice command.
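One common way to keep conversation from triggering commands is to require an expression to be held steadily for a minimum time, and to suppress everything while speech is detected. The sketch below is an assumption about how such gating could work, not HOOBOX's implementation; the class name, the hold threshold, and the `is_talking` signal (which would come from a separate speech detector) are all hypothetical.

```python
import time

class CommandGate:
    """Confirm an expression only if it is held long enough
    and the user is not currently talking (hypothetical sketch)."""

    def __init__(self, hold_seconds: float = 0.8):  # assumed threshold
        self.hold_seconds = hold_seconds
        self._current = None   # expression currently being held
        self._since = 0.0      # when that expression first appeared

    def update(self, expression, is_talking, now=None):
        """Feed one classifier frame; return a confirmed expression or None."""
        now = time.monotonic() if now is None else now
        if is_talking or expression is None:
            self._current = None          # reset while the user talks
            return None
        if expression != self._current:
            self._current, self._since = expression, now
            return None                   # just started; not held yet
        if now - self._since >= self.hold_seconds:
            return expression             # held long enough: confirmed
        return None
```

With this kind of gate, a brief smile mid-sentence resets and never fires, while a deliberately held expression passes through.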

No. If you already have a motorized wheelchair with a joystick, you can use our adaptor, HOOBOX Gimme. The adaptor lets your wheelchair communicate with HOOBOX Wheelie via USB (first version) or WiFi (next generation). Gimme is not available yet.

No. You can run Wheelie on a laptop, on an embedded computer such as an Intel® NUC, or on a tablet with a USB 3.0 port, such as the Microsoft Surface. You also need an Intel® RealSense camera.

In the first generation, only facial expressions will be available. The second-generation Wheelie prototype can already recognize voice commands as well as head and iris movements.

  1. Demand: There is a group of people who cannot handle a joystick because they have partially or fully lost control of their hands. The ability to perform facial expressions is one of the last skills humans lose. Of course, there is a subgroup of people who cannot perform facial expressions, and we are creating innovative solutions for them as well.
  2. Natural ability: Humans are remarkably good at making facial expressions. We use them in our conversations, whether spoken or written (through emojis), and we have been making them since childhood. A face can produce many distinct expressions, which makes it well suited to controlling and driving things.

Imagine a person who has suffered an accident and can no longer move their arms or hands. How could they control a wheelchair?

Our research group investigated a wide range of mechanisms for this purpose, and the first problem that came up was that most of these interfaces required the user to wear a sensor on the body.

Examples include sensors placed on the user's face to capture the contraction and relaxation of facial muscles, goggles with cameras to track iris movement, air tubes to measure how hard the user blew through the mouth or exhaled through the nose, and even sensors placed on the head to capture brain signals. The mechanisms that required no sensors on the body were too inefficient for day-to-day use.

We compared mobility technologies developed 10 years ago with those available today and noticed that the improvements were not very significant. So we accepted the challenge of using our knowledge in assistive robotics to create the next generation of wheelchair-driving interfaces: interfaces that require no sensors on the user's body yet are efficient enough for day-to-day use.

People suffer from conditions that limit the use of their hands and arms, such as cerebral palsy or the aftermath of a stroke. We believe that more than 1.3 million people worldwide could benefit from this technology. We are talking about a group of people for whom the commercially available options are still limited.

The startup recently received its first investment. In this first stage, we are focused on delivering a first prototype at the end of 2016. We expect the product to be available in the first half of 2019.

HOOBOX is a startup that grew out of the postdoctoral project of Dr. Paulo Gurgel Pinheiro at the School of Electrical and Computer Engineering of Brazil's State University of Campinas (FEEC/Unicamp). The project's focus was studying and developing innovative ways to control a wheelchair.

All existing solutions, in one way or another, required the user to wear sensors on the body; the solutions that did not were too inefficient to be practical in everyday life.

We listened to people and found that they wanted a natural, comfortable, and reliable interface. We then decided to found a startup to attract investment and partners and turn the prototype into a real product.

HOOBOX was born with the same purpose as most startups: to develop an innovative solution to a real problem and bring it to the people. The team consists of engineers, scientists, and physiotherapists who are passionate about innovation.

We believe we can leverage a person's strongest abilities to compensate for their impairment, improving not only mobility and autonomy but also self-esteem.

Feel free to use any of these three: HOOBOX, HOO.BOX, or HOO-BOX. Our SEO manager would prefer "HOOBOX Robotics", but it's totally fine to use the others.

Currently HOOBOX has the backing of FAPESP, Unicamp (State University of Campinas), and Intel® through the Intel® Software Innovator program.

HOOBOX is always looking for new partners: industry, companies, research centers, government agencies, rehabilitation centers, hospitals, press, researchers, investors, and anyone interested in the project. If you are interested, do not hesitate to email us: