Inspiration

Transport has been and always will be one of the greatest logistical challenges facing humankind. The advent of self-driving cars and drones is certainly an eye-catching advance in transportation technology, but we believe this leap has largely ignored the actual needs of everyday people.

...but what are those needs?

It seems that hardly a day goes by without many of us wishing we had a second set of hands to hold or carry something. We're not asking for anything crazy like a robotic arm or a fully humanoid robot that clunks around our homes - all we really want is something that stays out of the way but is always ready to lend a helping hand, a surface, or even a quick lift from point A to point B.

There is also a sizable demographic that has been left out of these recent transportation revolutions: people with disabilities. Many disabled persons lack the physical ability to reliably guide themselves through their environment. Even with a caretaker, they depend on someone else's physical exertion to get around their own personal space. What if there was a transportation revolution that positively affected the nurses, patients, and elderly persons of our world?

Enter Perssistant, the persistent assistant.

What it does

Perssistant is a multi-function autonomous vehicle capable of serving humans in nearly every environment imaginable.

It has two core modes of operation: follow and manual. In follow mode, Perssistant uses its Xbox Kinect camera to analyze its environment and identify a human through shape recognition. It's pretty simple: when the human (the user) moves, so does Perssistant; when the human stops, Perssistant stops too, at a safe but within-reach distance of five feet. The user need not worry about Perssistant colliding with anything in its environment thanks to our cautious obstacle avoidance system, which ensures the vehicle comes to a complete stop well short of any obstruction. In contrast, manual mode entrusts control of Perssistant entirely to the user via a mobile app. The app provides both the interface for switching Perssistant's operating mode and an on-screen directional pad for steering and maneuvering the vehicle.
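To make the follow behavior concrete, here is a minimal sketch of the kind of stop/go decision involved, assuming the Kinect depth frame is already in hand as a NumPy array of millimeter readings (e.g. via libfreenect's Python bindings). The thresholds and the function itself are illustrative, not our exact code:

```python
import numpy as np

FOLLOW_DISTANCE_MM = 1524   # ~5 feet: the stop distance described above
OBSTACLE_STOP_MM = 800      # hypothetical safety margin for obstructions

def decide_drive_command(depth_frame: np.ndarray) -> str:
    """Return 'stop' or 'forward' based on the nearest reading dead ahead."""
    # Look only at the center third of the frame, i.e. what is in our path.
    width = depth_frame.shape[1]
    center = depth_frame[:, width // 3 : 2 * width // 3]
    valid = center[center > 0]          # the Kinect reports 0 for "no reading"
    if valid.size == 0:
        return "stop"                   # nothing visible: err on the side of caution
    nearest_mm = int(valid.min())
    if nearest_mm < OBSTACLE_STOP_MM:
        return "stop"                   # obstacle avoidance: halt well short
    if nearest_mm <= FOLLOW_DISTANCE_MM:
        return "stop"                   # user within ~5 ft: hold position
    return "forward"                    # user moved off: follow
```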

While Perssistant can move itself in either of these two control modes, we see nearly infinite applications for what it can actually move. The following is a small sampling of the applications we've envisioned -

  • Nurses in nursing homes or hospitals can effortlessly lead residents throughout the facility by merely walking in front of them
  • Mechanics constantly need tools that can be scattered around the shop in many different cabinets and chests; with Perssistant, a mechanic could have all of his or her tools within arm's reach at any given time.
  • The men and women of our nation's armed forces are often tasked with lugging heavy equipment around all day long, but Perssistant could transfer these loads off their backs and onto its own.
  • Golfers at walk-only courses can use Perssistant as their own personal caddy on the links.
  • Grocery and big-box stores can use Perssistant for their re-stocking operations in order to ensure that their employees avoid any collisions with customers, reducing the stores' liabilities.
  • Waiters and waitresses no longer need to balance 8 different plates on their arms, as Perssistant can aid them in delivering entrees with maximum efficiency, freeing time to provide better personal service to their diners.

How we built it

We salvaged parts from an old wheelchair and combined them with our own motor controller and microcontroller. Our first step was figuring out how to bring all of these together in a safe and comfortable design. We bought a couple of 2x4s and drew up a frame that would give us enough space for both the human and all of the electronics. Once constructed, we joined a rectangular wooden box with the front wheel mounts and the rear motor mounts, with enough space in between to sit comfortably. The robot uses independently driven rear wheels and pivoting front wheels, which meant the seat had to be mounted near the rear to prevent wheel slippage.

Once the general frame was constructed and the wheels mounted, we began to tie in all of the electronic components so that they were both out of the way and functional. The laptop underneath the rider does all of the image processing from the Kinect mounted up front and receives the data sent via Wi-Fi. It communicates with the Arduino, which uses PWM to communicate with the motor controller, which in turn drives each motor separately.

On the software side, the robot behaves much the same no matter which drive mode it's in. When using the app, the phone sends values over Wi-Fi to the computer depending on what the robot is supposed to do, and the computer interprets those and writes them to the Arduino via a Python script. In tracking mode, the computer processes the images and, based on those, sends the Arduino the proper commands.

The computer vision system is two-faceted. To track a human safely, we implement generalized object tracking using the depth data from the Kinect, performed with OpenCV in Python. As an added feature, we integrated a deep-neural-network-based facial tracker that we found online (see credits), which makes the robot come to a stop if it sees a human face. Overall, we use a number of different tools that communicate seamlessly to provide a safe ride to a passenger or a helpful assistant to someone who needs it.
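As a concrete illustration of the laptop-to-Arduino leg of that pipeline, here is a hedged sketch using pySerial; the port name, baud rate, and one-byte command protocol are assumptions for illustration rather than our exact code:

```python
import serial  # pySerial

# One-byte command protocol; this particular mapping is an illustrative assumption.
COMMANDS = {"forward": b"F", "left": b"L", "right": b"R", "stop": b"S"}

# Port name and baud rate are assumptions; they depend on the machine.
arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=1)

def send_command(name: str) -> None:
    """Write a single command byte; the Arduino turns it into PWM duty cycles."""
    arduino.write(COMMANDS[name])

send_command("forward")  # e.g. issued by the tracker or by the app handler
```

On the Arduino side, each received byte would be mapped to a pair of PWM duty cycles, one per independently driven rear wheel.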

Challenges we ran into

One of the major challenges we ran into was realizing that we wouldn't have enough time to complete our initial idea of a wheelchair evacuation technology. We didn't have much experience with ROS, and learning it was a slow but interesting process that we simply didn't have time for given the other challenges we were facing. We also ran into a few mechanical challenges with our motor mounts, which simply couldn't handle the torque their motors were producing. We fixed this by wedging a block between each motor and the frame and securing the two together around the block with zip ties, which reduced the motors' range of motion and helped stabilize the ride. On the software side, we had to tread carefully when integrating open-source facial recognition software to meet our specific needs. Our Android application's biggest challenge was getting the app to communicate with the computer; it refused to perform some of the logical operations it needed to. As a group, we all learned from these challenges to complete a robot that we believe serves many purposes.
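For reference, the phone-to-laptop link described above amounts to something like the following minimal sketch, assuming the app sends plain-text commands over a TCP socket on the local Wi-Fi network; the port and message format here are illustrative assumptions:

```python
import socket

PORT = 5005  # illustrative port; any free port on the laptop works

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.bind(("", PORT))             # listen on all interfaces on the Wi-Fi network
    server.listen(1)
    conn, _addr = server.accept()       # wait for the phone to connect
    with conn:
        while True:
            data = conn.recv(64)
            if not data:                # app disconnected
                break
            command = data.decode().strip()   # e.g. "forward", "stop"
            print("received:", command)       # in practice, forwarded to the Arduino
```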

Accomplishments that we're proud of

Our biggest point of pride with this project is not necessarily the final product itself but rather our ability to pivot our original idea to create this submission. We had originally intended to use Simultaneous Localization and Mapping (SLAM) to create a model of Perssistant's environment. While we made significant steps toward this goal, including the from-scratch development of a clever wheel encoder using an infrared LED and a Mountain Dew 24-pack box, we ultimately decided that we would not be able to complete all of the requisite steps to use this technique by the end of our 36 hours here at HackISU. Accordingly, we set everything down and started the project virtually from scratch with only about 18 hours to go in the event, leading us to the product we are presenting.

What we learned

With our team consisting of electrical, computer, and software engineers, all of us were exposed to different aspects of the hacking process that we never get to learn or practice in our typical academic coursework. The EEs on the team got to hone their programming skills, while our lone SE was lucky enough to receive a crash course in electronics wiring.

What's next for Perssistant

Our original goal when we began this project was to use the SLAM model-generation technique to determine the quickest evacuation routes out of buildings for wheelchair-bound individuals. While such persons could still use our mobile app to drive themselves to safety, we'd ultimately like to develop a third operating mode for Perssistant which would enable a fully-autonomous evacuation.

We'd also like to optimize the chassis of the vehicle to interface better with whatever cargo loads or seats a user wants their Perssistant to carry, whether it's tools, food, or heavy cargo.

Thank you!

We want to thank HackISU and all of the amazing sponsors that make this event possible. This was the second hackathon for three of us and the first for the fourth, but we're excited to attend many more thanks to your support.

Credits: Facial Tracking Network: https://github.com/oarriaga/face_classification
