Sensor Suit

The sensor suit is a layered, multidisciplinary idea for creating a light power armor. While I do not expect to achieve anything truly useful, at least Layer 1 would be a great experiment in non-traditional user interfacing.

Layer 1

Head-mounted equipment:

  • EEG (isolated power, Mindflex based; see the parsing sketch after this list)
  • Eye Tracking (Ardueye? Analog IR ratio? Hackaday has covered multiple)
  • Selective noise cancelling headset/microphones
  • Data link? (Helmet may be independent due to safety concerns)
  • IMU w/ GPS
  • 360 degree camera
  • Look Down Display (Test unit 1 will be Myvu Crystal glasses)
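
The Mindflex tap is well documented: the toy's NeuroSky board streams ThinkGear packets over a 9600 baud serial line. Below is a minimal parsing sketch; the port name is an assumption and depends on how the (electrically isolated) link reaches the main CPU.

    # Minimal NeuroSky ThinkGear parser for a Mindflex serial tap.
    # Assumes the toy's TX pin feeds a UART at 9600 baud; the port name
    # is a guess for a Raspberry Pi.
    import serial

    PORT = "/dev/ttyAMA0"  # hypothetical; adjust to the actual UART

    def read_packet(ser):
        """Block until a checksum-valid ThinkGear packet arrives."""
        while True:
            # Packets begin with two 0xAA sync bytes.
            if ser.read(1) != b"\xaa" or ser.read(1) != b"\xaa":
                continue
            length = ser.read(1)[0]
            if length > 169:              # 0xAA here means we must re-sync
                continue
            payload = ser.read(length)
            checksum = ser.read(1)[0]
            if (~sum(payload)) & 0xFF == checksum:
                return payload

    def parse_attention(payload):
        """Extract the 0-100 eSense attention value, if present."""
        i = 0
        while i < len(payload):
            code = payload[i]
            if code == 0x04:              # attention row
                return payload[i + 1]
            if code >= 0x80:              # multi-byte rows carry a length byte
                i += 2 + payload[i + 1]
            else:                         # other single-byte rows
                i += 2
        return None

    if __name__ == "__main__":
        with serial.Serial(PORT, 9600) as ser:
            while True:
                attention = parse_attention(read_packet(ser))
                if attention is not None:
                    print("attention:", attention)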

Body Equipment:

  • Joint position sensors (may be magnetic encoders or based upon multiple IMUs)
  • EMG
  • Temperature sensors
  • Haptic feedback

Miscellaneous:

  • Battery pack
  • Main CPU (Test unit 1 will be Raspberry Pi)

Layer 2

  • Haptic feedback actuators (magnetostrictive or muscle wire)

Layer 3

  • Support mechanics (load transfer to ground)
  • Self supporting structure

Layer 4

  • External sensors

The idea behind the head-mounted equipment is to provide a more immersive, AR-style control helmet that does NOT require any hand access for basic operations. This requires an "eye mouse" and voice control at minimum. I hope to use EEG signals as an additional quick trigger for actions (say, automatically zooming on an object the eye is focusing on when concentration is detected).
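
As a sketch of how those pieces might combine (the gaze source, attention source, and zoom action are all hypothetical stand-ins here): hold a fixation for half a second while the EEG attention value is high, and the display zooms at the fixation point.

    # Hypothetical eye-mouse loop: fixation + concentration = zoom.
    import time

    ATTENTION_THRESHOLD = 70   # 0-100 eSense value; tuning is guesswork
    DWELL_SECONDS = 0.5        # gaze must hold roughly still this long
    DWELL_RADIUS_PX = 40       # allowed gaze wander during a fixation

    def eye_mouse_loop(get_gaze, get_attention, zoom_at):
        anchor, dwell_start = None, 0.0
        while True:
            x, y = get_gaze()                 # pixel coords on the display
            if anchor is None or abs(x - anchor[0]) > DWELL_RADIUS_PX \
                    or abs(y - anchor[1]) > DWELL_RADIUS_PX:
                anchor, dwell_start = (x, y), time.time()   # fixation broke
            fixated = time.time() - dwell_start >= DWELL_SECONDS
            if fixated and get_attention() >= ATTENTION_THRESHOLD:
                zoom_at(*anchor)              # concentration + fixation
                anchor = None                 # re-arm after triggering
            time.sleep(0.02)                  # ~50 Hz polling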

Expanding this down to the body requires tracking the major joints (and the hands), which may be easier to do with independent IMUs than with structured angle sensors. Add in body-tracking sensors (temperature, stretch, EMG) and both the intent and the condition of the wearer can be determined. This may be useful, for instance, to make sure that the suit doesn't overheat the operator.
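
With two IMUs per joint, the joint angle falls out of the relative rotation between the segment frames. A minimal sketch, assuming each IMU's fusion filter already provides an orientation quaternion:

    # Joint angle from two segment orientations; quaternions are (w, x, y, z).
    import math

    def quat_conj(q):
        w, x, y, z = q
        return (w, -x, -y, -z)

    def quat_mul(a, b):
        aw, ax, ay, az = a
        bw, bx, by, bz = b
        return (aw*bw - ax*bx - ay*by - az*bz,
                aw*bx + ax*bw + ay*bz - az*by,
                aw*by - ax*bz + ay*bw + az*bx,
                aw*bz + ax*by - ay*bx + az*bw)

    def joint_angle(q_upper, q_lower):
        """Total rotation (radians) between the two segment frames."""
        w = quat_mul(quat_conj(q_upper), q_lower)[0]
        return 2.0 * math.acos(max(-1.0, min(1.0, abs(w))))

    # Example: forearm rotated 90 degrees about x relative to the upper arm.
    q_upper = (1.0, 0.0, 0.0, 0.0)
    q_lower = (math.cos(math.pi / 4), math.sin(math.pi / 4), 0.0, 0.0)
    print(math.degrees(joint_angle(q_upper, q_lower)))   # -> 90.0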

Layer 2 adds haptic feedback. This turns it from a self-sensing suit into a world-sensing suit, allowing remote operation of a machine. I see a combination of vibration motors and muscle wire.
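
For the vibration-motor half, a minimal sketch on the Raspberry Pi named above: drive the motor (through a transistor) with PWM and map a 0..1 feedback value onto the duty cycle. The pin choice and carrier frequency are assumptions.

    import RPi.GPIO as GPIO

    MOTOR_PIN = 18   # hypothetical; any GPIO driving a motor transistor

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(MOTOR_PIN, GPIO.OUT)
    pwm = GPIO.PWM(MOTOR_PIN, 200)   # 200 Hz carrier
    pwm.start(0)

    def set_feedback(intensity):
        """Map a 0..1 feedback value (e.g. remote grip force) to duty cycle."""
        pwm.ChangeDutyCycle(max(0.0, min(1.0, intensity)) * 100.0)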

Layer 3 places that remote machine over the user. The only goal would be 10 minutes of untethered runtime, with the mass and motion of the superstructure taken off the operator. Basically, a giant robot costume. EMG interfacing lets the system keep up with the operator's movements.
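
The usual first step for EMG control is to rectify and low-pass the raw signal into an activation envelope, then treat that envelope as the effort the superstructure should mirror. A sketch, with the ADC source and scaling constants as pure assumptions:

    def emg_envelope(read_sample, alpha=0.05, baseline=512, full_scale=200.0):
        """Yield a smoothed 0..1 muscle-activation estimate per sample."""
        envelope = 0.0
        while True:
            raw = read_sample()                    # e.g. a 10-bit ADC sample
            rectified = abs(raw - baseline)        # remove DC, full-wave rectify
            envelope += alpha * (rectified - envelope)   # exponential low-pass
            yield min(1.0, envelope / full_scale)  # full_scale counts = max effort

Each joint's motor controller could then scale its assist by the corresponding envelope, so the structure moves when, and only as hard as, the operator does.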

Layer 4 adds sensors on top of the machine. This allows enclosing the operator in the machine while feeding sensor data back to the operator and enabling semi-autonomous actions (say, avoiding non-visible obstacles via a laser scanner in the shin area).
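
A shin-level scanner reduces to a simple keep-out check: flag the nearest return inside a forward arc and hand it to the haptics and gait logic. A sketch, assuming the scan arrives as (angle in degrees, range in meters) pairs:

    def obstacle_ahead(scan, arc_deg=30.0, keep_out_m=0.5):
        """Return (angle, range) of the nearest hit in the forward arc."""
        hits = [(a, r) for a, r in scan
                if abs(a) <= arc_deg / 2 and 0.0 < r < keep_out_m]
        return min(hits, key=lambda h: h[1]) if hits else None

    # Example: a return 0.3 m away, 5 degrees right of straight ahead.
    print(obstacle_ahead([(-10.0, 2.1), (5.0, 0.3), (40.0, 0.2)]))  # (5.0, 0.3)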
