Friday, February 8, 2013

Ideas and Some Progress

Currently, I am very far behind on implementation for both projects, largely because I have not had the devices to work with. Fortunately, for the next few days I do have a Leap Motion (borrowed from a friend as a temporary fix), so I should be able to do some basic testing.

Some ideas I have for moving forward with the Leap Motion are:
  • Using multiple Leaps - the idea is to have a Leap-Box. Essentially, a user puts their hand into this box, and multiple Leaps positioned around it in a circle are used to detect individual finger motions. The hope is that with the extra spatial information, isolating finger motions will be easier, and so will identifying the bends of each finger (a rough sketch of fusing readings from several sensors follows this list).
  • Splitting an angle of motion across the finger joints - as an early prototype (or as a method for dealing with unknowns like finger segment lengths and rates of angle change), it would be useful to assume that all finger joints bend the same amount as the finger is bent. Obviously this is not the case: different finger segment lengths can vary the angles, and bending a finger is not a simple motion. Each segment is potentially independently bendable, and they do not normally bend all at once; the base of the finger (where the finger connects to the knuckle) bends first, followed by the middle segment, then the tip segment. I will not get into the thumb at the moment because its range of movement is much larger, posing more problems that will be dealt with later (a sketch of this equal-angle assumption also appears after the list).
  • Building a few prototypes or examples would be very helpful. 
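
As a first pass at the Leap-Box idea, here is a minimal sketch of what the fusion step might look like. It does not touch the actual Leap SDK: the sensor poses, the sample readings, and the merge_radius_mm threshold are all assumptions made up for illustration. The point is only that each sensor's readings get transformed into a shared box frame, and nearby detections are merged into a single fingertip estimate.

```python
import numpy as np

# Hypothetical calibration data: for each Leap, a rotation R and translation t
# (in mm) that map points from that sensor's frame into a shared "box" frame.
# In practice these would come from a calibration step, not be hard-coded.
SENSOR_POSES = {
    "leap_front": (np.eye(3), np.array([0.0, 0.0, 0.0])),
    "leap_side": (np.array([[0.0, 0.0, 1.0],
                            [0.0, 1.0, 0.0],
                            [-1.0, 0.0, 0.0]]),   # 90-degree turn about the vertical axis
                  np.array([-150.0, 0.0, 0.0])),  # sensor mounted 150 mm to the side
}

def to_box_frame(point, sensor_id):
    """Transform one detected point from a sensor's local frame into the box frame."""
    R, t = SENSOR_POSES[sensor_id]
    return R @ np.asarray(point, dtype=float) + t

def fuse_detections(points_by_sensor, merge_radius_mm=15.0):
    """Merge detections from all sensors: points that land within
    merge_radius_mm of each other in the box frame are averaged into one."""
    box_points = [to_box_frame(p, s) for s, pts in points_by_sensor.items() for p in pts]
    fused, used = [], [False] * len(box_points)
    for i, p in enumerate(box_points):
        if used[i]:
            continue
        cluster, used[i] = [p], True
        for j in range(i + 1, len(box_points)):
            if not used[j] and np.linalg.norm(box_points[j] - p) < merge_radius_mm:
                cluster.append(box_points[j])
                used[j] = True
        fused.append(np.mean(cluster, axis=0))
    return fused

# Example: both sensors report (roughly) the same fingertip, which gets merged.
readings = {
    "leap_front": [[10.0, 200.0, 30.0]],
    "leap_side": [[-28.0, 198.0, 161.0]],
}
print(fuse_detections(readings))
```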
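
Similarly, the equal-angle assumption from the second bullet can be written down in a few lines. This is a sketch only: it models a single finger as a flat 2D side view, and the segment lengths are placeholder values, but it shows how one measured "total bend" could be split evenly across the three joints to get approximate joint positions.

```python
import math

# Placeholder finger segment lengths in mm (proximal, middle, distal);
# real values vary per finger and per person.
SEGMENT_LENGTHS_MM = (40.0, 25.0, 20.0)

def finger_joint_positions(total_bend_deg, segments=SEGMENT_LENGTHS_MM):
    """2D forward kinematics of one finger, under the simplifying assumption
    that the total bend is split equally across the three joints."""
    per_joint = math.radians(total_bend_deg) / len(segments)
    x, y, heading = 0.0, 0.0, 0.0          # start at the knuckle, pointing along +x
    points = [(x, y)]
    for length in segments:
        heading += per_joint               # each joint contributes the same bend
        x += length * math.cos(heading)
        y -= length * math.sin(heading)    # bending curls the finger downward
        points.append((x, y))
    return points

# Example: a half-curled finger (90 degrees total -> 30 degrees per joint).
for px, py in finger_joint_positions(90.0):
    print(f"({px:6.1f}, {py:6.1f})")
```

A later version could replace the equal split with per-joint weights once real data shows how the base, middle, and tip joints actually share the bend.
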
This weekend, I am going to get started on a new proposal for the Leap project. It may be wishful thinking, but hopefully something will come of it.

For my Nuisical project, I was able to get in touch with David Yang, a former Digital Media Design student who also did his senior design on NUIs. He used the Kinect as well, but his implementation dealt with the physical manipulation of shapes in 3D space. The chat between the two of us is pretty long, but the takeaway points are:
  • Break down movements into small actions - by using velocity, relative change in position, etc., it is possible to identify specific movements and store them as 'recognizable gestures'.
  • Use hip position as a standard marker for gesture detection - the Kinect is very good at identifying the location of the hips in space, and David suggested using this as the basis for comparing my gestures/movements.
  • Establish a database of gestures - put simply, we need a database filled with previously recorded gestures to compare incoming movements against (a sketch combining these three points follows this list).
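
To tie these three points together, here is a minimal sketch of how they might fit in code. It does not use the actual Kinect SDK, and the skeleton samples, the feature choice (hip-relative position plus frame-to-frame velocity), and the naive frame-by-frame distance used for matching are all my own assumptions for illustration, not David's actual approach.

```python
import numpy as np

def hip_relative(hand_positions, hip_positions):
    """Express each hand sample relative to the hip at the same frame,
    so the gesture does not depend on where the user is standing."""
    return np.asarray(hand_positions, dtype=float) - np.asarray(hip_positions, dtype=float)

def features(relative_positions):
    """Per-frame features: hip-relative position plus frame-to-frame velocity."""
    rel = np.asarray(relative_positions)
    vel = np.diff(rel, axis=0, prepend=rel[:1])  # first frame gets zero velocity
    return np.hstack([rel, vel])

class GestureDatabase:
    """A tiny stand-in for a 'database of gestures': named feature sequences
    that incoming movement is compared against."""
    def __init__(self):
        self.templates = {}

    def add(self, name, hand_positions, hip_positions):
        self.templates[name] = features(hip_relative(hand_positions, hip_positions))

    def recognize(self, hand_positions, hip_positions):
        """Return the stored gesture whose features are closest to the query."""
        query = features(hip_relative(hand_positions, hip_positions))
        best_name, best_score = None, float("inf")
        for name, template in self.templates.items():
            n = min(len(query), len(template))
            score = np.mean(np.linalg.norm(query[:n] - template[:n], axis=1))
            if score < best_score:
                best_name, best_score = name, score
        return best_name, best_score

# Example with made-up skeleton samples (metres, Kinect-style coordinates).
db = GestureDatabase()
hips = [[0.0, 0.9, 2.0]] * 4
wave = [[0.3, 1.4, 1.9], [0.4, 1.5, 1.9], [0.3, 1.6, 1.9], [0.4, 1.5, 1.9]]
db.add("wave", wave, hips)
print(db.recognize([[0.31, 1.42, 1.9], [0.39, 1.52, 1.9],
                    [0.29, 1.58, 1.9], [0.41, 1.49, 1.9]], hips))
```

A real version would also need to handle gestures performed at different speeds, for example with dynamic time warping instead of a straight frame-by-frame comparison.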
