Tuesday, April 22, 2014

"Sonar"


Sonar

This “Sonar” functionality allows users to test the distance to the nearest object in a given direction, helping them judge how far away important objects are. It mimics real-world echolocation, in which reverberation is used to gauge the proximity of objects. In the virtual world, we can convey this information even when the object is very far away.

Use case:

The user is navigating a room, attempting to follow a wall. They lose track of the wall on their right-hand side, continue navigating, and become lost. The user suspects that they are moving toward the north wall, which is where the exit is. They query around themselves, but no objects are within range, so they try to find the wall by sending a “sonar” signal forward. After a short delay, they hear a “knock” coming back from the front. From experience, they associate this “knock” with an interior wall, and they estimate their distance from it based on the length of the delay. Comfortable that they have made progress toward the north wall, the user continues until they reach it, takes a right turn, and follows the wall to the exit.

Calculating Delay

Calculating the sound delay is a simple matter of determining the distance between the user and the object and accounting for the speed of sound. Where a and b are 3D points representing the user's and the object's positions:


        double dist = Math.sqrt(Math.pow(a.x - b.x, 2)
                              + Math.pow(a.y - b.y, 2)
                              + Math.pow(a.z - b.z, 2));

        // Double the one-way travel time: the signal goes out and the echo comes back.
        double delayAsSeconds = dist / speedOfSound * 2;
        double delayAsMilliseconds = delayAsSeconds * 1000;


The play thread is then queued with the appropriate delay and sound. The sound played is always the one associated with the relevant object.
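For illustration, the queueing could be as simple as a single-threaded scheduler. This is only a sketch of the idea, not the project's actual play-thread code; SonarQueue and queueKnock are placeholder names, and the Runnable stands in for whatever plays the object's sound:

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Sketch only: placeholder for the real play thread.
    class SonarQueue {
        private final ScheduledExecutorService playThread =
                Executors.newSingleThreadScheduledExecutor();

        // Schedule the object's "knock" to play after the computed round-trip delay.
        void queueKnock(Runnable playKnock, double delayAsMilliseconds) {
            playThread.schedule(playKnock, (long) delayAsMilliseconds, TimeUnit.MILLISECONDS);
        }
    }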

Tuesday, April 1, 2014

UI for Android: Touchscreen




I have created a gesture listener class to handle these gestures; the basic wiring is sketched below.
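For context, a listener class like this is typically attached through Android's GestureDetector. A minimal sketch, where MyGestureListener, context, and view are placeholder names:

    // Placeholder names: MyGestureListener is the gesture listener class described here.
    final GestureDetector detector = new GestureDetector(context, new MyGestureListener());

    view.setOnTouchListener(new View.OnTouchListener() {
        @Override
        public boolean onTouch(View v, MotionEvent event) {
            // Forward every touch event to the detector so it can recognize gestures.
            return detector.onTouchEvent(event);
        }
    });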

You have to manually set what constitutes a "swipe": how far the user has to move their finger (both a minimum and a maximum), and the minimum speed at which they move it, so a swipe isn't conflated with a "scroll":
    private int swipe_Min_Distance = 100;
    private int swipe_Max_Distance = 800;
    private int swipe_Min_Velocity = 40;
Then you catch the probable swipe as a "fling." You make sure it really is a swipe and determine its direction, applying the manually set leeway.
    @Override
    public boolean onFling(
        MotionEvent e1,
        MotionEvent e2,
        float velocityX,
        float velocityY)
    {
        final float xDistance = Math.abs(e1.getX() - e2.getX());
        final float yDistance = Math.abs(e1.getY() - e2.getY());

        // Too long to be a swipe.
        if (xDistance > this.swipe_Max_Distance || yDistance > this.swipe_Max_Distance)
            return false;

        velocityX = Math.abs(velocityX);
        velocityY = Math.abs(velocityY);
        boolean result = false;

        if (velocityX > this.swipe_Min_Velocity && xDistance > this.swipe_Min_Distance) {
            if (e1.getX() > e2.getX())  // finger moved right to left
                this.onSwipe(Globals.SWIPE_LEFT);
            else
                this.onSwipe(Globals.SWIPE_RIGHT);
            result = true;
        }
        else if (velocityY > this.swipe_Min_Velocity && yDistance > this.swipe_Min_Distance) {
            if (e1.getY() > e2.getY())  // finger moved bottom to top
                this.onSwipe(Globals.SWIPE_UP);
            else
                this.onSwipe(Globals.SWIPE_DOWN);
            result = true;
        }
        return result;
    }
You also have to differentiate between single taps, long presses, and double taps:

    @Override
    public boolean onSingleTapConfirmed(MotionEvent e)
    {
        gs.touchCell(scaleX(e.getX()), scaleY(e.getY()), false);
        return true;
    }

    @Override
    public boolean onDoubleTapEvent(MotionEvent e)
    {
        gs.startListening();
        return true;
    }
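The long-press handler follows the same pattern. A sketch, with the caveat that the meaning of the boolean flag to touchCell is my assumption:

    @Override
    public void onLongPress(MotionEvent e)
    {
        // Assumption: the final flag distinguishes a long press from a single tap.
        gs.touchCell(scaleX(e.getX()), scaleY(e.getY()), true);
    }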

Tuesday, March 11, 2014

IRB stuff

Purpose of the research study: The biggest barrier to the independence of the visually impaired is often the problem of mobility. Mobility training is difficult and intimidating.
The project uses publicly available 3D models of buildings, towns and cities to automatically generate audio-based “maps” for the blind. The visually impaired can then use these “maps” to explore environments virtually. These “maps” use 3D audio and text-to-speech to convey information to the user.  With this software, users become accustomed to the layout of the environment and memorize routes they can use in real-world navigation. While this technology is most beneficial to the visually impaired, this study is also open to sighted individuals.
These maps can offer varying levels of detail; the purpose of this study is to discover whether more detailed audio information leads to better recall of layouts and routes.


What you will do in the study: Using an Android tablet and headphones, you will explore a 3D audio environment for a limited amount of time. To encourage focus, you will be blindfolded while exploring. After a tutorial, you will be expected to explore independently. When the time limit has passed, you will be asked a series of questions about the environment you explored.


Time required: One to two hours continuously.

Saturday, March 8, 2014

Design: Functional Modules

Functional Modules

Explorer
Knows about: Player, input manager, move manager, query manager
→ Receives sanitized, platform-agnostic input from input manager
→ Makes appropriate calls to query/move manager


Movement Manager
Knows about: Player, Room
→ Collisions/Bounds
→ Movement w/ wall-following (auto-correction)
→ Turning
→ Forward movement
→ Makes calls to produce footsteps & collision noises
→ Handles exceptional movement (e.g. steps, elevators, room-to-room via doors)

QueryManager
Knows about: Player, Room
→ Makes calls for Sonar
→ Makes calls for Reverb
→ “nearby” queries (left/right/front) (also as TTS)
→ Semantic info (TTS)
→ Later: makes calls to routing

SoundManager
Knows about: OpenAL
  • OpenAL info (Listener position)
→ Sets and updates OpenAL info
→ Produces 3D sound
→ Produces “Sonar”
→ Produces Reverb
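
To make these responsibilities concrete, here is a provisional Java sketch of how the modules could be wired together. Every name here is a placeholder, not the final API:

    // Provisional sketch only: all names are placeholders.
    class Player {}
    class Room {}

    enum Input { FORWARD, TURN_LEFT, TURN_RIGHT, QUERY_NEARBY, SONAR }

    interface MoveManager {
        void moveForward(Player p, Room r);  // collisions, wall-following auto-correction
        void turn(Player p, double degrees);
    }

    interface QueryManager {
        void sonar(Player p, Room r);        // delegates to SoundManager
        void queryNearby(Player p, Room r);  // left/right/front, also via TTS
    }

    // Explorer receives sanitized, platform-agnostic input and delegates.
    class Explorer {
        private final Player player = new Player();
        private final Room room = new Room();
        private final MoveManager moves;
        private final QueryManager queries;

        Explorer(MoveManager moves, QueryManager queries) {
            this.moves = moves;
            this.queries = queries;
        }

        void onInput(Input in) {
            switch (in) {
                case FORWARD:      moves.moveForward(player, room); break;
                case TURN_LEFT:    moves.turn(player, -90); break;
                case TURN_RIGHT:   moves.turn(player, 90); break;
                case QUERY_NEARBY: queries.queryNearby(player, room); break;
                case SONAR:        queries.sonar(player, room); break;
            }
        }
    }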


Design: Object Classes

Object Classes

Player (Global Singleton)
  • Room
  • Position
  • Orientation (e.g., 90 degrees, relative to North)
  • Preferences (height, stride length)

Room
  • Entrances/Exits (and associated “starting positions”)
  • Polygon representing room bounds
  • Walls
  • Floor (material)
  • Dimensions
  • Stray Objects (furniture, etc.)
  • Room name
  • Purpose (semantic info)
  • Parent building
→ Given collision coordinates, returns object collided w/
→ Given step/query coordinates, returns object queried

Building is a Room
  • Entrance/exits
  • Exterior walls
  • Rooms
  • Name
  • Type/purpose (semantic info)
  • Environment (exterior)
  • Lat/Long

Wall: mimics the tree structure of the GML
  • Windows
  • Doors
  • Angle (relative to North)
  • Adjacent rooms (N/S or E/W)
  • Material (wood, brick, etc.)
  • Color
→ Produce angle

Door
  • Material
  • Type (sliding door, automatic door, etc.)
  • Adjacent rooms

StrayObject (furniture, computer terminals, etc.)
  • Description
  • Position
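
As a quick sketch of the Player singleton described above (field types are provisional, and Room is stubbed here only for self-containment):

    // Provisional sketch; types and names are placeholders.
    class Room {}  // stub: see the Room design above

    final class Player {
        private static final Player INSTANCE = new Player();

        Room room;           // the room the player currently occupies
        double x, y, z;      // position within the room
        double orientation;  // degrees, relative to North (e.g., 90)
        double height;       // preference
        double strideLength; // preference

        private Player() {}

        static Player getInstance() { return INSTANCE; }
    }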


Tuesday, February 25, 2014

Custom Engine

I will be rolling my own custom engine.

Needs
* Search, routing
* Free-form navigation
* 3D audio output
* 2D mapping output
* GPS recognition

Implementation
* Java
* CityGML4J
* Android Platform
* Possibly C++
* MongoDB

Challenges
* Good 3D audio with effects (requires C++)
* Fast routing
* Variety of searches and ways to handle data
* Reasonably handling huge GML files
* Splitting up landscape
* Interface: supplying info to user (semantic and physical) purely through audio.
* Interface: Touch-screen that is blind friendly
* (These interface problems are partly addressed by my prototype)

User Study Continued

Optimal User Study
* A large number of visually impaired participants
* The app automatically converts a model based on a real-world space
* One group uses the app to practice the new space; a control group does not
* Both groups navigate the new space: previously explained routes & surprise routes
* The study intends to show that automated conversion produces a map that assists in real-world navigation.

Compromises:
* Sighted individuals
* Questionnaire instead of real-world navigation
* Model is based on hypothetical building