Multitask Human Navigation in VR with Motion Tracking

This paper is available in a repository.


Abstract

Data from human subjects in virtual reality performing some combination of collecting targets, avoiding obstacles, and following a path. The raw data has been parsed into 300 ms samples for use in machine learning algorithms. The data includes object positions in the virtual environment, human position tracking, and task instructions.

Each .mat file contains the data from a single subject. The general pRes (for "parsed results") structure has length 32, one entry for each of the 32 trials (practice trials and calibration rooms are omitted). Each trial entry has n samples, where n is the number of 300 ms slices the trial is parsed into. Within each trial:

* trialNum = number of the current trial
* taskNum = task instructions: 1 = follow path only, 2 = avoid obstacles & follow path, 3 = collect targets & follow path, 4 = targets, obstacles, & path altogether
* taskName = written form of taskNum
* frameNum = frame within the movie
* agentX = current X coordinate of the agent
* agentY = current Y coordinate of the agent (vertical height)
* agentZ = current Z coordinate of the agent
* agentAngle = the agent's current orientation (yaw in room coordinates)
* agentMoveDist = distance the subject moved in this action
* agentMoveAngle = angle in which the subject moved in this action (relative to agentAngle)
* targ = struct for targets, including target positions at each frame; once the agent encounters a target, it disappears (successful collection)
* path = struct for path segments (waypoint positions)
* obst = struct for obstacles, including obstacle positions at each frame; once the agent encounters an obstacle, it disappears

The structs for the three object types contain the following fields, where m is the number of objects of that type in the trial:

* distList = distances to each of the m objects for each of the n samples (n x m)
* angleList = angles to each of the m objects for each of the n samples (n x m), relative to the subject's orientation
* posX, posY, posZ = X, Y, and Z positions of each of the m objects in the n sampled frames

If you want to parse the data in Python, an example is available at: https://github.com/corgiTrax/Sparse-Reinforcement-Learning/blob/master/human/data/parse.py
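
For quick exploration, the sketch below shows one way to load a subject's file with SciPy and walk the pRes structure described above. The filename subject01.mat is a placeholder, and the attribute-style field access assumes loading with struct_as_record=False; the repository's parse.py remains the authoritative parsing code.

    import scipy.io

    # Load one subject's data; "subject01.mat" is a hypothetical filename.
    # squeeze_me collapses singleton MATLAB dimensions; struct_as_record=False
    # exposes struct fields as Python attributes.
    mat = scipy.io.loadmat("subject01.mat", squeeze_me=True, struct_as_record=False)
    pRes = mat["pRes"]  # length-32 array of trial structs

    for trial in pRes:
        n = len(trial.frameNum)  # number of 300 ms samples in this trial
        print(f"trial {trial.trialNum} ({trial.taskName}): {n} samples")

        # Per-sample agent state: position and yaw orientation
        x, y, z = trial.agentX, trial.agentY, trial.agentZ
        yaw = trial.agentAngle

        # The target struct is relevant for tasks 3 and 4; distList and
        # angleList are (n x m) arrays for the m targets in this trial.
        if trial.taskNum in (3, 4):
            target_dists = trial.targ.distList
            target_angles = trial.targ.angleList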