VIVID - Virtual Environment for Visual Deep Learning

VIVID (VIrtual environment for VIsual Deep learning) is a photo-realistic simulator that aims to facilitate deep learning for computer vision. VIVID supports four different characters: a robot (mannequin), a simple drone, the AirSim drone, and an automobile. Twelve large, diversified indoor and outdoor scenes are included. In addition, we created NPCs with simulated human actions that mimic real-world events, such as gun shooting and forest-fire rescue. VIVID is built on Unreal Engine and Microsoft AirSim. Since 2018, we have developed many interesting simulation environments for traffic and drone simulation. More information can be found on AIoT Lab's website, www.aiotlab.org.

Download Simulation Games

We have developed three drone-related games: Drone Rescue, Drone vs Tank, and Human vs Drone. The games can be downloaded via the link below:

Citations

If you like our work, please cite the papers below:

@inproceedings{lai2018vivid,
  title={Vivid: Virtual environment for visual deep learning},
  author={Lai, Kuan-Ting and Lin, Chia-Chih and Kang, Chun-Yao and Liao, Mei-Enn and Chen, Ming-Syan},
  booktitle={Proceedings of the 26th ACM international conference on Multimedia},
  pages={1356--1359},
  year={2018}
}
@inproceedings{lin2023traffic,
  title={Traffic Object Detection in Virtual Environments},
  author={Lin, Bo-Yi and Huang, Chih-Sheng and Lin, Jia-Ming and Liu, Pin-Hsuan and Lai, Kuan-Ting},
  booktitle={2023 International Conference on Consumer Electronics-Taiwan (ICCE-Taiwan)},
  pages={245--246},
  year={2023},
  organization={IEEE}
}
@inproceedings{10227067,
  title={Augmented Reality for Real Object Detection},
  author={You, Wei-Xuan and Huang, Chih-Sheng and Hu, Kai-Ming and Liu, Tzu-Hsin and Lai, Kuan-Ting},
  booktitle={2023 International Conference on Consumer Electronics - Taiwan (ICCE-Taiwan)},
  year={2023},
  pages={803--804},
  organization={IEEE}
}

Architecture

The architecture of VIVID is shown below. Our system is powered by Unreal Engine and leverages the AirSim plugin for hardware simulation and control. Remote procedure calls (RPC) are used to communicate with external programming languages. Currently, VIVID supports four different characters: robot, simple drone, AirSim drone, and automobile. Users can select characters and scenes through the in-game menu.
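Because VIVID exposes AirSim's RPC interface, an external script can connect to the running simulator and control a character. The following is a minimal Python sketch using the standard AirSim client (`pip install airsim`); the default port, vehicle configuration, and camera name `"0"` are assumptions about a typical AirSim setup, not details fixed by VIVID itself.

```python
# Sketch: driving VIVID's AirSim drone over RPC from Python.
# Assumes the `airsim` pip package and a simulator listening on the
# default RPC port (41451); both are assumptions, not part of VIVID.
try:
    import airsim
    HAVE_AIRSIM = True
except ImportError:          # client library not installed; sketch only
    HAVE_AIRSIM = False


def fly_demo():
    """Connect, take off, grab one camera frame, then land."""
    client = airsim.MultirotorClient()   # RPC connection to the simulator
    client.confirmConnection()
    client.enableApiControl(True)        # hand control to the script
    client.armDisarm(True)
    client.takeoffAsync().join()         # block until airborne
    # Fetch one scene image from camera "0" (compressed PNG bytes).
    png = client.simGetImage("0", airsim.ImageType.Scene)
    client.landAsync().join()
    return png


if HAVE_AIRSIM:
    fly_demo()
```

The same RPC channel also serves the other characters; only the client class and the available commands differ per vehicle type.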

Documentation

The documentation and tutorials are in our GitHub Wiki.

Human Actions

Some examples of human actions in VIVID. The actions, from left to right, are: shooting, dying, jumping, walking, surrendering, moaning in pain, running, police running with a rifle, crouching, and dancing. Most action models can be downloaded from Mixamo.

About

VIVID - Virtual Environment for Visual Deep Learning is an open-source project developed by AIoT Lab (http://www.aiotlab.org).
