The domain of event-based vision, which replicates the human eye's ability to register changes within a scene, offers significant advantages in power efficiency, latency, and dynamic range. However, event datasets remain scarce compared with RGB vision. Several works bridge this gap by generating events from conventional videos, e.g. https://github.com/uzh-rpg/rpg_vid2e . It would be valuable to develop faster and higher-quality methods for generating events from videos, and several directions could be explored.
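To illustrate the underlying principle, a minimal per-pixel event generator can be sketched as follows. This is a simplified sketch, not the vid2e implementation: the function name, threshold value, and event format are illustrative, and real simulators such as ESIM/vid2e additionally interpolate between frames and model sensor noise.

```python
import numpy as np

def frames_to_events(frames, timestamps, threshold=0.2, eps=1e-6):
    """Emit an event at each pixel whose log intensity has changed by
    more than `threshold` since the last event fired at that pixel.
    Returns a time-ordered list of (t, x, y, polarity) tuples."""
    log_ref = np.log(frames[0].astype(np.float64) + eps)
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_f = np.log(frame.astype(np.float64) + eps)
        diff = log_f - log_ref
        for pol in (+1, -1):
            # pixels whose log-intensity change crossed the contrast threshold
            mask = pol * diff >= threshold
            ys, xs = np.nonzero(mask)
            events.extend((t, int(x), int(y), pol) for x, y in zip(xs, ys))
            log_ref[mask] = log_f[mask]  # reset reference where events fired
    events.sort(key=lambda e: e[0])  # stable sort keeps per-frame order
    return events
```

A learning-based method would replace this fixed-threshold rule with a model that predicts realistic event streams, ideally handling motion between frames rather than comparing frames pairwise.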
The primary objective of this project is to develop a more efficient and higher-quality learning-based method for video-to-events generation. Since the specific directions and techniques are chosen by the student, this project is suited as a Master's thesis.
Nikola Zubic (zubic@ifi.uzh.ch), Marco Cannici (cannici@ifi.uzh.ch)