Daniele Bonatto


LISA - Laboratories of Image, Signal processing and Acoustics, Image Research Unit

Av. F.D. Roosevelt 50, CP 165/56 - 1050 Bruxelles - Belgium

Welcome to my About page and to the fascinating world of immersive content creation! Let’s talk about view synthesis — it’s like having a magic wand for photos, letting you generate new views of a scene from angles that were never captured. This technology is a game-changer for virtual reality, gaming, and immersive experiences, adding depth and realism to visuals.

I’m on a mission to push the boundaries of real-time 3D computing. Armed with a master’s degree in computational intelligence software and robotics engineering from the Université Libre de Bruxelles, I completed my Ph.D. jointly at the French- and Dutch-speaking Free Universities of Brussels (Université Libre de Bruxelles and Vrije Universiteit Brussel).

My contributions have already made a significant impact in both academic circles and industrial applications, especially within the Moving Picture Experts Group (MPEG) community, where three of my software solutions are extensively used. My work has also played a crucial role in the HoviTron European project, where my view synthesizer is integrated within the CREAL headset technology. This headset uniquely addresses the eye accommodation challenge, eliminating the need for eye tracking or artificial blurring around the object of interest, and is used to pilot a robotic arm over the network, showcasing the industrial applications of my technology.

In the realm of view synthesis, I’ve achieved a milestone with the creation of a real-time view synthesis software, the Reference View Synthesizer (RVS), that rivals the quality of NeRF. This software maintains a fairly low GPU footprint — tens of megabytes of memory and only about 5% of GPU compute — while offering photorealistic results in real time, reshaping the landscape of immersive content creation.
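For the curious, here is a minimal Python/NumPy sketch of the core idea behind depth-image-based rendering, the family of techniques RVS belongs to: unproject every pixel of an input view to 3D using its depth, then reproject it into a new camera. The names and camera setup (`K`, `R`, `t`) are purely illustrative — the real RVS is a GPU pipeline that blends multiple input views and handles disocclusions far more gracefully.

```python
import numpy as np

def synthesize_view(color, depth, K, R, t):
    """Warp an input view to a new camera pose (naive DIBR sketch).

    color : (H, W, 3) input image
    depth : (H, W) per-pixel depth in the input camera frame
    K     : (3, 3) intrinsics, shared by both cameras for simplicity
    R, t  : rotation and translation from input to target camera
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T

    # Unproject pixels to 3D points in the input camera frame.
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)

    # Move to the target camera frame and project back to pixels.
    proj = K @ (R @ pts + t.reshape(3, 1))
    z = proj[2]
    uu = np.round(proj[0] / z).astype(int)
    vv = np.round(proj[1] / z).astype(int)

    # Z-buffered scatter: the nearest point wins when several land
    # on the same target pixel.
    flat = color.reshape(-1, 3)
    out = np.zeros_like(color)
    zbuf = np.full((H, W), np.inf)
    ok = (z > 0) & (uu >= 0) & (uu < W) & (vv >= 0) & (vv < H)
    for i in np.flatnonzero(ok):
        if z[i] < zbuf[vv[i], uu[i]]:
            zbuf[vv[i], uu[i]] = z[i]
            out[vv[i], uu[i]] = flat[i]
    return out  # disocclusion holes are simply left black here
```

In production this scatter loop runs as a GPU rasterization pass, and holes are filled by blending the warps of several input views.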

Another notable accomplishment is the development of a high-precision depth estimation software, the Reference Depth Estimation software (RDE), addressing both the demands of high-quality rendering and the computational challenge of generating detailed depth maps efficiently. High-quality depth maps are a prerequisite for high-quality view synthesis.
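To give a feel for the problem RDE solves — though this is a classical baseline, not RDE’s actual algorithm — here is semi-global block matching on a rectified stereo pair with OpenCV. File names and parameter values are placeholders:

```python
import cv2

# Load a rectified stereo pair (placeholder file names).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching: a classical depth-estimation baseline.
sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,   # must be a multiple of 16
    blockSize=5,
    P1=8 * 5 * 5,         # smoothness penalties for small and large
    P2=32 * 5 * 5,        # disparity changes between neighbours
)
disparity = sgbm.compute(left, right).astype("float32") / 16.0

# Depth is inversely proportional to disparity:
# depth = focal_length * baseline / disparity (where disparity > 0).
```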

Let’s not forget my exploration of plenoptic cameras, especially the plenoptic 2.0 camera. I’ve developed innovative solutions for extracting sub-aperture views and for pattern-free estimation of intrinsic parameters, providing users with diverse and dynamic perspectives that redefine their interaction with digital content. These cameras let us play with micro-baselines, perfectly paired with RVS’s ability to synthesize novel views at millimetric to medium baselines.
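As a toy illustration of sub-aperture extraction, the NumPy sketch below assumes an idealized lenslet image with a square, axis-aligned microlens grid of p×p pixels per lens. A real plenoptic 2.0 sensor requires calibrated microlens centers and patch-based rendering — which is exactly where pattern-free intrinsics estimation comes in:

```python
import numpy as np

def subaperture_views(lenslet, p):
    """Extract all p*p sub-aperture views from an idealized lenslet image.

    Assumes a square microlens grid with exactly p*p pixels per microlens
    and axis-aligned lenses; real (plenoptic 2.0) sensors need calibrated
    microlens centers and patch-based rendering instead.
    """
    H, W, C = lenslet.shape
    views = lenslet.reshape(H // p, p, W // p, p, C)
    # views[i, u, j, v] = pixel (u, v) under microlens (i, j);
    # fixing (u, v) across all lenses yields one sub-aperture view.
    return views.transpose(1, 3, 0, 2, 4)  # (p, p, H//p, W//p, C)
```

Each of the p×p resulting views sees the scene from a slightly shifted optical center — the micro-baselines mentioned above.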

Other explorations include a real-time point cloud viewer employing splatting algorithms (sketched below), optimization with Gaussian processes, deep learning for view synthesis, robotic benches for acquiring sub-millimetre-precision images, and web technologies such as React and React Native. My technical stack spans familiar platforms such as Linux, Windows, Docker, PostgreSQL, and more.
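For a taste of the splatting idea, here is a deliberately naive Python sketch that paints each 3D point as a depth-scaled square splat, far points first so nearer ones overwrite them. A real viewer does this on the GPU with proper disk kernels and blending; all names and the radius parameter are illustrative:

```python
import numpy as np

def splat(points, colors, K, H, W, r0=3.0):
    """Paint each 3D point as a depth-scaled square splat (toy version).

    points : (N, 3) points in the camera frame, z > 0
    colors : (N, 3) per-point colors
    K      : (3, 3) camera intrinsics
    r0     : splat radius at unit depth, in pixels (illustrative value)
    """
    img = np.zeros((H, W, 3))
    # Painter's algorithm: draw far points first, nearer splats overwrite.
    for i in np.argsort(-points[:, 2]):
        x, y, z = points[i]
        u = int(round(K[0, 0] * x / z + K[0, 2]))
        v = int(round(K[1, 1] * y / z + K[1, 2]))
        if not (0 <= u < W and 0 <= v < H):
            continue  # point projects outside the image
        r = max(1, int(round(r0 / z)))  # splats shrink with distance
        img[max(0, v - r):v + r + 1, max(0, u - r):u + r + 1] = colors[i]
    return img
```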

As I keep evolving and refining these technological wonders, my journey is leaving an indelible mark on the immersive content creation scene. Stay tuned for more exciting developments as I explore new horizons and continue to advance the cutting edge of 3D computing. Cheers to the future of immersive technologies!