A sketch-based interface for photo pop-up
We present sketch-based tools for single-view modeling which allow for quick 3D mark-up of a photograph. With our interface, detailed 3D models can be produced quickly and easily. After establishing the background geometry, foreground objects can be cut out using our novel sketch-based segmentation tools. These tools make use of the stroke speed and length to help determine the user's intentions. Depth detail is added to the scene by drawing occlusion edges. Such edges play an important part in human scene understanding, and thus provide an intuitive form of input to the modeling system. Initial results and evaluation show that our methods produce good 3D results in a short amount of time and with little user effort, demonstrating the usefulness of an intelligent sketching interface for this application domain.
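The abstract's idea of inferring intent from stroke speed and length can be sketched as a simple heuristic: slow, short strokes suggest careful boundary tracing, while fast, long strokes suggest rough region marking. The function below is an illustrative guess at such a classifier, not the authors' actual rule; the threshold values and the function name are hypothetical.

```python
import math

def classify_stroke(points, timestamps):
    """Illustrative stroke-intent heuristic (hypothetical thresholds).

    points: list of (x, y) pixel positions along the stroke.
    timestamps: matching list of times in seconds.
    Returns "precise-boundary" for slow, short strokes and
    "rough-region" for fast or long ones.
    """
    # Polyline length in pixels.
    length = sum(
        math.dist(points[i], points[i + 1]) for i in range(len(points) - 1)
    )
    duration = timestamps[-1] - timestamps[0]
    speed = length / duration if duration > 0 else float("inf")
    # Thresholds (px/s and px) are purely illustrative.
    if speed < 100.0 and length < 200.0:
        return "precise-boundary"
    return "rough-region"
```

A real system would likely combine these cues with image evidence (edges, color models) rather than use the stroke alone.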
Motion parallax for 360° RGBD video
We present a method for adding parallax and real-time playback of 360° videos in Virtual Reality headsets. In current video players, the playback does not respond to translational head movement, which reduces the feeling of immersion, and causes motion sickness for some viewers. Given a 360° video and its corresponding depth (provided by current stereo 360° stitching algorithms), a naive image-based rendering approach would use the depth to generate a 3D mesh around the viewer, then translate it appropriately as the viewer moves their head. However, this approach breaks at depth discontinuities, showing visible distortions, whereas cutting the mesh at such discontinuities leads to ragged silhouettes and holes at disocclusions. We address these issues by improving the given initial depth map to yield cleaner, more natural silhouettes. We rely on a three-layer scene representation, made up of a foreground layer and two static background layers, to handle disocclusions by propagating information from multiple frames for the first background layer, and then inpainting for the second one. Our system works with input from many of today's most popular 360° stereo capture devices (e.g., Yi Halo or GoPro Odyssey), and works well even if the original video does not provide depth information. Our user studies confirm that our method provides a more compelling viewing experience than without parallax, increasing immersion while reducing discomfort and nausea.
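The naive image-based rendering baseline the abstract describes (project the equirectangular depth map to a 3D point set around the viewer, then shift it by the head translation) can be sketched as follows. This is a minimal illustration of that baseline only, not the paper's layered method; the function and parameter names are hypothetical.

```python
import numpy as np

def sphere_points_from_depth(depth, head_offset=(0.0, 0.0, 0.0)):
    """Naive depth-to-geometry step for 360° parallax (illustrative sketch).

    depth: (H, W) array of per-pixel metric distances from an
           equirectangular panorama.
    head_offset: viewer translation in metres (hypothetical parameter).
    Returns an (H, W, 3) array of 3D points expressed relative to the
    translated head position.
    """
    h, w = depth.shape
    # Longitude/latitude of each pixel centre in the equirectangular image.
    lon = (np.arange(w) + 0.5) / w * 2.0 * np.pi - np.pi      # [-pi, pi)
    lat = np.pi / 2.0 - (np.arange(h) + 0.5) / h * np.pi      # (pi/2, -pi/2)
    lon, lat = np.meshgrid(lon, lat)
    # Unit directions on the viewing sphere, scaled by depth.
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    pts = depth[..., None] * np.stack([x, y, z], axis=-1)
    # Re-express the points relative to the translated head position.
    return pts - np.asarray(head_offset, dtype=pts.dtype)
```

In a renderer, neighbouring points would be connected into a triangle mesh; it is exactly this meshing step that breaks at depth discontinuities, which is what motivates the paper's three-layer representation.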
A novel walk-through 3D display
We present a novel walk-through 3D display based on the patented FogScreen, an "immaterial" indoor 2D projection screen, which enables high-quality projected images in free space. We extend the basic 2D FogScreen setup in three major ways. First, we use head tracking to provide correct perspective rendering for a single user. Second, we add support for multiple types of stereoscopic imagery. Third, we present the front and back views of the graphics content on the two sides of the FogScreen, so that the viewer can cross the screen to see the content from the back. The result is a wall-sized, immaterial display that creates an engaging 3D visual.
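Head-tracked perspective rendering for a fixed planar screen, as in the first extension above, is commonly done with an asymmetric (off-axis) view frustum computed from the tracked eye position. The sketch below shows that standard computation under simplifying assumptions (screen centred at the origin in the z = 0 plane, eye at z > 0); it is a generic illustration of the technique, not the authors' implementation.

```python
def off_axis_frustum(eye, screen_w, screen_h, near):
    """Asymmetric frustum bounds for a head-tracked planar screen.

    eye: (x, y, z) tracked eye position relative to the screen centre,
         with the screen in the z = 0 plane and the eye at z > 0.
    screen_w, screen_h: physical screen size in the same units as eye.
    near: near-plane distance.
    Returns (left, right, bottom, top) at the near plane, suitable for
    an OpenGL-style glFrustum call.
    """
    ex, ey, ez = eye
    # Similar triangles: project screen edges onto the near plane.
    scale = near / ez
    left = (-screen_w / 2.0 - ex) * scale
    right = (screen_w / 2.0 - ex) * scale
    bottom = (-screen_h / 2.0 - ey) * scale
    top = (screen_h / 2.0 - ey) * scale
    return left, right, bottom, top
```

As the viewer moves off-centre, the frustum skews so the image stays perspectively correct on the fixed screen; a centred eye recovers the usual symmetric frustum.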
