The effect of these “3D glasses” (Anaglyph 3D) is actually quite simple. By filtering one eye down to the red channel and the other down to the green and blue channels, you can fake stereoscopic rendering, similar to how a VR headset works.
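In code, the mixing really is that simple: keep the red channel from the left-eye image and the green and blue channels from the right-eye image. A minimal per-pixel sketch (the pixel struct and names are just for illustration, not code from our project):

#[derive(Clone, Copy)]
struct Rgb { r: u8, g: u8, b: u8 }

// The left eye contributes red, the right eye contributes green and blue.
fn anaglyph(left: Rgb, right: Rgb) -> Rgb {
    Rgb { r: left.r, g: right.g, b: right.b }
}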
Getting this effect into some Unity apps was easy, too – the Anaglyph3D package contains a very easy-to-use drop-in shader for URP projects, and in some cases (mainly our big HDRP-based stage display) we instead just rendered two cameras in the scene on top of each other. Which gave us an idea… couldn’t we just do that IRL as well?
Yes – let’s have more installations! Why not have a kind of mirror that reflects what you see… but in 3D! (The cool 3D, not the regular physical kind that boring real mirrors do.) Our idea was to mount two identical webcams quite close to each other and overlay their video streams – exactly like we did in the Unity scene. But that turned out to be more difficult than expected.
Our first idea was to have an old Raspberry Pi run this whole thing and use FFmpeg to read the two streams and overlay them. It actually has a built-in filter for this – a quick

ffmpeg -pixel_format yuyv422 -i /dev/video0 \
       -pixel_format yuyv422 -i /dev/video2 \
       -filter_complex "hstack,stereo3d=sbsl:arcg,format=yuv420p,framerate=30" \
       -f xv display

should do the trick. As it turns out, however, the latency is quite bad, even after switching to much more powerful hardware. Not only is the video several seconds delayed – the two streams are delayed by different offsets! Wearing 3D glasses, it feels like your brain is being put through the wringer. Not exactly the effect we wanted.
Okay, second approach. A few years ago, some of us had been looking into GStreamer and all the things you can do with it. Since its focus is real-time streaming, it seemed like a better fit. If you thought the ffmpeg command above was a bit long, get ready for this:
gst-launch-1.0 \
v4l2src device=/dev/video0 name=redd \
v4l2src device=/dev/video2 name=cyan \
glstereomix name=mix \
redd. ! image/jpeg,width=1280,height=720,framerate=30/1,format=MJPG ! jpegdec ! videoconvert ! video/x-raw,width=1280,height=720 ! glupload ! mix. \
cyan. ! image/jpeg,width=1280,height=720,framerate=30/1,format=MJPG ! jpegdec ! videoconvert ! video/x-raw,width=1280,height=720 ! glupload ! mix. \
mix. ! video/x-raw'(memory:GLMemory)',multiview-mode=side-by-side ! \
queue ! glviewconvert downmix-mode=1 ! glimagesink
This does pretty much the same thing. Except it’s super fast! And also the wrong colors! No matter which downmix-mode we chose, the output was always magenta and yellow. We tried using a custom fragment shader with glshader, but sadly that only gave us version issues (regardless of the version specified in the shader file).
Okay, so we have two approaches that both failed, and the party is fast approaching – what should we do? We could keep debugging, or… It is at this point in any project that we usually ask: how difficult is it really? Can’t we just do it ourselves?
The way you access webcams on Linux is usually through the Video4Linux driver. It’s a pretty neat abstraction, and there is a Rust library (of course called v4l) that gives pretty direct access to everything. Best of all, one of that library’s examples renders a video stream to an OpenGL context. That’s 90% of what we need!
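To give an idea of how little is left to do, a minimal capture loop with the v4l crate looks roughly like this – written from memory and modelled on the crate’s capture example, so treat the exact calls as assumptions rather than gospel:

use v4l::buffer::Type;
use v4l::io::mmap::Stream;
use v4l::io::traits::CaptureStream;
use v4l::video::Capture;
use v4l::{Device, FourCC};

fn main() -> std::io::Result<()> {
    let dev = Device::new(0)?; // /dev/video0

    // Ask the driver for 720p MJPG frames.
    let mut fmt = dev.format()?;
    fmt.width = 1280;
    fmt.height = 720;
    fmt.fourcc = FourCC::new(b"MJPG");
    dev.set_format(&fmt)?;

    // Memory-mapped capture stream with a handful of buffers.
    let mut stream = Stream::with_buffers(&dev, Type::VideoCapture, 4)?;
    loop {
        let (frame, _meta) = stream.next()?; // raw JPEG bytes from the camera
        // decode and upload as an OpenGL texture here
        println!("got a frame of {} bytes", frame.len());
    }
}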
Let’s just add one more thing – opening two webcams. At that point, we again saw a similar delay between the two cameras, as if they were just buffering more and more. That’s because they are: each reader thread just pushes image frames into a channel. We don’t want that – so let’s just write to a buffer behind a mutex and read from that. A little color masking and mirroring when drawing the two textures, and voilà! We have a super-low-latency, high-resolution 3D mirror.
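The pattern is simply “latest frame wins”: each capture thread overwrites a shared slot, and the render loop reads whatever is in that slot at draw time, so nothing can queue up and drift behind. A rough sketch of the idea (illustrative names, not the actual code from the repo):

use std::sync::{Arc, Mutex};
use std::thread;

type FrameSlot = Arc<Mutex<Vec<u8>>>;

fn main() {
    let slot: FrameSlot = Arc::new(Mutex::new(Vec::new()));

    // Capture thread: replace the buffer contents, never enqueue.
    let writer = Arc::clone(&slot);
    thread::spawn(move || loop {
        let frame = grab_frame(); // stand-in for the real capture call
        *writer.lock().unwrap() = frame;
    });

    // Render loop: copy out the newest frame and draw it.
    loop {
        let latest = slot.lock().unwrap().clone();
        if !latest.is_empty() {
            // decode, apply the red (or cyan) channel mask, mirror, draw
        }
    }
}

fn grab_frame() -> Vec<u8> {
    vec![0; 1280 * 720 * 2] // placeholder returning a dummy frame
}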
Of course, we had to install it at some strategic locations in the office. One was on a big vertical screen on the side. Another one was, in typical Techno Creatives fashion, mounted to an industrial robot arm above the main entrance door with the cameras pointed to the bar area.
If you’re curious about the code: We published it here on GitHub. Let us know if you use it for your own party or installation! Or if you have any idea what else to put on that robot arm – please reach out.
PS: If you didn't attend the party but want one of those stylish Techno Creatives 3D glasses, come by the office and talk to us!