Many TVs and even some Blu-ray players feature circuits that take 2D content and turn it into 3D. There are two parts to this process. The first is an algorithm that works out which parts of the image should appear in the foreground, which in the background, and the grading between the two. The second is actually generating the two eye views.
Neither of these is a trivial undertaking, and the most remarkable thing about such circuits is that, with some scenes, they generate quite respectable results.
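To give a feel for the second step, here is a much simplified sketch of generating two eye views from a per-pixel depth estimate by shifting pixels horizontally (a crude form of depth-image-based rendering). The disparity scale, the hole filling and the function itself are my illustrative inventions, not any real TV's algorithm.

```python
def render_eye_views(row, depth, max_disparity=3):
    """row: one scanline of pixel values; depth: per-pixel depth in
    [0, 1], with 1.0 nearest. Returns (left_view, right_view).
    Near pixels are displaced further apart between the two views,
    which the viewer's brain fuses as depth."""
    n = len(row)
    left, right = [None] * n, [None] * n
    # Paint far pixels first so nearer pixels correctly occlude them.
    for x in sorted(range(n), key=lambda i: depth[i]):
        shift = round(depth[x] * max_disparity)
        if 0 <= x + shift < n:
            left[x + shift] = row[x]
        if 0 <= x - shift < n:
            right[x - shift] = row[x]
    # Fill disocclusion holes from the nearest left neighbour.
    for view in (left, right):
        for x in range(n):
            if view[x] is None:
                view[x] = view[x - 1] if x and view[x - 1] is not None else row[x]
    return left, right
```

With a flat depth map the two views come out identical (no 3D effect); give the depth map any variation and the views diverge, which is exactly the parallax the glasses then deliver to each eye.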
Consider the first job: the decision-making algorithm. A number of factors seem to be taken into consideration. These are judged by me purely by observation, and not all circuits place the same weight on the same factors. But some of the things that tend to be used to determine foreground/background placement are:
- screen placement: picture elements horizontally central and placed towards the bottom of the screen tend to be regarded as being in the foreground since that’s a first approximation of what happens in real life: the ground is low and close, the sky is high and distant;
- sharpness: picture elements that have cleaner, sharper edges are likely the objects of cinematographic interest, and are therefore more likely to be in the foreground;
- contrast: a picture element with a limited range between the light and dark on its surface tends to be pushed towards the background, since in real life distant objects are less contrasty than closer objects;
- colour: some circuits seem to bring larger areas of green forwards (since grass is green).
These are heuristics — time saving approximations — adopted by TV makers in an attempt to emulate the heuristics employed by our brains.
But heuristics are indeed mere approximations, and thus can be fooled. Which is why we as humans see optical illusions, and why 2D to 3D processing circuits frequently produce unrealistic results. For example, there’s a scene in the lovely movie Submarine in which the psychic guy’s van is in the foreground. It is in reasonably high contrast and central on the screen, so the 2D to 3D system I was using brought it to the front under the first, second and third dot points. But the van had a New Age cosmic scene painted on its side, rendered with an airbrush. This was softly rendered and low in contrast, so the processing circuit pushed it into the background (points 2 and 3 overrode point 1). The scene was left with the van looking like a stargate to a distant universe.
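The kind of heuristic weighting described above, and the van failure mode, can be sketched as a toy scoring function. The weights, the formula and the numbers for the van scene are all my illustrative guesses, not a real processor's algorithm.

```python
def depth_score(cx, cy, sharpness, contrast, width, height):
    """Return a foreground score in [0, 1] for an image region.

    cx, cy    -- centre of the region in pixels
    sharpness -- edge strength of the region, normalised to [0, 1]
    contrast  -- light/dark range in the region, normalised to [0, 1]
    Higher score = pushed towards the viewer (foreground)."""
    # Heuristic 1: horizontally central and low on screen reads as near.
    horiz_central = 1.0 - abs(cx - width / 2) / (width / 2)
    lowness = cy / height  # y grows downwards: bottom of frame = 1.0
    # Heuristics 2 and 3: sharp, contrasty regions come forward.
    score = (0.25 * horiz_central + 0.25 * lowness +
             0.25 * sharpness + 0.25 * contrast)
    return max(0.0, min(1.0, score))


# The van body: sharp, contrasty, central and low in the frame.
van_body = depth_score(cx=960, cy=900, sharpness=0.9, contrast=0.8,
                       width=1920, height=1080)
# The airbrushed mural on its side: same horizontal position, but
# soft and low in contrast, so it scores lower and gets pushed back
# behind the van it is painted on -- the stargate effect.
mural = depth_score(cx=960, cy=500, sharpness=0.2, contrast=0.2,
                    width=1920, height=1080)
```

Because the score is a weighted sum, no single heuristic can save the mural: its weak sharpness and contrast terms drag it backwards even though its screen placement says foreground.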
High-quality 2D to 3D conversion takes more than a processing chip and some heuristic rules. This video, which relates how Titanic 3D was done, is well worth watching. (The embed link didn’t work.)
With great pains. The company involved used over 400 staff for 60 weeks to do the job, tracing out objects, assigning depths to them, filling in backgrounds.
It really is too much to expect an automated process to come anywhere near that.