That's a pity, because at its fundamental level, 'digital' is anything but difficult.
So how does digital work? It's simply a different way of representing some audio or video signal with numbers. Take an audio signal for example.
In the air, sound is made by pressure waves propagating from the musical instrument, or whatever, to your ears. Or to a microphone. In recording, the pressure wave moves a microphone's diaphragm in fairly exact proportion to the pressure of the wave. That, through various mechanisms depending on the type of microphone, produces an electrical signal. Once again, this is in quite exact proportion to the original pressure wave.
Analogue sound is simple. The essence of it is that the electrical signal is preserved in as close to its original form as possible, no matter what medium it is carried on between the recording studio and your loudspeakers. Well, kind of. 'Close ... as possible' is a phrase with a wide range of meanings. A body dangling from the scaffold is as close as possible to being alive for one who has just been hanged until dead.
Every transfer of the electrical representation of the original air pressure wave from one form to another results in a slight difference being introduced. Some are more slight than others. In general, when the signal is amplified in your home theatre receiver, the difference is very slight indeed. That's what specifications like 0.05% total harmonic distortion mean. But when the electrical signal is transferred to grooves in a vinyl record (to, ahem, cut a long story short) the difference is rather less slight -- say, 1% THD. It's somewhere in between for high quality magnetic tape, and rather higher when the vinyl record is played back.
To understand, and accept, digital signals requires us to accept that analogue signal transfer is not perfect. What digital allows is for us to embrace those imperfections, draw a line in the sand on them and say, 'This far, and no further.'
You see, digital signals are inherently inaccurate. Instead of trying to transfer an analogue signal from form to form, device to device, perfectly (but always doomed to fall short), digital imposes a defined level of imperfection on the signal, and then absolutely refuses to let it get worse.
It does this by digitising the electrical signal. To simplify matters, we'll work with the digital format used for CDs.
Precisely 44,100 times every second the level of the analogue signal is measured and recorded. But not exactly. That is because a 16 bit number is used to record the level. With 16 bits you have 65,536 levels to work with (try it for yourself: 2 raised to the 16th power). So clearly there is inaccuracy. We are not even attempting to measure the level in between each of those 44,100 'samples' per second. And even the samples we do take generally can't be measured precisely. The actual level of the signal might be 12,145.458. In that case, we just round the number off to 12,145. And months or years later when you're playing back the CD, the signal at that point will come out at an analogue equivalent of 12,145, rather than 12,145.458. There you go: the signal is distorted even on the CD!
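If you like to see such things spelled out, here is a minimal Python sketch of that sampling-and-rounding step. The 44,100 figure, the 65,536 levels and the 12,145.458 example come from the description above; the function name and the unsigned 0-to-65,535 scale are just assumptions for the illustration, not the actual arithmetic inside a CD player.

```python
# Illustrative sketch of 16 bit quantisation (not real CD player code).
SAMPLE_RATE = 44_100          # samples taken per second, as on a CD
LEVELS = 2 ** 16              # 65,536 possible levels in a 16 bit number

def quantise(level: float) -> int:
    """Round an analogue level to the nearest of the 65,536 steps."""
    clamped = max(0, min(LEVELS - 1, level))   # keep it on the 16 bit scale
    return round(clamped)                      # the fractional part is discarded

print(quantise(12_145.458))   # -> 12145: the .458 is lost at the moment of recording
```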
Even so, the great thing about digital is that the value of 12,145 will never vary. It is stored as a 16 bit number, made up of 1s and 0s. Each 1 and each 0 can only be that value. If something slips a little in the circuitry and the 1s are coming out at 0.9 and the 0s at 0.1, it doesn't matter. They will still be read as 1s and 0s. Your 12,145 will remain 12,145 until the CD disintegrates with age.
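A toy sketch of why that slippage doesn't matter: as long as the playback circuitry treats anything near 1 as a 1 and anything near 0 as a 0, the stored number survives unchanged. The halfway threshold and the noisy values below are illustrative assumptions, not a description of real CD circuitry.

```python
# Illustrative only: degraded 'voltages' read back from a worn medium.
noisy_bits = [0.9, 0.1, 0.85, 0.05, 0.92]   # originally written as 1, 0, 1, 0, 1

# Anything above an assumed halfway threshold is read as 1, otherwise as 0.
recovered = [1 if v > 0.5 else 0 for v in noisy_bits]

print(recovered)   # -> [1, 0, 1, 0, 1]: the original bits come back exactly
```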
So the virtue of digital is not that it is distortion free, but that a pre-chosen level of distortion will never, ever, get worse.
Don't worry too much about that distortion anyway. Referenced to the full 16 bit scale, the sample error is only 0.0015%, which is better than the best analogue system.
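For the curious, that 0.0015% is simply one part in 65,536 expressed as a percentage. The quick calculation below is only there to show where the figure comes from.

```python
# One 16 bit step as a fraction of the full scale, expressed as a percentage.
step_as_percentage = 100 / (2 ** 16)
print(f"{step_as_percentage:.4f}%")   # -> 0.0015%
```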