Fleet Command's answer, as currently written, is mostly correct. I don't want to duplicate that answer, so I recommend starting by reading that answer (in its entirety).
However, there is one point that should be elaborated on. (I figured this would take more characters than a comment, so I'm adding it as an answer.)
> To evade this detection, the man-in-the-middle must somehow procure the original secure server's digital certificate's key pairs.
That is just one approach. There is another: the "man-in-the-middle" can simply use its own certificate. Many organizations (commercial companies, public schools, and others) do exactly this.
For example, consider a firewall that provides "HTTPS-filtering" capabilities. Such a firewall can receive outbound HTTPS traffic and, instead of routing it on to the website, act as if it were the website. The firewall then establishes its own HTTPS connection to the real website, retrieves the data from that website, and passes the data on to the web browser, spying on the traffic and making whatever filtering or changes it desires along the way.
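To make that flow concrete, here is a deliberately simplified sketch in plain Python. There is no real networking or TLS here; the function names and the filtering rule are my own invention, and each function just stands in for one party in the exchange:

```python
# A toy model of the "HTTPS filtering" flow described above.
# No real networking or cryptography is involved.

def real_website(request: str) -> str:
    """Stands in for the actual website the browser wanted to reach."""
    return f"<html>content for {request} -- with ads</html>"

def filtering_firewall(request: str) -> str:
    """Stands in for the man-in-the-middle device.

    1. It accepts the browser's connection as if it were the website.
    2. It opens its *own* connection to the real website.
    3. It inspects/alters the response before relaying it.
    """
    response = real_website(request)                 # step 2: fetch on the browser's behalf
    filtered = response.replace(" -- with ads", "")  # step 3: arbitrary filtering
    return filtered                                  # relayed back to the browser

def browser(request: str) -> str:
    """The browser believes it is talking to the website directly."""
    return filtering_firewall(request)

print(browser("example.com/page"))
# The browser receives content that the firewall has silently modified.
```

The point of the sketch is the shape of the exchange: the browser never touches the real website, and everything it sees has passed through the device in the middle.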
The challenge is that unless the "man-in-the-middle" device (the "firewall" in this example) solves the key problem, the web browser will know that the data is not coming from the website it tried to reach. The way the web browser verifies that it is talking to the website it wanted is SSL technology. (Historically, although the S in HTTPS meant "Secure", it was also technically accurate to think of it as "HTTP over SSL". Nowadays, that's usually "HTTP over TLS".)
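As one illustration, the defaults of Python's standard `ssl` module roughly mirror what a browser does: a default client context insists that the server present a certificate chaining to a trusted root, and that the certificate match the hostname being contacted:

```python
import ssl

# A default client-side context behaves much like a browser: it requires
# a certificate signed by a CA in the trust store, and it requires the
# certificate to match the hostname being contacted.
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: the chain must validate
print(ctx.check_hostname)                    # True: the name must match
```

A man-in-the-middle that can't satisfy both checks produces exactly the kind of verification failure the browser warns about.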
One way to handle this challenge is to get the website's private keys, as noted by Fleet Command. However, there is another way.
When the web browser receives an SSL certificate (which contains some information, including the server's public key), whether it comes from the actual website or from the "man-in-the-middle" device, the web browser checks whether that certificate should be trusted as identifying the website. A common way to do this is for the web browser to look in its certificate store. Let's look at three different scenarios of what happens when someone tries to visit a website called Example.com, and then you'll see the other possible weakness that lets HTTPS filtering work.
- If the website says "I am Example.com and you know this because of my SSL certificate, which you can tell is endorsed by GoDaddy.com", and if your web browser's certificate store has a certificate that says to trust everything endorsed by GoDaddy.com, then the web browser is satisfied and shows no complaint.
- If you go to a restaurant and connect to a Wi-Fi device that is trying to use "man-in-the-middle" techniques to spy on you, the Wi-Fi device says "I am Example.com and you know this because of my SSL certificate, which you can tell is endorsed by Cyberoam". But your web browser doesn't contain a certificate saying it should trust Cyberoam. As a result, the web browser shows you a warning that the communication with the website is not trustworthy, because the certificate doesn't appear to be valid.
- However, then you go to work, and your work says that your computer needs to join the Active Directory domain for security reasons. You agree, and your computer now trusts the network's "domain controller" to make whatever security configuration changes are desired. Work wants to control HTTPS traffic, so the domain controller specifies that the Cyberoam certificate should be installed into your computer's certificate store. Now, not only can your work spy on your HTTPS traffic (without you really knowing about it), but so can that Wi-Fi device back at the restaurant.
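The three scenarios above can be modeled with a toy trust store. (This is heavily simplified: real browsers verify cryptographic signatures along a certificate chain, not just an issuer name, and the names here are only illustrative.)

```python
def browser_accepts(cert_issuer: str, trust_store: set[str]) -> bool:
    """Simplified model: trust the site iff the certificate's issuer is
    in the browser's certificate store. (A real browser verifies
    signatures along a chain, not just a name.)"""
    return cert_issuer in trust_store

home_store = {"GoDaddy.com"}

# Scenario 1: the real site's certificate is endorsed by a trusted CA.
print(browser_accepts("GoDaddy.com", home_store))   # True: no warning

# Scenario 2: the restaurant Wi-Fi presents a Cyberoam-endorsed certificate.
print(browser_accepts("Cyberoam", home_store))      # False: warning shown

# Scenario 3: work's domain controller pushes the Cyberoam certificate
# into the machine's certificate store...
work_store = home_store | {"Cyberoam"}

# ...and now anyone holding that certificate's private key is trusted,
# whether it's work's filter or the restaurant Wi-Fi device.
print(browser_accepts("Cyberoam", work_store))      # True: no warning
```

The weakness isn't in the cryptography; it's that the trust store itself was changed.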
In general, the attitude of many organizations is "we want to control things, and don't care so much about whether the end users have the privacy that they desire". Here is another example of this same sort of thing happening: Security.StackExchange.com question: "My college is forcing me to install their SSL certificate".
I think of how many people just click "I agree" without reading the forms or understanding the security implications. As long as most average people simply cooperate when an organization says "this is required", and as long as organizations tend to have this controlling attitude, the market seems ripe for companies to keep selling equipment designed to snoop on HTTPS traffic using private keys, and for organizations to keep those certificates installed on machines (so that the equipment can effectively do its intended task of snooping on HTTPS traffic).
Now that I've discussed the tech, effectively answering the question in the title ("To what extent is filtering HTTPS traffic possible?"), let me provide a straightforward answer to the other question I see:
> can embedded HTTPS content such as YouTube videos be blocked from loading on websites?
As Fleet Command's answer noted, a firewall could simply notice that the destination IP address belongs to YouTube and block the traffic there. The end user would know that the traffic is blocked, because the page wouldn't load.
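That kind of address-based blocking is simple enough to sketch. (The address range below is a made-up example from a documentation block, not a real YouTube range.)

```python
import ipaddress

# Hypothetical blocklist of address ranges attributed to a video site.
# 203.0.113.0/24 is a reserved documentation range, used here as a stand-in.
BLOCKED_NETS = [ipaddress.ip_network("203.0.113.0/24")]

def is_blocked(dest: str) -> bool:
    """Drop the connection if the destination falls in a blocked range."""
    addr = ipaddress.ip_address(dest)
    return any(addr in net for net in BLOCKED_NETS)

print(is_blocked("203.0.113.7"))   # True: connection dropped, page fails to load
print(is_blocked("198.51.100.1"))  # False: traffic passes
```

Note that this requires no decryption at all, which is why it works even without any man-in-the-middle setup, and also why the blocking is so visible to the user.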
If MITM techniques are being deployed, then a device could theoretically allow a web browser to get some content from the website, while other content could be changed (including being blocked). For instance, video might be allowed, but comments could be changed (or vice versa). The end user would likely be oblivious to what is happening.
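As a simplified illustration of that selective blocking (a made-up page and a crude string match, nothing like a production filter):

```python
import re

def strip_youtube_embeds(html: str) -> str:
    """Remove <iframe> embeds pointing at YouTube, leaving everything
    else on the page untouched. A filtering device operating on
    decrypted HTTPS responses works in this same spirit."""
    return re.sub(r'<iframe[^>]*youtube\.com[^>]*>\s*</iframe>', "", html)

page = (
    '<p>An article.</p>'
    '<iframe src="https://www.youtube.com/embed/abc123"></iframe>'
    '<p>Comments below.</p>'
)

print(strip_youtube_embeds(page))
# The article and comments survive; only the video embed is gone.
```

Because the rest of the page loads normally, the user has little reason to suspect that anything was removed.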