As others have mentioned, the current HTTP protocol provides limited support for this.
There's the data URI scheme, which lets you base64-encode binary objects such as images and inline them, effectively combining the HTML and the object into one file. This reduces cacheability, but it can still be worthwhile for small objects because it cuts down on the number of short-lived connections and on the transfer of uncompressed headers. Google's mod_pagespeed Apache module performs this trick, among others, automatically for objects of 2 KB and less. See http://code.google.com/intl/nl/speed/page-speed/docs/filter-image-optimize.html
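As an illustration, here's a minimal Python sketch that turns a small file into a data URI you could drop into an `<img>` tag; the `icon.png` filename is just a placeholder:

```python
import base64
import mimetypes

def to_data_uri(path):
    """Encode a small file as a data URI suitable for inlining in HTML."""
    mime, _ = mimetypes.guess_type(path)
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return f"data:{mime};base64,{encoded}"

# The resulting string goes straight into the src attribute, e.g.:
#   <img src="data:image/png;base64,iVBOR..." alt="icon">
print(to_data_uri("icon.png"))  # hypothetical small image
```

Keep in mind the base64 encoding adds roughly a third to the object's size, which is why this only pays off for small files.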
HTTP pipelining and keep-alives are useful but, as Koos van den Hout mentions, they only work on objects of known size, since the client needs to know where one response ends and the next begins. Furthermore, neither pipelining nor gzip compression does anything about the uncompressed transfer of headers and cookies.
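To make that concrete, here's a rough Python sketch of pipelining over a raw socket: two requests are written back-to-back on one connection before any response is read. The host is a placeholder, and many servers ignore or mishandle pipelined requests, so treat this purely as an illustration:

```python
import socket

HOST = "example.com"  # placeholder host

# Two requests sent back-to-back on one keep-alive connection.
requests = (
    f"GET / HTTP/1.1\r\nHost: {HOST}\r\n\r\n"
    f"GET /favicon.ico HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n"
).encode("ascii")

with socket.create_connection((HOST, 80)) as sock:
    sock.sendall(requests)           # both requests leave before any response arrives
    response = b""
    while chunk := sock.recv(4096):  # responses come back in order; the Content-Length
        response += chunk            # (a known size) is what lets the client split them

print(response[:200])
```

The requirement for a known size falls out of the last comment: without a Content-Length (or chunked encoding), the client can't tell where the first response stops and the second starts.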
An interesting development is Google's research project SPDY, which does almost exactly what you suggest. Among other things, it multiplexes multiple HTTP requests over a single TCP connection, interleaving and prioritizing the resources, and it compresses the entire stream, including headers and cookies. Tests have shown around a 50% reduction in page load times, so I'd definitely keep an eye on this project.
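A quick way to see why compressing headers matters: deflate a made-up but realistic header block and compare sizes (SPDY uses a zlib-based scheme with a shared dictionary; this sketch just uses plain deflate on an invented request):

```python
import zlib

# Invented but typical header block, including a cookie.
headers = (
    "GET /mail/inbox HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36\r\n"
    "Accept: text/html,application/xhtml+xml\r\n"
    "Accept-Encoding: gzip, deflate\r\n"
    "Cookie: session=abcdef0123456789; prefs=compact; tracking=xyz\r\n\r\n"
).encode("ascii")

compressed = zlib.compress(headers)
print(f"raw: {len(headers)} bytes, deflated: {len(compressed)} bytes")
```

Multiplied across the dozens of requests a typical page makes, and with SPDY's shared compression context making repeated headers on later requests even cheaper, those bytes add up.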
Google's Chrome browser is already using the SPDY protocol when communicating with Google sites like Gmail and Search. You can find some internal diagnostics by entering about:net-internals in the location bar.