Flash memory chips are divided into 528-byte pages, which may only be written while blank. Pages are grouped into blocks of 64 to 1024 pages (or possibly more), and erasing any page requires erasing every page in the same block.
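To make the geometry concrete, here is a quick sketch of the arithmetic with one plausible configuration (528-byte pages split into 512 data bytes plus a 16-byte spare area, 64 pages per block; the exact split and block size vary by chip):

```python
# Hypothetical NAND geometry matching the numbers above: each 528-byte
# page carries 512 bytes of sector data plus 16 spare bytes (typically
# used for ECC and mapping metadata), and 64 pages form one erase block.
PAGE_BYTES = 528
DATA_BYTES = 512
SPARE_BYTES = PAGE_BYTES - DATA_BYTES   # 16 spare bytes per page
PAGES_PER_BLOCK = 64

# Erasing "one page" really means erasing the whole block:
block_data_bytes = DATA_BYTES * PAGES_PER_BLOCK
print(block_data_bytes)  # 32768 -> a 32 KiB erase unit of sector data
```

So even though the host writes in 512-byte sectors, the smallest thing the chip can erase is tens of kilobytes, which is what forces the relocation scheme described below.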
Flash drives that are intended to let 512-byte sectors be written and rewritten in arbitrary order need rather complicated mapping and wear-leveling algorithms to accommodate this. Writing a sector won't immediately erase the old data; instead, the controller writes the new data to some other location that is known to be blank and marks the old copy as obsolete. If the supply of blank pages runs too low, the system looks for blocks whose contents are mostly obsolete, copies any live pages from such a block into some of the remaining blank space (marking each copied page as obsolete), and then erases the block once all of its pages are obsolete.
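The relocation scheme above can be sketched as a toy flash-translation layer. This is a hypothetical, greatly simplified model of my own (real controllers track erase counts for wear leveling, keep reserve blocks, and store the map in the spare bytes), but it shows the write-elsewhere-then-collect cycle:

```python
# Toy flash-translation-layer sketch (illustrative only). Each page slot
# is "blank", "live", or "obsolete"; rewriting a sector goes to a fresh
# blank page and the old copy is merely marked obsolete. When blank pages
# run out, garbage collection picks the block with the most obsolete
# pages, relocates its live pages, and erases the whole block.
BLANK, LIVE, OBSOLETE = "blank", "live", "obsolete"

class ToyFTL:
    def __init__(self, blocks=4, pages_per_block=4):
        self.state = [[BLANK] * pages_per_block for _ in range(blocks)]
        self.map = {}                      # logical sector -> (block, page)

    def _find_blank(self, avoid=None):
        for b, block in enumerate(self.state):
            if b == avoid:
                continue
            for p, s in enumerate(block):
                if s == BLANK:
                    return b, p
        return None

    def write(self, sector):
        slot = self._find_blank()
        if slot is None:                   # out of blank pages: collect first
            self._collect()
            slot = self._find_blank()
        b, p = slot
        if sector in self.map:             # old copy becomes obsolete, not erased
            ob, op = self.map[sector]
            self.state[ob][op] = OBSOLETE
        self.state[b][p] = LIVE
        self.map[sector] = (b, p)

    def _collect(self):
        # Victim: the block with the most obsolete pages.
        victim = max(range(len(self.state)),
                     key=lambda blk: self.state[blk].count(OBSOLETE))
        # Relocate any live pages out of the victim (a real FTL keeps a
        # reserve block so this is guaranteed to find room).
        for sector, (b, p) in list(self.map.items()):
            if b == victim:
                nb, np_ = self._find_blank(avoid=victim)
                self.state[nb][np_] = LIVE
                self.map[sector] = (nb, np_)
                self.state[b][p] = OBSOLETE
        # Only now can the whole block be erased back to blank.
        self.state[victim] = [BLANK] * len(self.state[victim])
```

Repeatedly rewriting even a single sector churns through every blank page before garbage collection reclaims anything, which is why the mapping machinery is unavoidable on a general-purpose drive.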
All of this extra mapping logic increases the complexity of the controller, and further reduces the fraction of the flash that is usable for storing useful data. By contrast, if a drive is designed for read-only access to a volume that will be written once during mastering and never written again, it can use a simpler controller and won't need to dedicate a substantial chunk of the flash (often 10% or so) to relocation tables and slack space.
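The cost of that overprovisioning is easy to quantify. The numbers below are illustrative, not taken from any particular drive:

```python
# Illustrative overprovisioning arithmetic: if ~10% of the raw flash is
# reserved for relocation tables and slack space, a drive built from a
# nominal 16 GiB of raw flash exposes only about 14.4 GiB to the host.
raw_bytes = 16 * 1024**3        # 16 GiB of raw flash (hypothetical)
reserved_fraction = 0.10        # ~10% held back by the controller
usable = raw_bytes * (1 - reserved_fraction)
print(usable / 1024**3)         # about 14.4 GiB usable
```

A write-once drive could hand essentially all of that reserved space back to the user, or simply ship with a cheaper, smaller chip.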
I don't know that any particular vendors ship software on purpose-designed "read-only" USB drives, but it would certainly make sense for them to do so. While doing so would eliminate the possibility of repurposing the drive once the machine it came with is no longer needed, it would make the drive more suitable for its intended purpose as a "known good" recovery medium.