Update: April 2016. The CMS collaboration has made about 300 terabytes of LHC data (and the tools and tutorials needed to understand it) openly available. The data are reconstructed to the particle-track level, so they are not actually raw, but there is still a lot to know to make good use of them.
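To give a feel for what "reconstructed" data looks like on disk, here is a minimal sketch of peeking into a ROOT file with the Python uproot library. The file name and the branch name are hypothetical placeholders, not the actual CMS Open Data layout; take the real structure from their tutorials.

```python
# Sketch only: the file path and branch names below are hypothetical
# placeholders, not the real CMS Open Data layout.
import uproot  # pip install uproot awkward

with uproot.open("opendata_sample.root") as f:
    print(f.keys())                     # list the objects (trees) stored in the file
    events = f["Events"]                # a typical (assumed) tree name
    print(events.num_entries)           # how many collision events it holds
    # Pull one hypothetical branch into a NumPy array for a quick look.
    pt = events["Muon_pt"].array(library="np")
    print(pt[:5])
```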
The raw data from modern particle physics experiments are many terabytes (even petabytes) in size, and quite complicated.
For collider experiments the detectors are compound, layered devices with three or more different technologies used by five or more distinct subsystems, plus ancillary monitoring of detector performance, temperature and humidity conditions in the experimental hall, data provided by the accelerator operating crew on the state of the beam, and on and on and on. There are tens of thousands of individual detector channels and hundreds of "slow" devices (thermometers, magnet currents, beam current monitors, etc.). All of this has been pre-filtered by the trigger hardware (and exactly what filtering was applied changes over time).
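To make that layering concrete, here is a toy sketch (in Python, with invented field names) of the kind of nested record a single triggered event plus its slow-control context might unpack into. No experiment uses exactly this format; it is only meant to show how much sits alongside the physics signals.

```python
# Toy illustration only: the field names and structure are invented,
# not any experiment's actual event format.
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class SubdetectorHit:
    channel_id: int      # one of the tens of thousands of readout channels
    adc_counts: int      # raw digitized signal, before any calibration

@dataclass
class SlowControlSnapshot:
    timestamp: float
    readings: Dict[str, float]  # e.g. {"hall_temp_C": 21.3, "magnet_current_A": 18000.0}

@dataclass
class TriggeredEvent:
    event_number: int
    trigger_bits: int    # which trigger paths fired (the hardware pre-filter)
    tracker_hits: List[SubdetectorHit] = field(default_factory=list)
    calorimeter_hits: List[SubdetectorHit] = field(default_factory=list)
    muon_hits: List[SubdetectorHit] = field(default_factory=list)
    slow_control: Optional[SlowControlSnapshot] = None
```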
For neutrino experiments the data are detailed records of the charge collected by the photo-tubes (some combination of total charge in a window, peak voltage, peak time, onset time, and/or digitized waveforms) for hundreds or thousands of PMTs. Plus environmental monitoring like that done by the collider people.
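As a rough illustration of those quantities, here is a sketch of pulling them out of a single digitized PMT waveform with NumPy. The sampling period, baseline window, and threshold are made-up example values, not any experiment's calibration.

```python
# Sketch: extract simple pulse features from one digitized PMT waveform.
# The sampling period, baseline window, and threshold are made-up examples.
import numpy as np

def pulse_features(waveform, dt_ns=2.0, threshold=5.0):
    """waveform: 1-D array of ADC samples for a single PMT channel."""
    baseline = waveform[:20].mean()           # estimate baseline from early samples
    pulse = waveform - baseline               # baseline-subtracted samples
    peak_index = int(np.argmax(pulse))
    above = np.nonzero(pulse > threshold)[0]  # samples above threshold
    onset_time = above[0] * dt_ns if above.size else None
    return {
        "total_charge": float(pulse.sum() * dt_ns),   # crude integral over the window
        "peak_amplitude": float(pulse[peak_index]),   # in (baseline-subtracted) ADC counts
        "peak_time_ns": peak_index * dt_ns,
        "onset_time_ns": onset_time,
    }

# Quick check with a fake Gaussian pulse:
t = np.arange(100)
fake = 3.0 + 40.0 * np.exp(-0.5 * ((t - 50) / 4.0) ** 2)
print(pulse_features(fake))
```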
In both cases there are scads of calibration data, changes in operating conditions throughout the data-taking period, and sometimes replacement or re-tuning of sub-systems part way through.
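Calibration constants are typically keyed to an interval of validity (a run or time range), so applying them means looking up which constants were in force when a given event was recorded. A minimal sketch of that kind of lookup, with made-up run numbers and gain values, might look like this.

```python
# Sketch of an interval-of-validity lookup for a calibration constant.
# The run ranges and gain values are made up for illustration.
import bisect

# (first_run, gain) pairs, sorted by first_run; each applies until the next entry.
GAIN_IOVS = [
    (1000, 0.98),   # initial calibration
    (1500, 1.02),   # after a re-tuning part way through data taking
    (2200, 0.95),   # after a sub-system was replaced
]
_STARTS = [start for start, _ in GAIN_IOVS]

def gain_for_run(run_number):
    i = bisect.bisect_right(_STARTS, run_number) - 1
    if i < 0:
        raise ValueError(f"no calibration known for run {run_number}")
    return GAIN_IOVS[i][1]

print(gain_for_run(1600))  # -> 1.02
```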
There are typically many tens of thousands of lines of custom computer code for opening and processing the data files, code written by physicists. Now, particle physicists are a little more professional about coding than some of their peers, but that does not mean state-of-the-art process and beautiful code.
It generally takes many thousands of grad-student and post-doc hours to reduce this to something from which physics can be extracted.
There is a reason we call this "Big Science".
That said, you generally can get the data. Eventually. (Each collaboration will hold theirs for a while to ensure they get to publish first.)
How do you get it? Just ask.
But you'll have to:

- provide your own storage (and possibly copying hardware);
- come to where the data are kept;
- understand that the documentation will be scattered over hundreds or thousands of internal (to the collaboration) documents, written as the work went along by diverse authors, some of whom have English as a second or third language (and may show some idiosyncrasies);
- accept that help interpreting all this will be terse, as these people have moved on and have other projects keeping them busy.

And you may have to convince the people with the data that you have the capacity to manage it.
The availability of partly processed data sets is not something I am as sure about, but you could try asking for that too. The worst that can happen is you get told "No". But even if you can get it, don't imagine that it is easy to work with.
If I haven't dissuaded you, let me suggest a practical method for getting started. Go to the nearest university that has a nuclear or particle physics group, and ask to help out. Really. There is always a need for lab monkeys, and you will learn as you go along because you can't do the work if they don't teach you stuff.
In the process you'll
- Learn how some of the sub-systems work. Get a feel for what kind of raw data they return and how it is processed into less raw data. If you ask, people will tell you how the less raw data can be transformed into still more physics-like information and eventually reconstructed into particles (a cartoon of that chain is sketched after this list).
- Make some contacts in the business. Being able to say "I work with Prof. Smith at Podunk U." is much better than "I'm interested" when it comes to getting access to data.
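As a cartoon of the raw-to-reconstructed progression mentioned in the first point above: raw counts get calibrated into positions or energies, and those get fitted into particle candidates. Every number below, and the naive straight-line "fit", is invented; real reconstruction is enormously more involved.

```python
# Cartoon of the chain: raw hits -> calibrated hits -> a track candidate.
# All numbers and the naive straight-line fit are invented for illustration.
import numpy as np

raw_hits = [(1, 412), (2, 398), (3, 405)]        # (channel_id, adc_counts)

# Step 1: "calibrate" raw channel readings into positions with made-up constants.
channel_positions = {1: (0.0, 0.10), 2: (1.0, 1.05), 3: (2.0, 2.02)}  # (x, y) in cm
calibrated_hits = [channel_positions[ch] for ch, _ in raw_hits]

# Step 2: "reconstruct" a track by fitting a straight line through the hit positions.
xs = np.array([x for x, _ in calibrated_hits])
ys = np.array([y for _, y in calibrated_hits])
slope, intercept = np.polyfit(xs, ys, 1)
print(f"track candidate: y = {slope:.2f} x + {intercept:.2f}")
```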