
I am planning to set up a poor man's ADC using a Raspberry Pi and an operational amplifier wired as a dual-slope integrating ADC, for sampling extremely low-frequency signals (< 10 Hz). My idea is to use the Linux performance counter to time the integration process.

The devil here is latency. Since the Pi can also handle edge-triggered interrupts at the kernel level, I thought of signalling the end of the integration process with an edge from the comparator output.

[Figure: Dual-slope integrator and comparator]

The performance counter would only need to be initialized when an ADC conversion is requested and read as soon as the interrupt fires. I would probably extend the kernel module to implement the poor man's ADC and expose the conversion result via sysfs, for instance.

Has anyone come up with a similar solution? Does it make sense at all to do it that way?

  • I'm not sure what you are trying to do. You seem to be trying to "count" something between two triggers. Is the thing you are trying to count time? I.e. are you trying to find the time between triggering a device and the device signalling it is ready? – joan May 02 '15 at 10:56
  • Exactly. The performance counter should start when the strobe is applied and stop upon receiving the interrupt signal. The conversion result should be n = (counter - a) / b, with a being the averaged pulse count of the first slope and b a normalizing ratio. The point is that the performance counter is a high-resolution counter (down to the microsecond). –  May 02 '15 at 11:02
  • You can get the time between two gpio events from userland accurate to about 2µs on average (using my pigpio library). I don't believe you will get more accurate timings from userland. If you write your own kernel module you may get more accuracy, I'm not sure; it would depend on knowledge of Linux interrupt handling which I do not have. – joan May 02 '15 at 11:23
  • Yup, I know about user-land timing. However, as I read here on SO, a high jitter is to be expected due to the non-realtime nature of Linux. Hence I considered timing from inside the kernel using the performance counter and an interrupt routine, which I linked to in my question. –  May 02 '15 at 11:27
  • That's why pigpio doesn't rely on Linux timing or interrupts to read the gpios. pigpio gpio reads and timing are handled by the DMA hardware which is pretty much immune from Linux scheduling problems. – joan May 02 '15 at 11:30
  • Sure, however user space introduces a notable and variable latency (like 7 to 25 microseconds) between the moment the GPIO pin is set/reset and the moment the program detects the change. That said, your library looks very interesting; I'll take a closer look. I'll also try comparing it with my idea. –  May 02 '15 at 11:37
  • Erratum: not 7 to 25 µs but 25 to 75 µs, as illustrated here. –  May 02 '15 at 11:55
  • Generally I'd say a minimum of 50µs for a notification to reach userland, but it is some time since I tested. I repeat: pigpio does not work that way. pigpio continually samples gpios 0-31 (200 thousand times per second by default). The changes are microsecond time-stamped, buffered, and emitted every millisecond. With the default setting each event's timing is accurate to +/- 5µs, i.e. the time between two events is accurate to +/- 10µs. The sampling rate can be increased to 1 million samples per second for a +/- 2µs accuracy. webm video – joan May 02 '15 at 12:04

0 Answers