This post was originally published at Walking the Wires
This blog post should be filed under the Public Service Announcement category. Recently, a customer contacted me with an issue he was observing when averaging a voltage measurement. He insisted that mean.vi was calculating the mean incorrectly!
The code we were using was:
The symptom was that the result of the mean operation seemed to always be the last value of the signal acquired.
After the denial phase of “there is no possible way that mean.vi is not calculating the average correctly,” we did further investigation.
Wait! Say what? … yes, as you can see in the image, when an array of waveforms is converted into a 1D DBL array, only the last element of each waveform is taken to form the resulting array.
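To make that coercion concrete, here is a minimal Python sketch of what the conversion effectively does (the sample values are made up, and LabVIEW itself is graphical, so this is only a model of the behavior):

```python
# A hypothetical array of three waveforms, each holding four Y samples.
waveforms = [
    [0.1, 0.2, 0.3, 0.4],   # waveform 0
    [1.1, 1.2, 1.3, 1.4],   # waveform 1
    [2.1, 2.2, 2.3, 2.4],   # waveform 2
]

# The array-of-waveforms -> 1D DBL array coercion keeps only the last
# sample of each waveform; every other sample is silently dropped.
coerced = [wf[-1] for wf in waveforms]
print(coerced)  # [0.4, 1.4, 2.4]
```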
Since the DAQmx task in the original code was configured for a single channel, we fixed the issue by simply changing the DAQmx Read selector from reading multiple samples from multiple channels to reading multiple samples from a single channel. Another option would have been to configure the DAQmx Read selector to output an array of doubles instead of an array of waveforms.
This little episode reminded me why I only use the waveform datatype when wiring directly to a graph or when using the waveform palette functions.
Happy wiring and beware of mean waveform data type coercions 😉
Fab
Thank you, Fab, that’s a good one. Testing code with reference data and known results is so important.
Happy wiring,
Adriaan
Adriaan,
You are correct, especially in this case. When we tried an array with a single waveform, we couldn’t reproduce the problem. We had to remove the DAQmx Read from the code to experiment with different waveform shapes until we found the right combination. Using simple numbers and writing down on paper what we expected to see made it obvious that we were getting the average of the last value of each waveform, not the average of all the values in the waveforms.
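As a quick sketch of that paper check, with made-up numbers rather than the original signal:

```python
# Made-up "simple numbers": two waveforms of three samples each.
waveforms = [[1.0, 2.0, 3.0],
             [4.0, 5.0, 6.0]]

# What we expected: the average of all the values in the waveforms.
all_samples = [s for wf in waveforms for s in wf]
expected = sum(all_samples) / len(all_samples)    # (1+2+3+4+5+6) / 6 = 3.5

# What the code actually computed: the average of the last value of each waveform.
last_values = [wf[-1] for wf in waveforms]
actual = sum(last_values) / len(last_values)      # (3+6) / 2 = 4.5

print(expected, actual)  # 3.5 4.5 -- the mismatch makes the coercion easy to spot
```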
Thanks for reading,
Fab
I’m not sure what feature NI was enabling when they allowed the transformation of multiple waveforms into one 1D array, but I would say that this is a good mating image to help give everyone an understanding as to why.
http://screencast.com/t/oo8U5yhbceJ
And unfortunately, this doesn’t transform, which would have made more sense.
http://screencast.com/t/vtpHOGgITT
Completely, totally agree! It should be a broken wire.
Sooooo, I was the one who led Fabiola to understand what was going on. She described the problem in a few sentences and I said words to the effect of, “I bet you have an array of waveforms converting to an array of doubles.”
I also wagered that the behavior was documented somewhere, but not any place that anybody would find it. I was overly optimistic about this; it’s not documented that well anywhere. There is a page about the Dynamic Data Type (affectionately called “The DDT”) that describes this behavior, but it’s pretty buried. There’s also a knowledgebase entry on ni.com.
There were some internal reasons we implemented it this way, but one of the driving use cases was the DDT, which was introduced in LabVIEW 7 Express, along with the concept of Express VIs. This was in the days when LabVIEW’s marketing material mostly consisted of “LabVIEW is easy!” The DDT is the universal, easy-to-use, do-everything data type. It can contain Booleans, scalars, analog waveforms–you name it. It’s Magic! And as Norm points out above, it magically coerces to other things and does something you didn’t even know you needed, without breaking any wires. Internally, the DDT is implemented as–you guessed it–an array of waveforms. So, the waveform data type was “enhanced” to have behavior that suited the magic desired for the DDT.
So, an example of the thinking was that if you acquired some temperature data (stored in a DDT as an array of waveforms) and wired it to an array of scalars, you would want the most recent temperature measurement of each channel. Voila!
My main complaint about the DDT and Express VIs was that they led you down a path where you didn’t need to learn about arrays, clusters, and loops. And then when you ran out of steam with what Express could offer, you had a big step function to learn those fundamental programming concepts–especially if you tried to mix Express and non-Express. “LabVIEW was easy, but now it’s hard.”
But I digress.
These days, a better slogan is “LabVIEW is Amazing!”, fully acknowledging that LabVIEW is powerful, but not always easy.
Digress away, Brian,
I completely agree with your comment.
“My main complaint about the DDT and Express VIs was that they led you down a path where you didn’t need to learn about arrays, clusters, and loops.”
It’s not as though arrays, clusters and loops are actually that difficult to get your head around, and learning them will help with your future programming efforts. So why shortcut this?
I think of Express VIs as a bit of a leg-up if you’re struggling with a blank page, so I’m not 100% against them.
And I’ve lost count of the number of “discussions” I’ve had about the “LabVIEW is easy” spiel and the damage it does to many an engineering career.
Thanks for posting btw Fab!