One of my first electrical engineering professors would start his first lecture of the semester by declaring that engineers are in the business of predicting the future.

It took a few years for this to sink in, but he was absolutely correct. As engineers, we make a living by making precise predictions about how a yet-to-be-built system will behave. What we are really doing is following the scientific method.

Although the hardware and software teams at Delacor often work on separate projects, we are small enough to always be up to date on what the other folks are working on. We have slowly realized that much of what we do in the software and hardware worlds is very similar. Many of the software best practices we are passionate about have a direct equivalent in hardware development. For example, we often talk about Test Driven Development in software. In hardware, we don’t have a fancy term for it, but we apply essentially the same concepts to the development process.

In both software and hardware, the cheapest and most efficient way to go about development is to start with a clear and measurable idea of what the end result of the project should be. Of course, this is easier said than done, because ‘a clear and measurable idea’ implies a complete and absolute understanding of whatever you are designing (and therefore of its bugs!). Sounds hard, but we have to start somewhere…

On a hardware project, we usually start with an exercise to determine the minimum acceptable performance of the device we will design. We can save a lot of cost and effort if we have an agreed-upon list of features or specs before we start designing anything. This way, any time there is a question of whether we should spend more money on ‘better’ components, or add some extra functionality “for just a little bit more cost”, we can quickly refer back to our minimum acceptable performance and decide whether or not to proceed.

The table below shows an example of such a document. Each important spec has a minimum requirement, but some specs also have ‘nice to have’ targets, where it may be worth investing a little more if we can get superior performance. Finally, it is also important to have ‘will not do’ specs. Having a discussion up front about what features we will definitely not implement saves countless hours of debate and prevents the development team from investing in things we have already agreed not to do.

Some of you may be thinking, “yeah, right…what customer is going to give you all your requirements up front in a nice table like this?”…and you would be correct. In all our history of engineering design, NO customer has ever handed us a table like the one below and said, “this is what I want, here is a check.” The reality is that we ask a bunch of questions, do a lot of listening, and then we come up with the table and get our customers to agree to it.

Acceptable Specs Table. Most likely YOU will come up with this and get your customer to agree to it!
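To give a flavor of what goes into such a document, here is a trimmed-down hypothetical version (the specs and numbers are made up for illustration, not taken from a real project):

Spec                    Minimum acceptable   Nice to have
Output channels         4                    8
Output accuracy         ±1.0 %               ±0.5 %
Operating temperature   0 to 50 °C           -20 to +70 °C

Will not do: wireless connectivity, battery operation.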

We do something very similar for software projects. In the case of proofs of concept or prototypes, we ask the customer for their ‘demo script’. In other words, what functionality would they have to show an investor (or their board of directors, etc.) in order for the project to keep going? We also ask other questions to determine the most critical feature in the application, whether they plan to run from an executable or from source code, which parameters they specifically want to change, and which features of their device under test they would like to cover. In summary, we want as clear an idea as possible of how the end product is going to be used and of the minimum set of features we have to deliver for the project to be considered complete.

Again, this is easier said than done, since customers often do not really know what they need. In these cases, we work with our customers to still define a minimum set of requirements, with the understanding that some of them may be ‘padded’ to account for uncertainty. The key point is that we always start with a clear idea of what we are going to do, and we work with customers to define what they need (or what they think they need) before we start addressing what they want.

Doing otherwise is a recipe for frustration and expensive mistakes. For example, we once had a customer with a photonics sensor board that used a DAC to drive light sources. All they knew was that they needed a “12-bit DAC”, so they were using one of the most expensive 12-bit DACs on the market. Once we started working with them and analyzed their entire system, we were able to prove that other circuits in the system dominated the output error, and that a DAC costing 30% as much as the original part would add no measurable error at the output of the system.
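To make the arithmetic concrete (with illustrative numbers, not the customer’s actual ones): a 12-bit DAC spanning 0–5 V has a step size of 5 V / 4096 ≈ 1.2 mV. If the downstream analog chain contributes, say, ±10 mV of offset and drift error, then the difference between a premium DAC with ±0.5 LSB of linearity error (±0.6 mV) and a cheaper part with ±2 LSB (±2.4 mV) simply disappears into the error budget of the rest of the system.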

Similarly, we had a software customer who wanted a data collection application where all data was immediately saved to disk “with no buffering”. The application worked fine on systems with SSD storage, but older systems with traditional spinning hard drives were too slow, the operating system would use its own buffering, and the application would keep writing data for up to two hours after a data collection session ended!
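Our applications are written in LabVIEW, but the underlying issue is language-agnostic. Here is a minimal Python sketch (a hypothetical logger, not the customer’s code) of what “really no buffering” entails: draining both the application buffer and the OS write cache on every record, which is exactly what makes it painfully slow on a spinning disk.

```python
import os

def log_records(path, records, truly_unbuffered=False):
    """Write records to disk, optionally forcing every one onto the platters."""
    with open(path, "w") as f:
        for rec in records:
            f.write(rec + "\n")
            if truly_unbuffered:
                f.flush()             # drain the application-level buffer
                os.fsync(f.fileno())  # ask the OS to commit its write cache
```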

So, in both hardware and software, it pays to thoroughly understand the implications of each design decision. Benchmarking critical tasks on the target system (or prototyping small sections of the hardware) saves time and money in the long run.
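As a hypothetical sketch of the kind of quick benchmark we mean (again in Python, reusing the log_records function from the previous sketch), time the critical task on the actual target machine before committing to a design:

```python
import time

def benchmark(task, repeats=5):
    """Time a critical task on the target machine; several runs to spot variance."""
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        task()
        times.append(time.perf_counter() - start)
    print(f"min {min(times):.3f} s, max {max(times):.3f} s, "
          f"avg {sum(times) / len(times):.3f} s over {repeats} runs")

# For example, compare buffered vs. truly unbuffered logging on the target disk:
# benchmark(lambda: log_records("test.dat", ["x" * 512] * 10_000))
# benchmark(lambda: log_records("test.dat", ["x" * 512] * 10_000, truly_unbuffered=True))
```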

Design Before You Build

It’s tempting to go straight into drawing a schematic or wiring a LabVIEW block diagram, but in order to build the most efficient and reliable design you have to know what its expected behavior will be, and then match that up against your minimum requirements list to verify that the system will do everything it needs to do. This is the only way to guarantee that the design will work, whether you build one or ten thousand of them.

Sample Mathcad document: we start our hardware designs with something like this…not a schematic!

Does it do what I thought it would do?

We always tell our customers that designing a test plan is something we do during the development stage, not after the design is complete. The fancy term for this is ‘Design for Test’.  This results in products that are easy to test, since test points and any required ‘hooks’ become part of the design, rather than warts to be added later.

We want this…

…not this!

In addition, we want our test plan to include a clear description of each test we will perform, including how to perform the test and what results we should expect to see on a typical working unit. Note that this expected behavior is different from the specified behavior we may give customers, since the published spec includes additional tolerance and guard bands to account for process variation.
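As a made-up example of what that guard banding looks like: if the published spec for an output is 10.0 V ± 1.0 %, the test plan for a typical working unit might expect 10.0 V ± 0.6 %, leaving the remaining 0.4 % as a guard band for temperature drift and unit-to-unit process variation.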

In the software world, most of this translates into designing an effective set of unit tests before even writing the main application code. At Delacor we use several levels of unit testing, including manual DQMH API testers and integration tests, as well as fully automated unit tests.
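Our own unit tests live in LabVIEW, but the test-first idea translates to any language. A hypothetical Python example (the signal_chain module and gain_stage function are inventions for illustration), written before the code it exercises exists:

```python
import unittest

from signal_chain import gain_stage  # hypothetical module under test

class TestGainStage(unittest.TestCase):
    """Written first: these tests encode our prediction of the module's behavior."""

    def test_nominal_gain(self):
        # Predicted: 10 mV in -> 1.000 V out, within the tolerance
        # we expect from a typical working unit.
        self.assertAlmostEqual(gain_stage(v_in=0.010), 1.000, delta=0.005)

    def test_clips_at_rail(self):
        # Predicted: the output saturates at the 3.3 V rail, never beyond.
        self.assertLessEqual(gain_stage(v_in=1.0), 3.3)

if __name__ == "__main__":
    unittest.main()
```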

The idea behind the test plan is not to answer the question “does this work?” The question we really want to address is “does this device behave the way I predicted it would?” If it doesn’t, our goal is not just to “fix it”, but to understand why it behaves the way it does. This is the only way to guarantee that whatever change we implement to address the problem will truly fix the issue.

A small sample from one of our test plans…

In addition to these engineering best practices, we have learned a few other simple but useful tips over the last 18 years of playing around with hardware and software…

Quick Prototypes

It is always tempting to wait until we can test complete systems, but it is much more efficient to test and prototype small sections of a design, so we can concentrate on understanding everything about each section before integrating it into the larger system.

When designing hardware, we can spin simple little prototype boards in a few days for a few hundred dollars. This has saved us thousands of dollars on many occasions. One caveat here is that we are careful not to try to characterize the prototype. In other words, we don’t claim that every other circuit like it will behave the same way just because of what we measured on one board. Rather, we prod and poke at it until we understand how it works and our mathematical models match the observed behavior. At that point, we can confidently say that as long as all the parts are put on the board correctly, every single one we build will behave according to our model.

A cool prototype board to break up all this text!

Similarly, when writing software, you can prototype sub-sections of your application in isolation. Invest a few hours in a functional (but not necessarily pretty) API tester that lets you prod and poke at your module until you fully understand all of its behavior. Just as with hardware, investing hours or days now can save weeks later.
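In DQMH this role is played by the module’s API tester; a bare-bones text-language equivalent (sketched in Python, reusing the hypothetical signal_chain module from above) might be nothing more than a command loop wrapping the module’s public calls:

```python
import signal_chain  # hypothetical module under test

# One entry per public API call; add more as the module grows.
COMMANDS = {
    "gain": lambda arg: print(signal_chain.gain_stage(v_in=float(arg))),
}

while True:
    line = input("tester> ").strip()
    if line in ("quit", "exit"):
        break
    cmd, _, arg = line.partition(" ")
    try:
        COMMANDS[cmd](arg)
    except KeyError:
        print(f"unknown command; try: {', '.join(COMMANDS)}, quit")
    except Exception as exc:  # surface surprising behavior, don't hide it
        print(f"error: {exc}")
```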

Blinky Lights!

It sounds obvious, but one of the simplest, cheapest, and most effective debugging tools for deployed systems is some blinky lights! When you’re trying to troubleshoot a problem remotely, or when your customer is trying to fix a problem themselves, having a green light that indicates the system is working can save lots of debugging time.

This works in software, too! We always include a blinking status ‘light’ on the front panel of our applications. As long as the light is blinking, you know the software initialized correctly and is not frozen.
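In our LabVIEW applications the indicator is a front-panel LED toggled by the application itself; a minimal Python sketch of the same idea (with print() standing in for updating the indicator) looks like this:

```python
import time

# The main loop itself toggles the status indicator on every pass.
# If the loop ever hangs, the "LED" stops blinking, which is exactly
# the diagnostic you want when troubleshooting remotely.
led_on = False
while True:
    # ...one iteration of the application's real work goes here...
    led_on = not led_on
    print("status LED:", "ON" if led_on else "off")
    time.sleep(0.5)  # placeholder for the loop's real pacing
```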

If these lights are a-blinkin’, the application is a-running!

Don’t Plan on Shipping Rev A

We have seen some large companies encourage their employees to ‘ship Rev A’ of whatever they are making. Another name for this is “get it right the first time”. This may work great for folks whose jobs involve doing the same thing every day, but from an engineering point of view, it is the wrong mindset! If you design your system with the goal of shipping Rev A, you will be severely limited in your ability to innovate, try new concepts, and make a better product. If you start your project knowing there will be two or three revisions, and allow time and budget for them, you can take some initial risks and try things you have never done before.

Of course, another downside of striving to ship Rev A is that it takes far too long to get the hardware back, since every little detail must be checked and double-checked…only to find out you missed one of a thousand little things and must spin another version anyway!

In addition, if you know Rev A is not going to be the shipping product, you can take some shortcuts (like being less rigorous in your analysis, or making other tradeoffs that get your boards back faster), as long as you make absolutely sure to go back in the next revision and apply all of the engineering best practices required for a quality product.

Again, it’s not much different with software. You can start with a proof of concept (POC) and make sure all of your cool and innovative ideas work the way you expect. Just don’t try to duct-tape your POC into the final application. Go back and clean things up, applying software engineering best practices as you incorporate the concepts proven in your POC into the main application.

A pretty board we did a while back…shipped as Rev C!

Reference Designs

Finally, we have learned to use reference designs to save days and weeks of work. Once we have a proven and verified circuit, we will use it again and again on other projects with similar requirements. With software, we create reusable code and save it to its own repository to be used again as required.

In addition to the DQMH, we have quite a few internal templates and reference designs. Yeah, we’re happy to sell a license to these 😉

One caveat here: always be wary of vendor-provided reference designs. In both software and hardware, the goal of a vendor reference design is to show the vendor’s product in the best possible light, and it may not be the best solution for your real-life problem. It is still useful to review vendor reference designs and learn any ‘quirks’ or tricks that may not be fully documented elsewhere, but do your own due diligence rather than assuming, “big bad company X came up with it, so it must work just fine!”

We are not that different after all!

As we’ve seen in this long article, software and hardware design are not that different from each other. In the end, we are all doing engineering, and the same principles apply. The nifty table below shows some of the hardware and software tasks we typically do during a project, and how they relate to each other.

The only difference between software and hardware is that with hardware we only get to press the Run arrow once every month, and each time it costs around $5,000 😉