
Parallel Programming In Automation: Hype and Reality

I’m thinking about re-writing an application, and have been looking at adding in a little bit of parallelism.  I suspect it’s like many automation applications: there isn’t any potential for massive speed increases from using a multi-core processor, since most of the time is spent loading/unloading the part and in sequenced motion.  The data handling is so quick on a modern CPU that there’s no point trying to speed it up; instead, I’m looking at doing, say, network access in parallel with motion.
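That kind of overlap can be sketched with a single background thread: kick off the network work, run the motion sequence, then join before the next cycle. This is a minimal Python illustration (the author's target is .NET, where a `Task` would play the same role); `upload_cycle_data` and the sleeps standing in for network latency and motion are hypothetical.

```python
import threading
import time

def upload_cycle_data(data, done):
    # Stand-in for a network call, e.g. posting cycle results to a server.
    time.sleep(0.05)          # simulated network latency
    done.append(data)

def run_cycle():
    done = []
    # Start the upload in the background...
    t = threading.Thread(target=upload_cycle_data, args=({"part": 42}, done))
    t.start()
    # ...while the "sequenced motion" runs in the main thread.
    time.sleep(0.05)          # simulated motion time
    t.join()                  # wait for the upload before starting the next cycle
    return done

results = run_cycle()
print(results)  # [{'part': 42}] - the upload overlapped with the motion
```

The cycle time is roughly max(motion, network) instead of their sum, which is all the speedup this kind of application really needs.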

There are a wide variety of approaches to parallel programming, including:

  • Multiple processes, which are very heavyweight.
  • Traditional threading.  Most programmers find it very hard to write bug-free multi-threaded code.
  • Asynchronous calls, which have limited scalability but can still add considerable complexity.
  • Actor model, used in Erlang and Scala.
  • Software Transactional Memory, used by Clojure and Haskell.
  • Fork/Join
  • Agents, used by Clojure.
  • Dataflow variables, used by Oz programming language.
  • Dataflow programming, used by LabView.
  • Microsoft’s Task Parallel Library supports various techniques including parallel For/Foreach loops, parallel invoke, parallel LINQ, and actors.
  • And I’m sure there are many more…
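To make one of these concrete, here is the actor model boiled down to its essentials in Python: an actor owns its state privately and communicates only through message queues, so no locks are needed. (Erlang, Scala/Akka, and the TPL's actor support are far richer; this sketch and its running-total actor are purely illustrative.)

```python
import queue
import threading

def totalizer(inbox, outbox):
    # A tiny actor: it owns its state (total) and reacts only to messages,
    # so no other thread ever touches that state directly.
    total = 0
    while True:
        msg = inbox.get()
        if msg is None:       # "poison pill" message shuts the actor down
            break
        total += msg
        outbox.put(total)

inbox, outbox = queue.Queue(), queue.Queue()
actor = threading.Thread(target=totalizer, args=(inbox, outbox))
actor.start()

for n in (1, 2, 3):           # send messages; never share memory
    inbox.put(n)
inbox.put(None)
actor.join()

replies = [outbox.get() for _ in range(3)]
print(replies)  # [1, 3, 6]
```

Because all communication goes through the queues, the usual data-race bugs of shared-memory threading simply can't occur.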

Andy Glover and Alex Miller discuss many of these approaches in an information-packed IBM developerWorks podcast.  I’m certainly no expert, but I strongly believe that there won’t be one dominant approach to parallel programming, and I don’t think parallel programming will ever be easy.  Just creating a good (meaning maintainable, extensible, testable, and reliable) single-threaded program isn’t easy; adding parallelism adds another layer of complexity.  A naive parallel program can actually take longer to run than a single-threaded one.
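The slowdown is easy to reproduce: if the work per task is tiny, the cost of creating and synchronizing threads swamps it. This deliberately naive Python example (one thread per element, a made-up workload) demonstrates the effect:

```python
import threading
import time

nums = list(range(500))

# Single-threaded: one tight loop.
t0 = time.perf_counter()
serial_total = sum(nums)
serial_time = time.perf_counter() - t0

# Naive "parallel" version: one thread per element, with a lock.
results = []
lock = threading.Lock()

def add(n):
    with lock:
        results.append(n)

t0 = time.perf_counter()
threads = [threading.Thread(target=add, args=(n,)) for n in nums]
for t in threads:
    t.start()
for t in threads:
    t.join()
parallel_total = sum(results)
parallel_time = time.perf_counter() - t0

print(serial_total == parallel_total)   # True - same answer...
print(parallel_time > serial_time)      # True - ...but far slower
```

The answers match, but the "parallel" version loses badly because thread creation and locking cost orders of magnitude more than the arithmetic they protect.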

There are also a variety of goals: do you want parallelism to speed up massive calculations (such as simulations), to scale to a massive number of users (such as web programs), or for extremely high reliability (such as telecom switches)?  I highly doubt these divergent goals will share a single solution; for example, GPUs can be great for speeding up simulations, but won’t help with telecom reliability.

So that’s why I get skeptical when companies promote their approach as “painless parallel programming” with wonderful speedup.  Sure, you might get that promised speedup by replacing a 2-core CPU with a 12-core CPU, but only if both your problem and the way you’ve framed it are well suited to that tool’s approach.

For my problem, I have several constraints: .NET is strongly preferred, others need to be able to maintain the code (so no F#), and I value simplicity over performance.  I’m looking at either traditional threading, Microsoft’s TPL, or an actor/message-passing approach.

As a side note, theoretically PLCs should easily handle parallel programming, since they’re based on combinatorial logic.  Just create a PLC-to-FPGA compiler that translates the entire PLC program to gates in the FPGA, and run your PLC program simultaneously, without a scan sequence, at MHz clock rates!  The problem, of course, is that most PLC programs rely on the order of execution within the PLC scan sequence, and many advanced PLC functions don’t easily translate to FPGA logic.

