Posts from — November 2007
No, I’m not quitting, it’s just time for a quick reflection. I’ve done 20 posts in 25 weeks. That’s a bit less than my goal of a post a week, but not bad considering everything going on in my life.
I’m going to be re-doing the website a bit (not a major makeover, but adding some necessary improvements). I’m also going to finish some “light” and off-topic posts I have in the pipeline. Then I plan to get back to some of the series.
My biggest regret is that the PCB series has stalled. It may take a bit more time to get going (since I actually have to learn Eagle PCB – my previous boards used software from a now (deservedly) defunct company), but I will finish it.
I may delete comments which appear to be non-obvious attempts at comment spam or SEO (search engine optimization) – if a comment doesn’t add to the conversation (either on or off-topic) and has a gratuitous link to a website, it may go poof!
November 27, 2007 No Comments
Regression testing tries to verify that software changes do not cause current functionality to fail. A few years ago I wrote software to do regression testing for jobs on a Cognex Insight vision system. Since I do not have a Cognex system at my desk, I cannot give all the juicy details, but I can give an outline.
Machine builders often build a machine to work on a small set of sample parts (say 50). But when it gets into production with many more parts, there are often problems because the production parts show wider variations than the samples (or the manufacturer has made changes).
It’s the same for machine vision – you have to pick one particular part to start designing your machine vision job. Suppose the job works well in production, but somebody wants better results – so you grab a new part and get to work, right? Well, if you’re not careful, you can end up with a vision job that works great for that new part, but not for a typical part, and thus end up worse than you started.
So what I did was save a whole bunch of pictures of good parts and bad parts from production runs, then after I made any changes to the vision job, I ran my part database through the camera, and checked the results.
The Insight cameras use some sort of PowerPC processor, have Ethernet, RS-232 serial, and a bit of digital I/O (e.g. for trigger input). The Insight Explorer user interface software runs on a PC, is written in Java (and works pretty well; note that some newer Cognex products use the .NET framework), and uses a spreadsheet approach to machine vision, which has its pluses (such as simplicity) and minuses (like trying to sequence actions).
I like having the cameras on the network; you can work at your desk with the camera mounted far away. Cognex makes it especially nice by using standard Internet protocols, such as ftp to load and save jobs and pictures (BTW, if you need a free Windows ftp server or client, you should look at FileZilla), and telnet to control the camera.
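As a rough illustration of how little is needed, loading a job file over FTP takes nothing beyond Python’s standard ftplib. The host, login, and the file-naming helper below are assumptions for the sketch, not details of any particular camera:

```python
from ftplib import FTP

def job_filename(name):
    """Normalize a job name to its on-camera file name.
    The .job extension is an assumption about the naming convention."""
    return name if name.lower().endswith(".job") else name + ".job"

def download_job(host, name, user="admin", password=""):
    """Fetch a vision job file from the camera over standard FTP.
    Host and credentials are placeholders."""
    filename = job_filename(name)
    with FTP(host) as ftp:
        ftp.login(user, password)
        with open(filename, "wb") as f:
            ftp.retrbinary("RETR " + filename, f.write)
    return filename
```

Saving a job or grabbing a picture off the camera works the same way in the other direction (`STOR` instead of `RETR`).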
Using telnet is basically like having a command line interface to the camera. You can do a lot with the camera (load jobs, trigger the camera, insert data, get results, etc), and it’s easy to test out ideas at the telnet command line, then codify them into a program.
I used Python with a free, open source telnet library. At least at the time, there was no free telnet library for .NET, and it didn’t make sense to buy one for this simple application. Then I wrote a Python module to do all the camera control I needed.
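The control module doesn’t have to be fancy – the camera speaks a line-oriented protocol over the telnet port. Here’s a minimal sketch using raw sockets (so it’s self-contained, rather than depending on a telnet library); the success-code convention in `succeeded` is an assumption about the reply format, not the documented protocol:

```python
import socket

class InsightCamera:
    """Minimal line-at-a-time control wrapper for a networked camera.
    Send a command string, read back a one-line reply."""

    def __init__(self, host, port=23, timeout=5.0):
        self.sock = socket.create_connection((host, port), timeout)
        self._buf = b""

    def command(self, text):
        """Send one command line and return the one-line reply."""
        self.sock.sendall(text.encode("ascii") + b"\r\n")
        return self._readline()

    def _readline(self):
        # Accumulate bytes until a full CRLF-terminated line arrives.
        while b"\r\n" not in self._buf:
            self._buf += self.sock.recv(1024)
        line, self._buf = self._buf.split(b"\r\n", 1)
        return line.decode("ascii")

def succeeded(reply):
    """Replies start with a status code; here '1' is taken to mean
    success (an assumption about the convention)."""
    return reply.strip().startswith("1")
```

The nice part is that anything you try interactively at the telnet prompt maps one-for-one onto `camera.command(...)` calls.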
To do the regression testing, I wrote a Python program that loaded the desired vision job, then went through the database of pictures, loading each picture, triggering the camera (so it would run the job on the loaded picture), recording the result, comparing the result to the desired result, and then scoring the overall results.
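The loop above can be sketched like this. The camera method names (`load_job`, `load_image`, `trigger`, `get_result`) are placeholders for whatever the control module provides, not a real API, and the pass/fail scoring is deliberately simplistic:

```python
def run_regression(camera, job, cases):
    """Run a vision job against a database of saved pictures.

    cases: list of (image_file, expected_result) pairs.
    Returns (number passed, list of (image, expected, actual) failures).
    """
    camera.load_job(job)
    failures = []
    for image, expected in cases:
        camera.load_image(image)   # push the saved picture to the camera
        camera.trigger()           # run the job on the loaded picture
        actual = camera.get_result()
        if actual != expected:
            failures.append((image, expected, actual))
    passed = len(cases) - len(failures)
    return passed, failures
```

A run with zero failures means the job change didn’t break anything in the part database; any failures tell you exactly which saved pictures regressed.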
Theoretically, it would be possible to do the same thing using the Insight Explorer software without using a real camera. In that case, you would use a GUI functional testing library (and some good free ones exist for Python) to load the jobs and pictures, then check the results. However, since Insight Explorer is written in Java, the normal GUI testing tools did not work (Java visual widgets aren’t the same as the native Windows ones – OK, maybe if you’re using SWT, but I think they were using Swing). I had limited success automatically inserting keystrokes and waiting, but it wasn’t reliable.
November 16, 2007 5 Comments
This is an extended response to a question asked by Gary Mitchell on his Feed Forward blog: Where do you go for depth? I have so many thoughts on this subject, and the broader topic of good information sources, that it will take several posts.
First off, I am a book junkie. I might be in hibernation now (after marriage and children), but if I go to a book store, especially a used one, it’s very likely I’ll come out with 2-3 books. I don’t always read the books right away, but I always buy because I’m interested in what’s inside, not just the covers.
The best places to go for information depends on what you’re looking for. It’s different if you want to think about perennial human problems (philosophy, religion, political philosophy), current events (history, politics, social trends, etc), technology, or other areas that I won’t cover.
Each type of media has its own strengths and weaknesses. Since I’m not writing a book here, I’ll just give some opinions of mine.
Paper has the best readability – it’s so easy to get comfortable with a book or magazine. And if you need to refer to multiple sources, it’s easy to spread a whole bunch of paper, books, or magazines out – OTOH, most of us can’t afford the equivalent number of monitors! Reading a PDF or HTML file on a monitor just isn’t the same.
When I was first programming, I did a lot of print outs. Now I hardly ever print anything out. Partly the programs are longer – long program listings aren’t so useful. Mainly I think it’s because modern development tools are so useful – like searching in all the files, showing the difference between two versions, going to function definitions with a click, etc. Larger monitors also help (if I were back on a 640×480 screen, I’d be printing code a lot more).
One problem with print is it quickly runs into space limitations. Complete documentation of any reasonably complex system (say the .NET framework) will take thousands of pages. Magazines have severe space limits. Space is much cheaper online, so it’s possible to have much more information available online.
But it’s harder to read long pieces online. The blog format especially isn’t well suited to long pieces – they need to be broken up. For example, I’m breaking this topic up into at least three posts, so that each post will be somewhat manageable in length. There is a plus to splitting posts up – it means I can write a bit at a time, which is easier to do. This is a return to the earlier days of publishing, the time of Dickens, when novels were published serially (often while the author was still working on them!) – we’re also seeing this in computer books (e.g. Manning’s Early Access program).
Another online (and computer) advantage is search. It can be so much faster to find something online than to sort through the indexes of a bunch of books – and you still might not find it. Of course, some sites now let you search books online (Amazon, O’Reilly’s Safari, etc), so this can be the best of both worlds – use search to find the books, then read them in paper form.
A web disadvantage is lack of permanence – links grow old, web sites disappear, and not everything is cached. Paper is still readable hundreds of years later (well, if it’s quality acid-free, not pulp).
As I’ve mentioned, I’m a book person. I don’t care much for audio (except for music) or video (OK, I do love Looney Tunes), especially for technical topics. Print material lets me go at my own speed, skip the stuff I know, and concentrate on what is most important. I’m sure audio (including podcasts) and video have their uses.
My recommended reading for this post is Evelyn Waugh’s Scoop and Dorothy Sayer’s Murder Must Advertise. Scoop is totally hilarious and still relevant to today’s journalism (especially considering the number of journalists who have been caught making up stories). Murder Must Advertise is also an enjoyable read, especially the parts about the advertising agency (the copywriters don’t like the art people – kind of like Gary Mitchell and his art director – but they all hate the customer).
Comments 4/19/2011: I haven’t finished this series yet. I still hope to, once I complete about 100 other tasks first. I still don’t have an eBook reader or tablet; I’m likely to go for PDF or print for technical books, and stay with print for non-technical. You lose a lot of rights (such as re-sale and easy lending) when you start using Kindle, Apple ebooks, etc.
November 15, 2007 No Comments
What am I talking about? Using a portable 2.5″ SATA drive to store and run virtual machine images. I use the free VMWare Server, and install it on the main machines I use. I keep virtual machine images on my portable drive, and run them from the drive, instead of copying (which takes a long time).
The portable drive lets me take my PC environment with me – for example, it allows me to work at home, and conveniently keep my work life separate from my personal life.
For automation projects, of course, PC virtual machines have their limitations – they don’t simulate robots, PLC’s, or a lot of other hardware. But they can still be very useful. For example, I have a project with two similar, but different (for different hardware) sets of COM objects. They can’t both be installed at the same time (they don’t meet the requirements for side by side or registration free installation), but I can’t compile in Visual Studio unless they are registered. But I can have two different virtual machines, each one with the appropriate set of objects registered.
What I did was combine a Hitachi 7K200 7200 RPM 16M byte buffer SATA 2.5″ hard drive – currently the fastest laptop drive (see StorageReview; I bought the 100G model from ZipZoomFly) – with an eSATA/USB case (I bought a Coolmax from Fry’s for $10 after MIR; other choices include Vantec). The USB connection provides power without wall warts and the ability to connect on most PC’s, the eSATA gives high speed (about 2-3x faster than my Acomdata 80G 2.5″ drive).
I’ve found that 3.5″ external HDD’s are fine for backup, but simply too big and clumsy to move around frequently (especially with the typical external power supply and cables). Microdrives are too slow for running a VM (I’ve tried on a 8G Memorex USB drive). 1.8″ drives are interesting and would probably work OK for VM’s, but are pricier and slower than my approach. Flash memory is great for transferring data, but I don’t trust it for running VM images, because of its write cycle limit (and flash isn’t so great at small, random writes).
Standard 2.5″ portable USB drives work OK for running VM images (I’ve used the Acomdata quite a bit), but the eSATA approach gives you roughly desktop HDD speed for a bit more money. I haven’t seen any commercial eSATA 2.5″ portable drives; for that matter, I don’t know of any commercial 7200 RPM 2.5″ USB drives, so right now you have to build your own – but it’s extremely easy.
- On Windows, I highly recommend formatting the drive using NTFS; otherwise you’ll have to have VMWare split the drive image to deal with FAT’s 2G maximum file size. All NTFS drives (USB, Firewire, eSATA) have to be stopped (e.g. via the Safely Remove Hardware icon) before they can be safely removed.
- Virtual machines love memory – on the host PC, 1G is about the minimum, 2G is much better.
- The Coolmax case quality isn’t as good as my commercial Acomdata; the drive can wiggle around. I also like Acomdata’s brushed aluminum finish better.
- 7200RPM 2.5″ drives probably won’t work with just 1 USB port for power. A USB port can provide up to 2.5W (500 mA at 5V), and the 7200RPM drives typically specify at least that much power just to operate, let alone spin up. Coolmax provided a single USB-to-power-plug cable and a dual USB (one plug for extra power, one for power & communications) to 5-pin cable. I plug in the eSATA cable first, and then the USB cable, so the drive always starts up in eSATA mode.
- The Hitachi drive runs a little warm, but not hot. The Acomdata drive doesn’t even get warm.
- The eSATA removal procedure is a little more involved than on USB drives. One approach is to use the device manager (devmgmt.msc). Sometimes SATA drives (including internal ones) show up in the Windows “Safely Remove Hardware” icon. If they don’t, the Hotswap! utility can sometimes help (it does not support all SATA controllers).
- Frequently adding/removing eSATA drives may cause problems with software activation schemes. I suspect it may be more of a problem when Windows thinks it’s a permanent (instead of removable) drive.
- The eSATA/SATA standards have their quirks. Many early add-on cards do not support hot swap; probably most early motherboards do not either. SATA is point to point, so if your motherboard comes with only 2 SATA connectors, and you’ve already got two HDD’s, you’re going to have to add an eSATA card. SATA and eSATA connectors are slightly different. The eSATA cable is better than IDE, but not as flexible (or long) as USB, although I found a 6 ft model which seems more flexible. eSATA cables seem to be $10 or more.
- Since one PC needed more SATA ports, I bought a Vantec UGT-ST300 eSATA PCI card for $30. It was the cheapest card that got good reviews and guaranteed hot swap support. I haven’t had any problems with it yet.
- On the other computer, I have a bracket that converts a SATA cable (from the motherboard) to an eSATA connector.
- Since eSATA is not common, it’s best to use only a few computers regularly with the portable HDD. And it’s convenient to pre-stage the cables – have an eSATA cable and USB power cable already plugged into the computers you use the most (and carry an extra dual USB cable with the drive for other computers).
Note 4/19/2011: right now I’m basically running everything on my laptop. If I wanted killer portable storage now, I’d look at combining a superfast SSD in a 1.8″ case with a USB 3.0 interface (if such a thing exists). Maybe a USB 3.0 memory stick would work well, too. But don’t forget to back up — flash mass storage can fail catastrophically too.
November 15, 2007 No Comments