Author: markboyd
The end of the Pelco era
In 2013 and 2014 I was part of a research effort at Pelco that identified the drivers of new surveillance technology. We wrote two white papers during that time, with me as author and editor.
The research lead, Sam Grigorian, demonstrated the importance of asynchronous cores in video processing. Co-researcher Fida Almasri identified the necessity of accelerating Pelco's production cycle in order to stay competitive, and I argued that cell phone processors could let us leapfrog to a new generation of surveillance systems and serve as our platform for advanced surveillance hardware.
Our white papers explained that we were at a very interesting and challenging point in Moore's Law.
In tech forums I often see people write about the end of Moore's Law as if it meant the end of innovation, or claim that the limit of miniaturization is coming "soon".
The truth is that Moore's Law is slowing down. Processing cores have only just reached the 7 nm node of semiconductor device fabrication. 5 nm semiconductors have been constructed and demonstrated in the lab, and we may see them in commercial production around 2020.
The Lawrence Berkeley National Lab demonstrated last year that it is possible to create a transistor with a working 1-nanometer gate.
If researchers solve the problems of quantum tunneling and of removing heat from processor cores, then we could see miniaturization down to 3 nm, and perhaps even 1 nm, before we hit the final wall of physics in this technology.
After that, we will need to advance in other technological directions to keep increasing the number of switching devices per unit volume.
However, the end of shrinking transistors doesn’t mean the end of innovation.
Quite the opposite. As we pointed out in our white paper, technological innovation is accelerating, driven primarily by cheap and easy-to-use processing cores. The rate of change has accelerated to the point where it is difficult or impossible to predict new technology further than 18 months out. And the rate of technological innovation still seems to be increasing.
It has become much easier to solve an advanced technological problem by throwing a handful of transistor-dense processor cores at it. But this leads to a different problem: multiple processors work best with parallel processing, and our technology is still in the infancy of parallel programming.
Development environments for parallel programming are sorely lacking; they are neither easy nor intuitive for human programmers. We are still coming to terms with the problems identified by Amdahl's Law – a formula that describes how much a workload can be sped up by parallelization, and that puts a hard limit on how many parallel processors can usefully be applied to a given task.
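For anyone who hasn't run into it, Amdahl's Law is easy to state: if a fraction p of a workload can be parallelized across n processors, the overall speedup is S(n) = 1 / ((1 - p) + p/n). A quick sketch (my numbers here, not the white papers') shows how brutally the serial remainder caps the gains:

```cpp
#include <iostream>

// Amdahl's Law: speedup for n processors when a fraction p of the
// work parallelizes. As n grows, speedup approaches 1 / (1 - p).
double amdahl_speedup(double p, double n) {
    return 1.0 / ((1.0 - p) + p / n);
}

int main() {
    const double p = 0.95; // assume 95% of the workload parallelizes
    for (int n : {1, 2, 8, 64, 1024}) {
        std::cout << n << " cores -> " << amdahl_speedup(p, n) << "x\n";
    }
    // Even 1024 cores can't push past 1 / (1 - 0.95) = 20x.
}
```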
Even the latest and best Integrated Development Environments designed for parallel development require software "tweaking" and other kludges by the software engineer. As these development environments mature, we should see even more gains in innovation.
Back to Pelco
These were the challenges for Pelco in 2013. Accelerated innovation demanded a shorter production cycle. Advanced surveillance systems required embracing asynchronous processing cores. And video over IP was transitioning to advanced networks using better hardware and compression methods.
Pelco once dominated the security marketplace using advanced technology in a market that was traditionally low-tech. So it is somewhat ironic that Pelco couldn’t or wouldn’t keep up with technological advancement in the security industry.
Faced with the urgency to dramatically upgrade their technology and production processes to keep up in this accelerating environment of innovation – they blinked.
And Pelco’s parent company at that point decided to let Pelco’s era come to an end.
At one time Pelco employed over 2,000 people. There are under 1,000 people at Pelco now. Another 200 will be laid off from Pelco during 2017. And I’ve been told that the Clovis manufacturing facilities will be closed at the end of 2017. Any employees not working for Schneider Electric – Pelco’s owner – face an uncertain future. There are still a few product lines that are or will be transferred to other facilities.
The white papers that I helped create may have been one of the last straws in this decision.
Makers, you can develop an embedded system like an engineer.
Semiconductor manufacturers will often showcase their microcontrollers in an engineering evaluation board. These boards are loaded with features and peripherals that are useful in various ways, and because of this they are often relatively expensive. They are usually out of the reach of the Maker community, and even some of the smaller design houses may avoid them due to cost.
But the Maker community has become home to several low-cost microcontroller boards that could be used in an engineering development environment. At the very least, they allow hardware engineers, or even hardware technicians, to set up a low-cost minimum platform that can be turned over to firmware programmers for initial development and testing.
Even better, these boards can be coupled with low cost engineering-level hardware and software to provide a learning platform for those Makers who are interested in breaking into embedded systems programming.
The two main boards used by the Maker community are the Arduino and the Arduino-compatible Microchip / Digilent chipKIT. Both are designed to be easy to program using the standard Arduino Integrated Development Environment (IDE). But nothing requires the Arduino IDE; instead, relatively inexpensive engineering-level microcontroller design tools can be applied to these Maker boards.
Almost 15 years ago I designed a product around the Hitachi SuperH SH-2 microcontroller. It was a wonderful device and very powerful for its time. It was the engine that drove the Sega Saturn game console. These microcontrollers required the use of an In-Circuit Emulator (ICE) and a bond-out processor that cost tens of thousands of dollars. The ICE emulated every part of the microcontroller, and was popped into the prototype PCB for the software developer. The ICE allowed the programmer to see quite a lot of what was happening inside the SuperH microcontroller – including individual registers in near real-time. Once the firmware was developed and troubleshot, it was packaged and sent to a parts distributor, who programmed the SH-2s for us before our production line added them to the circuit board. Firmware changes were made with the ICE, and added to our Production pipeline through change orders.
Today, in-circuit emulation has been mostly replaced by on-chip debugging hardware. The Joint Test Action Group (JTAG) industry association implemented standards for on-chip testing using a dedicated debug port on microcontrollers. These debug test access ports are used with a protocol that gives the engineer access to the device's logic levels and registers.
Different manufacturers implement JTAG somewhat differently, and many manufacturers (Microchip, for example) have their own device serial debugging ports. This has led to the use of JTAG programmers for programming the printed circuit board in an Automated Test Equipment environment, and to manufacturer specific In-Circuit Debuggers (ICD) used for developing an embedded system.
ICE is expensive, and with newer microcontrollers it is unnecessary. JTAG programmers are useful, but not as useful as an ICD. And In-Circuit Debuggers have become cheap enough for the Maker Community to purchase.
Atmel makes the AVR microcontroller used on the Arduino. And while a Maker could use the Arduino IDE to create something new, there is really nothing stopping anyone from using Atmel's engineering design system. The Atmel IDE, Atmel Studio 7, can be downloaded for free. Atmel recommends several different ICDs, from low cost to professional level. The mid-range and low-cost ICDs are under $60; the professional-level Atmel development ICD is about $660. All are useful for product development. The differences between the debuggers are mostly in how fast the ICD can communicate with the PC, which affects programming speed and how many registers the IDE can display, and how quickly.
Microchip (which recently purchased Atmel) also has an IDE that you can download, called MPLAB. The most current version as of this writing is MPLAB X, and there is a version of MPLAB that can be used in the cloud. Microchip has a lot of third-party ICDs and even more simple serial device programmers; it is quite possible to purchase an ICD for under $40 for Microchip PIC microcontrollers. The "official" Microchip ICD is the ICD 3, a USB-connected ICD that is widely used for developing embedded systems. You can purchase the ICD 3 for about $280, including shipping, from Microchip Direct.
So a member of the Maker community could purchase an Arduino Uno, download the Arduino Software IDE, and start programming almost immediately.
Or a Maker could get an Arduino Uno, download Atmel Studio 7, check out the ATmega328P datasheet (PDF), purchase one of the recommended ICDs, and then learn how to interface it with the Arduino's microcontroller. A Maker could do the same thing with Microchip's chipKIT using MPLAB and a PIC-compatible ICD – or the Microchip ICD 3.
No, this won't be as simple as the Arduino environment normally used by Makers, but it is an excellent way for anyone interested to start learning the basics of real embedded systems design and programming.
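To give a taste of what that looks like, here is a minimal bare-metal "blink" for the Uno's ATmega328P, written against avr-libc rather than the Arduino libraries. Treat it as a sketch of the idea; your Atmel Studio project setup will differ.

```cpp
// Bare-metal blink for the Arduino Uno's ATmega328P.
// The Uno's pin 13 LED is PB5 on the chip; no Arduino libraries needed.
#define F_CPU 16000000UL   // the Uno's 16 MHz crystal, needed by <util/delay.h>
#include <avr/io.h>
#include <util/delay.h>

int main() {
    DDRB |= (1 << DDB5);            // configure PB5 as an output
    while (true) {
        PORTB ^= (1 << PORTB5);     // toggle the LED
        _delay_ms(500);             // busy-wait half a second
    }
}
```

Flash that through an ICD instead of the bootloader and you can set breakpoints, single-step, and watch PORTB change – which is the whole point of the exercise.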
And finally, Arduino and chipKIT are not the only players on the market. There are other boards, some compatible with the Arduino IDE and some not. And each manufacturer either has its own IDE or is compatible with an open source IDE.
Smart Aquaponics – defining the new system
So in my last post, I started describing the basics of controlling the flow of water in a complex aquaponics system. Here are some other constraints that I will be adding to my design.
- The design must get the best “bang” for the buck.
For the most part, I’m going to try to use inexpensive materials that are easy for me to acquire locally. For example, I don’t mind putting extra effort into cleaning up used IBCs if doing so saves a lot of money. As always, I’ll try to gauge my time against the cost of the item.
- The design must be power efficient
To accomplish this, I'll be using a single, very efficient, high-flow water pump that is always on. While it is possible to use several smaller pumps to more easily move water around the system, that isn't very efficient and creates multiple points of failure. It is also possible to cycle the power on a pump to save energy, but doing so shortens the pump's lifespan, and more reliable pumps and "soft start" pump controllers are more expensive. There may come a time when I rethink cycling the power on pumps, but first I'll design a system that doesn't require it.
- Shorter runs of PVC pipe
Water will come into my system from the city supply, and it will either leave my aquaponics system through the city sewer or be diverted to my lawn, flower beds, and garden. I would prefer any water leaving the system go to my lawn or garden, but there may be times when I will want the used water to go directly to city sewage treatment. I see this as several different PVC subsystems: 1) water delivery from the city to the system; 2) water flushed from the system to the city sewer or to storage for the lawn and garden; 3) water delivered from the sump to the grow beds and fish tanks; and 4) water returning from the grow beds and fish tanks to the sump.
I may make use of bridge siphons to connect parts of the system, and I will use flow restrictors for things like hanging towers of strawberries.
- Parts are easy to repair or replace
This means that anywhere I add a valve, pump, or other PVC connection, it should be installed in a manner that makes it easy to disconnect and replace with a new part. Valves should be easily repaired with standard tools.
- System must “fail safe”
If the power goes out or the pump fails, the system should not flood at any point. Grow beds should drain completely, and fish tanks should stay full. If any valve fails in the open or closed position, the system must alarm, and the system must not lose water. There should be hooks built in for monitoring water levels and flow rate (see the sketch after this list).
- System is easy to expand
If I want to add grow beds, fish tanks or sump tanks, it should only be a matter of space and a little extra plumbing.
- System is easy to clean
Solid waste should settle in an easily collected place. A standard pond vacuum cleaner should be able to reach all parts of the fish tanks. Grow beds should be easy to clean. This isn't about biofiltration of waste – not yet.
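To make the fail-safe constraint concrete, here is a rough Arduino-style watchdog sketch of the monitoring hooks mentioned above. The pin assignments, float switches, and flow threshold are placeholders I invented for illustration, not a finished design:

```cpp
// Hypothetical fail-safe watchdog: float switches on the sump and a
// fish tank, plus a pulse-output flow meter on the pump line.
const int FLOW_PIN      = 2;  // flow meter pulses (interrupt-capable pin)
const int SUMP_LOW_PIN  = 4;  // float switch: LOW when the sump is dangerously low
const int TANK_FULL_PIN = 5;  // float switch: LOW when a fish tank overflows
const int ALARM_PIN     = 8;  // buzzer / indicator

volatile unsigned long flowPulses = 0;
void countPulse() { flowPulses++; }

void setup() {
    pinMode(SUMP_LOW_PIN, INPUT_PULLUP);
    pinMode(TANK_FULL_PIN, INPUT_PULLUP);
    pinMode(ALARM_PIN, OUTPUT);
    attachInterrupt(digitalPinToInterrupt(FLOW_PIN), countPulse, RISING);
}

void loop() {
    // Count flow pulses over one second; near-zero flow means a dead
    // pump or a stuck valve somewhere upstream.
    noInterrupts(); flowPulses = 0; interrupts();
    delay(1000);
    noInterrupts(); unsigned long pulses = flowPulses; interrupts();

    bool noFlow   = (pulses < 5);                       // invented threshold
    bool sumpLow  = (digitalRead(SUMP_LOW_PIN) == LOW);
    bool overflow = (digitalRead(TANK_FULL_PIN) == LOW);
    digitalWrite(ALARM_PIN, (noFlow || sumpLow || overflow) ? HIGH : LOW);
}
```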
Constraints are key to system design
I've read that any problem is easy to solve if there are no constraints to worry about. Sending a man to the Moon is relatively cheap and easy if you're not worried about food, water, air, or an eventual return to Earth. It is constraints like these that define the challenge, and stating them clearly is a good way to start.
What must be understood is that at the start of a project, we really do not know what all the constraints might be. So I’ll be prepared to update this list.
Smart Aquaponics Control System – An Internet of Things project
In my last post, I said that I want to learn about the Internet of Things and that I want a project that helps me control an aquaponics system. These are both very vague goals; I need to narrow them down, and then figure out how to accomplish them. It seems a bit silly to use Project Management Processes and Product-Oriented Processes to define and control my project when I am simultaneously the management, the employee, and the customer. However, it is still useful to define the project in these terms in order to create a timeline, benchmarks, and goals.
To start this I’ll first define what sort of aquaponics system I want to control, and speak about the constraints I must follow. As an aside, I’ve heard it said in many different contexts that any problem is easily solved if you do not have to worry about constraints. Even sending humans to Mars becomes much easier if you don’t have to worry about them arriving alive – or ever departing again! A round trip to Mars with living humans is considered a “constraint”.
What is an aquaponics system?
Aquaculture is the raising of fish (or other aquatic animals) for food. Such a system requires food and clean water, and the wastes from the system must be filtered out of it and disposed of, or else the system will become deadly to its animals. Hydroponics is the growing of plants in water for food. This sort of system requires that the water be fertilized and filtered, and watched for build-ups of waste and metals, or else the plants will die.
Aquaponics is a combination of these two methods of growing food, such that the wastes of the aquatic animals are used as fertilizer for the plants, and the plants clean the water for the animals. There is also a layer of nitrifying bacteria in the system that break down ammonia and nitrites into nitrates, which feed the plants.
In a perfect aquaponics system, you put fish food into the system, and you get meat and plants out of the system. Having said that, no aquaponics system is perfect, and a certain amount of tweaking the system is necessary.
To the right is a good example of a very simple aquaponics system. This system uses a pump to move water from the fish tank to a filter tank. The filter tank is filled with media that houses nitrification bacteria. This describes nothing more than a standard aquarium, and such a setup can be very stable, needing little in the way of maintenance if the ratio of fish to bacteria stays stable, and if distilled water is added to the system to compensate for evaporation losses.
The addition of plants to such a system runs into an immediate problem: some plants cannot survive having their roots constantly submerged in water. They need access to air. One way of providing this is to drain the upper media tank several times a day. A good mechanical way of doing that is to use a self-starting siphon called a "Bell siphon".
In short, a Bell Siphon will start to siphon water from the upper tank when the water reaches a certain depth. It will stop pulling water out of the upper tank when the water level drops below the rim of the Bell dome, which is typically near the bottom of the container. If the container is sloped toward the siphon, then most of the container will be completely drained.
This simple siphon works fine for a small aquaponics system. Something like the one shown to the right. This is a 150 gallon tank with an 80 gallon media bed. At the time of this photo, I was experimenting with Bell siphons, and had not added media to this project.
But what happens when you have multiple fish tanks? What happens when you have multiple media beds for your plants? What do you do when you want to add other aquatic animals besides just fish? Like freshwater prawns or mussels?
As I continued to play with aquaponics systems, the next thing I knew I had over a thousand gallons of fish tanks, and I was trying to find ways to circulate their water, and also to be able to drain them when I needed to clean them. This is what I came up with.
This system doesn’t even count the plant grow beds. It also uses a common sump tank as a shortcut. It did have the advantage of allowing me to drain a tank if I needed to do so, but with the disadvantage of requiring me to re-route the circulation if I wanted to cut off a center tank.
I actually built this system, and documented it on my permaculture website, with a video of it in action. You can see this here.
And I ran into a problem. Okay, one of the problems I had is that I had to move, and was required to disassemble the system.
But what I consider to be the bigger problem is how the system worked together. If I added grow beds to the system, I had to be concerned with my sump “bottoming out”.
The grow beds that I want to use are two feet wide, four feet long, and 14 inches deep, with 12 inches of media. This translates into 60 gallons of media and water together, or about 25 gallons of water per grow bed. That means I can have no more than 7 grow beds before my sump goes bone dry – or fewer than 6 grow beds if I want to keep my sump pump underwater.
This also doesn’t account for water diverted to strawberry towers, lost in evaporation, or used in other ways. I would like the ability to use 10 grow beds or more, and use the sump itself (with a screen around the pump) for growing catfish. So a certain amount of water must remain in it.
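For anyone checking my arithmetic, here is the water budget worked out. The 40% void fraction and the 175 gallons of usable sump water are my working assumptions, back-figured to match the numbers above:

```cpp
#include <iostream>

int main() {
    // Flood-and-drain water budget for one grow bed.
    const double bedCubicFeet  = 2.0 * 4.0 * 1.0;  // 2 ft x 4 ft x 12 in of media
    const double gallonsPerFt3 = 7.48;             // ~60 gallons of media and water
    const double voidFraction  = 0.40;             // assumed water held between media
    const double usableSumpGal = 175.0;            // assumed sump water available

    double waterPerBed = bedCubicFeet * gallonsPerFt3 * voidFraction; // ~24-25 gal
    std::cout << "Water per flooded bed: " << waterPerBed << " gal\n";
    std::cout << "Beds before the sump runs dry: "
              << static_cast<int>(usableSumpGal / waterPerBed) << "\n";
}
```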
This is what I came up with… multiple tanks, multiple grow beds. Of course, the sump would have to be larger, just in case all the Bell siphons achieved synchronicity and allowed all the grow beds to fill up at once. There would have to be enough water for everything.
Each of those red handles represents a valve that can be turned on or off by hand, and that I would be expected to fiddle with to get the right rate of flow for the system. And while this is quite acceptable with a smaller system, it becomes a problem with larger systems.
Luckily valves and pumps can be controlled through electronics. I wanted an inexpensive method that I could create, and that I could offer to the Maker and Aquaponics community. I’ll talk about that next as I more formally describe the scope of this project.
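As a preview, here is roughly what replacing one of those red handles looks like in Arduino-style code, assuming a cheap motorized ball valve switched through a relay. The pin number and timing are placeholders; the real control logic is what the coming posts are about:

```cpp
// One red handle, automated: a motorized ball valve powered through a relay.
const int VALVE_RELAY_PIN = 7;   // placeholder pin driving the relay module

void setValve(bool open) {
    digitalWrite(VALVE_RELAY_PIN, open ? HIGH : LOW);
}

void setup() {
    pinMode(VALVE_RELAY_PIN, OUTPUT);
    setValve(false);  // start closed so a reboot can't flood anything
}

void loop() {
    setValve(true);   // divert flow for ten seconds...
    delay(10000);
    setValve(false);  // ...then shut it again
    delay(10000);
}
```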
The Internet of Things – a new project
It’s been a week since I attended the UBM West conference in Anaheim, and I’ve acquired a new fascination with the Internet of Things. In fact, I’ve been more than a little fixated on it since coming home.
I have been having discussions with various friends in engineering about the IoT, and I can see that there is a lot of potential here, but also there is a lot of confusion, and companies working at cross purposes. The consumer marketplace is quickly adopting many different IoT ideas, while manufacturing seems to be talking a great game, but is adopting the Industrial Internet of Things (IIoT) slowly – if at all.
One of my software friends is pushing the Internet of Data – the idea that the device doesn’t matter, only the data. As part of the setup, the device explains to the network what it is, what it can do, and what data it can provide. It is up to the network to determine how to use this information.
Unfortunately, there is no industry standard for data sharing in the IoT. There is no real standardized Human/Machine Interface (HMI) to allow a human to interact with a local network of IoT devices to do useful things. The IFTTT web service is a useful idea of how such an HMI might work, but I'm having a hard time visualizing how it would work in a local network setting – where I could control the devices in my home through my tablet computer, or where a manufacturer could use it to choreograph several manufacturing robots to work together.
I’ve learned that there is a lot of pre-existing infrastructure and technologies for IoT, and there are several services that hook into it. The problem is getting them all to work smoothly together.
But reading about it all is completely different from actually doing it. And to learn, it is often best to create a project to accomplish something, and then learn from that.
So I’m starting a project.
If you know anything about me, you may know that I'm interested in permaculture. Permaculture is the creation of sustainable agricultural systems by studying and simulating the ecological systems found in nature. It is the idea of working with nature, fitting our society together with ecology. There are principles of design that guide permaculture, and they are useful – but they all boil down to a simple idea: human observation and reaction, as a form of feedback, are key to an agricultural system.
I’ve been writing about urban farming and aquaponics in my blog, “Fresno Backyard Harvest” for a few years now. Aquaponics has been of great interest to me as part of urban farming and permaculture. And I would like to better automate my urban farm, with controls that monitor plant moisture, measure water quality for my fish, and adjust water levels when necessary. I’d also like alarms when something goes wrong, and I would like to see how much of my urban farm I can measure.
For my first project, I intend to create something that will control a small aquaponics setup. I will use it to monitor various sensors, and report back to me. It should also control water flow.
Right now, I'm in the planning stage. I'll keep a running journal of how I decide what to use and what my designs look like. I'll also keep a GitHub repository for each project.
You’ll be able to find my progress in the “Projects” category of my blog. Starting with this one. As my projects split off, they’ll be given their own categories.
Industrial Internet of Things, Disruptive Manufacturing at UBM West Exhibition & Conference
I’ve enjoyed engineering journals like EDN, EE Times and embedded.com for years. So when I found out that the UBM “mothership” was hosting a conference in Anaheim, I was eager to attend.
I haven’t always had the freedom to attend any conference or exposition that I wanted. Employers are understandably reluctant to send engineers to expensive conferences. But these events “feed” the engineer with new ideas. They let us know what is possible. And of course they allow us to network and connect.
I would recommend to any engineer that he or she attend at least one or two conventions every year. If your employer will not send you, then plan on taking some vacation time and pay your own way. The benefits are too great.
I spent two days there, and according to my pedometer I walked 14 miles through the aisles. Besides aching feet, here's the biggest thing I took away from the event.
The Internet of Things has become the Industrial Internet of Things. Manufacturers have embraced IoT to connect and control their production lines. But it seems to me that the manufacturing model still relies on the Ford assembly-line model and the power that comes with IIoT is underused.
I think that part of this is due to manufacturing inertia. And this is understandable: if a company is still making a profit, why change? But that inertia is coming to an end, as narrowing profit margins force manufacturers to change in order to remain competitive. We seem to be seeing such changes already.
I also think that the Maker community is becoming more powerful and is approaching the point where the community itself will become a “disruptive technology” for traditional manufacturers. Makers have moved beyond “arts and crafts” and are connecting with smaller designers and specialized manufacturers to create small runs of innovative technology products. This distributed manufacturing model is coordinated “in the cloud” and is capable of creating printed circuit boards at a cost of tens of dollars per square inch for limited runs.
As distributed manufacturers start to coordinate with each other I foresee a time when they will create virtual vertical integration, and the manufacturers themselves will be mapped into resilient supply chains. Information technology will be used to control individual machines, reconfigure production lines, and select supply chains based upon availability and costs. Manufacturing will just be another part of that supply chain, easily and quickly replaced by another manufacturer in the case of failure.
This has been called "Industry 4.0", and it was also discussed at UBM Anaheim. One of the six design principles of Industry 4.0 is "interoperability", and this seems to be another sticking point in manufacturing. As of right now, there is no single standard for the Internet of Things – or rather, there are several standards that don't play well together. And since smart factories depend on the IoT, it is imperative that these things can talk to the rest of the virtual factory.
Customer design goes in, product comes out. We’ve got a long way to go – but I think that time is measured in years, not decades. Our consumer products will someday follow the “on demand printing” model in that the product will reside in a catalog until ordered and produced.
Understanding the Challenge of Communicating Underwater: Underwater Acoustic Networks
In 2004 the National Oceanic and Atmospheric Administration returned to explore the remains of the RMS Titanic. They used Hercules and Argus, two remotely operated vehicles (ROVs) connected to the research ship by an expensive communications tether long enough to reach 2.4 miles down to the sunken ship. No other communication method exists that can give scientists the real-time video required to pilot the ROVs.
Fast communication technologies are relatively common above water. Modern Wi-Fi routers allow multi-gigabit connection rates, with IEEE 802.11ac offering transfer rates of up to 6.77 Gbit/s, depending on hardware.
But communicating underwater is difficult. Water strongly attenuates radio frequency (RF) signals, limiting them to very short distances or very low transmission speeds. Optical communications are also limited, due to high attenuation of the signal, node-alignment problems, and obstruction of the signal by marine objects and turbidity.
Simple long-distance underwater communication has always been available. Since the dawn of sailing, humans have heard mysterious underwater sounds that turned out to be marine animals communicating with each other over long distances.
The underwater acoustic waveguide called the Deep Sound Channel (DSC) was discovered independently by Russian and American scientists. Sounds made at the depth of the DSC travel great distances. This was used during World War II to create a long-range position-fixing system called the "SOFAR bomb", used by ships or downed pilots to covertly report their location.
Modern acoustic communication takes advantage of several properties of water – especially deep water. But it is restricted to very slow connection rates, and it is vulnerable to signal-path reflections, Doppler spread, long signal travel times, and quickly changing rates of signal attenuation due to water chemistry and depth. Under excellent conditions with very short signal paths, acoustic connection rates of up to 150 kbit/s are possible, but reliable connection rates of 40 kbit/s are more realistic.
An example of a modern acoustic modem is the S2CM series by EvoLogics. One modem in this series has a connection rate of about 60 kbit/s, with a range of 300 meters and a depth rating of 200 meters.
One of the bigger challenges of underwater acoustic communication is a quickly changing physical environment – especially if at least one transceiver node is in motion. Constantly changing underwater terrain, depth and water composition make it difficult for Underwater Acoustic Networks (UAN) to keep up. Lack of standardization between manufacturers also creates interoperability issues between different underwater acoustic modems.
IEEE Communications Magazine recently published a paper [1] detailing the use of software-defined radio as a way of coping with the challenges of experimenting in an uncertain physical environment. According to the paper, University at Buffalo engineers are adapting Software Defined Radio (SDR) principles to underwater acoustic modems, proposing a Software Defined Acoustic Modem (SDAM) that can be easily reconfigured, allowing better testing of hardware and software for underwater acoustic networks.
The Buffalo engineering team combined commercial off-the-shelf components and open source GNU Radio SDR software to create a highly configurable prototype SDAM that is half the cost of less agile commercially available alternatives. The SDAM prototype supports IPv4 and IPv6 Internet communications protocols.
The engineering team created experiments to demonstrate the real-time adaptation capabilities of the SDAM prototype in a real-world environment (a shallow lake) and in the lab (water tanks). Interference signals were injected during these experiments, and the signal-to-interference-plus-noise ratio (SINR) was measured and compared to clear-channel transmissions. The adaptive transmission methods that were tested were shown to reduce the bit error rate, compared to standard fixed transmission rates.
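To make "adaptive transmission" concrete, here is a toy sketch of the general principle: measure the channel's SINR and fall back to a slower, more robust waveform as conditions degrade. This is my illustration of the concept, not the paper's algorithm, and the thresholds and rates are invented:

```cpp
#include <iostream>
#include <string>

struct Mode { std::string name; double minSinrDb; int bitsPerSecond; };

// Fastest first; the last entry is the noisy-channel fallback.
const Mode kModes[] = {
    {"high-rate PSK",  15.0, 40000},
    {"mid-rate PSK",    8.0, 10000},
    {"robust FSK",   -100.0,  1000},
};

const Mode& pickMode(double measuredSinrDb) {
    for (const Mode& m : kModes)
        if (measuredSinrDb >= m.minSinrDb) return m;
    return kModes[2];
}

int main() {
    for (double sinr : {20.0, 10.0, 2.0})
        std::cout << sinr << " dB SINR -> " << pickMode(sinr).name << "\n";
}
```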
While this paper did not demonstrate a dramatic improvement in underwater data transmission speeds, it did demonstrate a way of creating a Software Defined Acoustic Modem that is less expensive and vastly more configurable than commercial models. This prototype is an excellent jumping-off platform for the further development of inexpensive SDAM-based communications, which should allow other engineers to experiment with other adaptive underwater transmission schemes, and even with multi-path networks.
And this is where this paper becomes important. Software defined radio has caught fire in the engineering and hobbyist community, and is making great strides because of this. By piggybacking off of SDR, SDAM is making experimental underwater acoustic communication available to a wide audience.
[1] E. Demirors, G. Sklivanitis, T. Melodia, S. N. Batalama, and D. A. Pados, "Software-Defined Underwater Acoustic Networks: Toward a High-Rate Real-Time Reconfigurable Modem," IEEE Communications Magazine, vol. 53, no. 11, pp. 64-71, November 2015, doi: 10.1109/MCOM.2015.7321973. URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7321973&isnumber=7321957
NTSC / PAL Field Counter Test Fixture
Due to corporate confidentiality, I’m not allowed to show much of what I’ve worked on. The Field Counter is one of the exceptions to this rule since it was never designed for sale.
This simple Microchip PIC-based test fixture was used to insert a time stamp on every field of an NTSC / PAL video signal. Since interlaced video frames consist of even and odd fields, it is convenient to label each field with a unique identifier.
In my original code I concentrated on the NTSC signal. I determined the field by sending the FIELD pulse from the LM1881 sync separator to the Microchip PIC, which gave me an even/odd field determination. With a 20 MHz clock giving me 5 million PIC instruction cycles per second, I had a very comfortable margin of program cycles to play with.
The processor would determine the next number in the sequential time stamp and send it to the NEC uPD6450 character display chip. When the next field was displayed, the character display would overlay the time stamp.
The software would then “count up” to 99 hours, 59 minutes, 59 seconds, and then roll over. I figured that 4 days of continuous testing would be overkill, and didn’t code the ability to count any higher.
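The rollover logic is simple enough to show. The original was PIC firmware; this is the same idea re-expressed in C++, counting a nominal 60 fields per second (NTSC actually delivers 59.94, which a real fixture would need to account for):

```cpp
// Field-stamp counter: advance on every FIELD pulse, roll over at 99:59:59.
struct TimeStamp {
    unsigned fields = 0;               // fields within the current second
    unsigned secs = 0, mins = 0, hours = 0;

    void onFieldPulse() {
        if (++fields < 60) return;     // ~60 fields per second (nominal)
        fields = 0;
        if (++secs < 60) return;
        secs = 0;
        if (++mins < 60) return;
        mins = 0;
        if (++hours == 100) hours = 0; // roll over after 99:59:59
    }
};

int main() {
    TimeStamp ts;
    for (long i = 0; i < 100L * 60 * 60 * 60; ++i)
        ts.onFieldPulse();             // simulate 100 hours of fields
    // ts is now back at 00:00:00, exactly as the fixture rolled over
}
```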
The original field counter saw quite a lot of use. It was used to figure out when our equipment was digitizing video out of order, getting fields or frames “confused” and exactly how.
It was also used to test the promises of products sold by original equipment manufacturers who wanted our company to resell their products. This is how we learned that one company was only digitizing the odd fields of incoming video, and how another company averaged the odd and even fields together in a very lossy manner.
This tool was so useful that I built version 2.0 – the hardware I'm showing here. This newer version included a serial EEPROM and an RS-232 transceiver, giving it the ability to communicate with a PC using HyperTerminal and to add text to the video screen indicating the test being performed.
I also included menu navigation buttons, and some hardware hooks to allow this tester to “watch” an SVGA signal and send a trigger output on any given SVGA scan line. The buttons were very useful, but when I determined that video was moving away from SVGA I never bothered to implement that function.
The field counter became very useful in other ways. I would hand these devices to our interns and have them modified for various purposes. There was even a prototyping area on the circuit board that allowed for easy modifications.
One modification by a very bright intern allowed the device to control the pan/tilt/zoom functions of a camera using a standard television remote control. This was a proof of concept: a cheap controller could be used to control a security system. Unfortunately, Marketing didn't approve.
Another modification allowed the platform to monitor a thermocouple while controlling the heat in a standard $30 toaster oven. This would allow the toaster oven to be used as an electronics reflow oven. I have used this oven to successfully reflow a couple of boards that I’ve made.
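Here is roughly what that amounts to, as an Arduino-style sketch with simple bang-bang control. Everything in it – the pins, the 5 mV per degree C thermocouple amplifier, and the two-step profile – is my assumption for illustration, not the intern's actual firmware:

```cpp
// Toaster-oven reflow: walk a two-step temperature profile with on/off control.
const int HEATER_PIN = 9;    // relay or SSR driving the heating elements
const int THERMO_PIN = A0;   // analog output of a thermocouple amplifier

struct Step { double targetC; unsigned long holdMs; };
const Step kProfile[] = { {150, 90000}, {230, 30000} };  // soak, then reflow peak

double readThermocoupleC() {
    double volts = analogRead(THERMO_PIN) * (5.0 / 1023.0);
    return volts / 0.005;    // assumes a 5 mV per degree C amplifier
}

void setup() {
    pinMode(HEATER_PIN, OUTPUT);
    for (const Step& s : kProfile) {
        unsigned long start = millis();
        while (millis() - start < s.holdMs) {
            // Simple hysteresis: heat when 2 degrees below target, off above it.
            double t = readThermocoupleC();
            if (t < s.targetC - 2.0)     digitalWrite(HEATER_PIN, HIGH);
            else if (t > s.targetC)      digitalWrite(HEATER_PIN, LOW);
        }
    }
    digitalWrite(HEATER_PIN, LOW);  // profile done; open the door to cool
}

void loop() {}
```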
Unfortunately, since this was simply test equipment, I was never good about source control for the firmware. Over the years several interns have hacked up the source code, and I no longer have my original. You can see that in the code here.
I still have one of these units to play with. I may re-write the code in C++ and upgrade my reflow oven. But there’s a lot of other things on my plate these days, and this is not a high priority.