Open Source - AUUG'99
Embedded UNIX: Booting PicoBSD on a PC104 board.

Enno Davids - Metva P/L.


One of the benefits of the open source movement is access to the source code, which allows us to reshape the code to our specific needs. This paper examines some thoughts on how to reshape an open source UNIX to make it at home in an environment quite unlike the desktop or server environments for which it was originally written.

The target audience

Over the years I've done a fair bit of embedded systems work on a variety of target platforms and with a variety of development platforms, with a variety of support tools and under a variety of conditions. One of the more common features of the last few projects has been an inclination to re-create a POSIX programming environment around the often rudimentary facilities offered by the various monitors, real-time operating systems and often bare metal of the embedded environment. The desire for such an environment is obvious. It allows us to leverage our experience in more conventional computing environments and, if sufficient fidelity is offered, to carry code directly from one environment to the other. Even if we can only crudely mimic the POSIX facilities, the benefits of abstracting a chunk of RAM into files in a managed RAMdisk environment shouldn't be underestimated. Similarly, abstracting devices and communications offers us similar advantages. Ultimately, significant productivity gains can be had from these abstractions and the others which go to making an embedded POSIX environment.

The cost of such abstraction is equally clear and it is of course performance. The smallest embedded systems run on chips with barely a few kbytes of memory, at clock and instruction rates which will not set the world on fire. These environments are obviously unlikely to support such abstractions and indeed it is reasonable to expect them to be programmed in hand-tuned assembler for some time to come in the name of cost reduction and ease of manufacture.

Just as clearly though, the CPU market now has players which offer brute force to spare and lots of address space while still achieving modest pricing. We have only to look at the current generation of game consoles to see exactly how cheaply an R4400 based RISC system can be built and sold and what level of performance it can achieve. Similarly, the components which make up desktop PCs are now commodity items, available in such large quantities and at such low prices that they make attractive foundations on which to build modestly priced, high performance hardware. The modern PC is itself a case in point, and as we look further afield we find scores of board level and system level components available off the shelf for the would-be systems integrator.

When we come to running such systems though the situation can be somewhat more bleak. Many organizations will buy or build quite capable hardware only to blink when it comes time to build the software environment. Many projects hence do without much more than a boot monitor to load an application and hand over control to it.

Some more enlightened souls either write or acquire a simple tasking package and then organize their product as a set of tasks. This typically allows the implementor to step back from juggling interrupt loops to perform vital work and allows the product to be modularized to a greater degree. The use of tasks often allows work to be effortlessly parallelized which might otherwise be forced to execute serially. Tasking also brings with it inter-process communications, and often such projects are only a few steps away from deadlock.

Some step a bit further along the spectrum and acquire a 'real-time operating system' or RTOS. RTOSes are chiefly characterized by guaranteeing service times from an incoming interrupt to the launch of the interrupt service routine which handles it. This makes these products suitable for use in life support systems, vehicle control functions and indeed any problem domain where unquantifiable, non-deterministic delays are unacceptable. Most customers though do NOT have problems that require such rigor and really only choose to use an RTOS because of a perception that the product has been developed more thoroughly and tested more rigorously. This perception may be bolstered by a product claiming conformance to any of the number of RTOS standards which are around.

Almost all of these solutions tend to suffer from various shortcomings of their interfaces. Often described as simplifying assumptions, these shortcomings often result in increased effort in both developing new code and merely porting existing code. For systems which are developed once and essentially abandoned to the market this may be tolerable, but for systems which expect an ongoing maintenance/upgrade cycle, or indeed for more generic platforms used for multiple products, this means increased development effort and increased time to market. The solution is usually to wrap the offending interfaces in a new layer designed to hide them under a more standard API, such as the combined ANSI and POSIX API offered by most modern operating systems.

Given that few of us are actually implementing life support functions so much as automating light switches, alarm systems or building "Talkie Toaster", we can tolerate occasional unexpected, or perhaps even regular, millisecond delays. This in turn means we could be choosing a cheaper alternative operating system which emphasizes qualities such as POSIX conformance at the cost of interrupt determinism. The drive to ever faster commodity hardware means such millisecond timing variabilities are also decreasing in magnitude, as raw CPU speed sees to it that the time kernels spend in critical sections with interrupts blocked is similarly decreasing. Indeed, many of the timing concerns of the past are being subsumed on modern hardware by delay loops designed to slow things back down to usable speeds. As John Mashey noted at AUUG years back, "the bandwidth of a human being remains essentially unchanged".

The open source UNIXes then represent excellent choices as operating systems, if for no other reason than that their open source nature means they can be retargeted to new hardware environments with minimal effort. In return we gain a feature rich operating environment with filesystems, device abstractions, good support for lots of hardware, communications, networking, servers for most popular protocols and a myriad of other features.

The nature of modern PC and PC-derived hardware also means that the effort of supporting such OSes in an embedded environment is smaller than it has ever been before. The tyranny of hardware compatibility under which Microsoft's DOS and Windows have dwelt for almost two decades means that any x86 target system, if not a complete clone of the desktop PC hardware environment, will have departed from that model in as few areas as possible so as not to impact the ability to run DOS and Windows. So, what sorts of hardware can or should we contemplate?

PC hardware

PC hardware, as anyone who has used it for more than a brief period will be aware, is a very mixed bag indeed. Stories abound about PCs which reset when the speaker sounds the BEL character. (Inductive coupling of the speaker waveform into adjacent clock or reset circuits.) Many of these basic engineering problems can be ameliorated by simply not choosing lowest cost clone hardware, and indeed this desire to buy properly engineered, designed and constructed hardware from reputable dealers is often the genesis of the brand loyalty some people have for big name PC brands such as Gateway, Dell and Compaq. Part of paying for a name is a belief that, to protect that name, the brand will not allow shoddy design to escape from the factory floor.

Even when the basic hardware is well designed, some inherent problems still lurk in the PC. Once again the experience of juggling interrupt allocation, I/O space allocation, memory use and DMA channels is one scarcely any PC owner will fail to recognize. Mac owners may be luckier here. It's an odd sort of world really where the lowliest user is expected to know the arcana of DMA channel use, cascaded interrupt controllers and the implications of asking devices to share interrupt lines. Plug & play is of course intended to be the panacea here, freeing the user from the drudgery of trying to derive a workable configuration, although the true savior may yet turn out to be USB and IEEE 1394 Firewire, which between them promise to drag most of the PC's peripherals off the backplane (and its low level hardware concerns) altogether.

Even more appalling is that when you've constructed a working PC, it may well still be a poor match to the problem at hand. The roots of the modern PC in IBM's desire to emulate the success of the Apple II with its built in BASIC, rudimentary operating system (or rudimentary program loader as some would have it), expansion slot architecture and the hardware abstraction of a predefined BIOS layer haunt us to this day. The BIOS died most quickly, with people going beyond the official entry points it offered to jump into various routines which merely happened to be there when they looked, essentially freezing the BIOS in place. What had been the principal tool intended to allow hardware abstraction, now became one of the most powerful drivers to making all the hardware look and be programmed the same. The BIOS also quickly became little more than a boot support device as the drivers it offered no longer fit into an increasingly complex DOS or Windows or indeed never even came close to being of use to UNIX.

The expansion slot architecture is the other area where PCs face regular problems. Those problems are periodically jettisoned through the practice of inventing new buses, but the problems still remain, principally that different boards from different vendors make differing assumptions which may occasionally not be compatible with each other. (Hands up everyone who remembers having to have the memory expansion in a special slot of an Apple II so that the CP/M card could live next to it, while the floppy controller needed to be at the far end of the bus, and so on. New cards were inserted in vacant slots and negotiated around the slots until a configuration could be found where as much function remained available as possible.) Even worse, the whole reason for defining new buses is usually to address the fundamental speed issues an old bus will have, typically having been designed with earlier generations of CPU in mind. This of course is true for most computer systems, with busses which were adequate for one generation inevitably becoming the principal bottleneck in the bandwidth of later generations of CPUs.

A bigger problem with PC hardware is often merely that it was designed for a specific niche, that of the desktop computer system, and in an embedded systems environment it is simply a poor fit to the problem. A good example here is simply the mechanical arrangement of PC hardware. The arrangement of expansion boards rising perpendicularly from the motherboard is a great design for a fixed desktop or floor standing device, but it is ill suited to less static environments like vehicles and vibrating equipment.

The other problem with plain PC hardware is the assumption of a video monitor and a keyboard. Many embedded systems are built with neither. This can cause trouble, especially with older BIOSes which may refuse to function without the relevant hardware present. Plugs which emulate a keyboard can solve that, but it is a rather extreme solution to supply a graphics adaptor merely so that a BIOS can think it has somewhere to send output.

More seriously, a BIOS which sends output to such a device in the hope of gleaning a response from the user can essentially lock up a system. Even if it's not locking things up completely, the BIOS may exhibit other behaviors which are not completely desired, such as delays at startup to allow a user to reconfigure the non-volatile memory. In the absence of a display, a keyboard or a user, this is just a waste of boot time. Later we'll discuss making the boot process faster but, as a rule, we want to eliminate extraneous delays wherever possible.

Well, it's obvious that commodity PC hardware has a lot of problems when deployed as an average embedded system. The reason why we persevere with it of course is precisely that it is commodity hardware and, frankly, it's high performance commodity hardware for quite modest prices. Whether the price advantage is worth the extra effort in modifying its behavior to an acceptable state, and whether the shortcomings of the physical hardware can be accommodated, is a judgement call for each situation.

PC/104 hardware

Well, having dwelt on the shortcomings of PC hardware we can now look at the most likely alternative. As little as a decade ago most embedded systems were constructed around a variety of busses and board form factors. With the rise of the PC though, a desire to leverage PC hardware into the embedded marketplace led to some more standardization taking place. Today this has led to most embedded systems being built either around passive PC (i.e. ISA or EISA) bus backplanes or PC/104.

PC/104 is essentially a reworking of the classic PC bus into a physical form which allows more compact packaging. Each board in a PC/104 system is built to a form factor of 90mm x 96mm. CPU cards are often larger in return for bundling a larger feature set on a single card.

PC/104 cards are 'self-stacking', which is to say they interconnect through connectors oriented above and below them in a stack of modules, without the need for a backplane or card cage. Mechanically, mounting points at each corner of the fairly small card allow secure mounting to be arranged, increasing the ruggedness of systems built this way.

Historically, the small card size meant that each card tended to host only a single function, but the appearance of ever more highly integrated components means that it is now no longer unusual to find multiple functions hosted on a single card. As noted, CPU cards also tend to be larger and in return integrate many functions of the modern PC. Often the favoured form factor is a footprint compatible with that of a 5 1/4" disk drive, and cards of this size commonly carry CPU, RAM, ROM, Flash disk, IDE and/or SCSI, SVGA with drivers for both video displays and LCD panels, ethernet (often both 10Mb/s and 100Mb/s) and the obligatory slew of lesser I/O functions such as keyboard, serial, parallel and nowadays USB. Often manufacturers offer these products with selected functions not populated, as cost reduced options. Additionally, reflecting their embedded systems focus, these cards tend to host extra features such as RS-485, watchdog and bus timers, power fail detection and similar abilities which can significantly enhance the reliability of a system deployed to non-office environments.

These single board cards often carry all the function required for many projects. CPUs range from the 386SX up to high speed Pentium III systems. The embedded peripherals combine to reduce the need for expansion cards, which in turn increases system reliability. Often the presence of the PC/104 bus is purely on the off chance that a more exotic feature is needed (say a PCMCIA slot?). Nevertheless, PC/104 is a good fallback for some of the cheaper cards. Recognizing the desire for higher throughput, PC/104plus defines an extra connector which is essentially the PCI bus of a PC repackaged in the stackable manner of the standard PC/104 card. This allows a PC/104plus card to essentially 'choose' which connector to use as its source of signalling.

As was noted, some CPU cards now also offer USB connection, which opens yet another interface on which peripheral function can be hosted. The ability of USB to extend some distance, allowing CPU and peripheral to be spaced apart, will also be valuable in some environments.

All in all, the PC/104 form factor allows rugged, flexible hardware to be built which still offers the ability to leverage the economies of scale of the commodity PC. The small footprint and low(er) power requirements of these cards mean they are more readily accommodated than their bigger brothers. Similarly the extra embedded feature set means they offer better service than generic hardware does. The only real downside to the whole equation is that of cost. Commodity PC hardware enjoys economies of scale unlike almost any other product of the modern age. That makes commodity PCs cheap, as it does their peripherals. But it's worth shopping around, as single board systems can be had for as little as US$200 for an entry level 386SX system, with Pentiums starting at around US$450. USB may mean that cheap PC peripherals can be used in place of more expensive PC/104 bus hosted hardware.

Other hardware considerations

First and foremost, in the embedded environment it's worth considering some of the things which make the environment special. Let's look at some of these.

Some of the hardware offers FLASH disk as a storage medium. For those unfamiliar with this, it is a form of semiconductor memory based around the FLASH EPROM technologies. As such it's reasonably fast by the standards of disks, although paradoxically slow by the standards of other memory technologies. Its attractiveness in this environment is that it is both non-volatile (without batteries or other power sources) and, unlike conventional EPROM, both electrically programmable and erasable.

FLASH disk may be organized in one of a number of ways. Most common until recently was to simply make some FLASH memory available in the address space of the CPU. A file structure is then organized over it in much the same way a RAM based disk is built. The next approach is to build a controller chip which interfaces between the FLASH memory chips and the rest of the system to ease the task of controlling these memories. This is attractive as the timing requirements of these chips can otherwise unnecessarily impact the performance of the memory system. The interface presented to the rest of the system can be any of a number of interfaces including memory bus, I/O mapped, PCMCIA or, most recently, SCSI and IDE. These latter two make the FLASH memory essentially behave as a 'real' disk drive would and hence offer almost zero cost integration of a no-moving-parts storage subsystem. Lastly, a hybrid of these two approaches now exists in the form of a product called DiskOnChip(tm). As the name might imply, it is a single chip FLASH disk and controller which fits into a standard EPROM socket. Driving the DiskOnChip requires a special driver, as the sometimes large FLASH drive needs to be paged into and out of the EPROM socket's smaller address space. The manufacturer has in fact seen to it that an open source driver for FreeBSD is available for their product, although there are mixed reports about its quality.

A quick word on FLASH disk size is in order here. The smallest FLASH memory based storage was typically on the order of 1/2Mb to 2Mb. This represents a few chips and is usually sufficient for a BIOS or such. IDE/SCSI based FLASH disks tend to start around 1Mb and typically can offer up to a few hundred Mb of storage. DiskOnChip similarly starts with modest offerings and approaches the triple figure Mb range. This raises the question of adequacy. Few of us would contemplate configuring a desktop machine with as little as 100Mb of storage today. It is clear however that the Gbs of storage we routinely buy now are not necessary either. As it happens, you can cut down the size of a modern UNIX considerably. PicoBSD is just such a cut down version and its design goal is to squeeze onto a single floppy disk. That means virtually any size FLASH disk will do. In fact, the larger ones allow some of the more enthusiastic compressions of PicoBSD to be abandoned in favour of greater flexibility and easier upgrades.

Having touched on custom drivers for the FLASH memory, it's also worth noting that all those extra great features we mentioned, like watchdogs and the like, will need some kernel support too. Happily the hardware is usually built in such a way that with no driver support they don't activate, and hence the watchdog doesn't continually reset the system unless a driver enables it. Still, to use the function, it must typically be programmed and then enabled. Similar observations hold for everything from RS-485 to more specialized hardware. At least one offering has an integrated video frame grabber which is unlikely to enjoy kernel support off the shelf.

It's worth noting also that boards with exotic features, and especially exotic features for which you have no use, may be contributing to the higher cost of PC/104 hardware. It's worth sitting down and doing a quick requirements analysis. Think carefully about CPU options, the peripherals you really need and the performance needs of each of them. As noted, PC/104 hardware is more expensive than commodity PC hardware, but carefully considering project needs can easily control the cost of the hardware.

Finally, there is the issue of custom hardware. Occasionally it's necessary to add to off-the-shelf hardware to extend its capabilities into some new area. One of the most common of these with UNIX is to ensure that system power can only be removed in a controlled manner. In general this means that when the user initiates power off, the power is maintained for a short time by some extra switching under the control of the OS while it flushes disk buffers and the like, and only then is power down allowed to complete. This seldom takes much more than an extra relay and a programmable output line to control it. Indeed many embedded systems boards have just such unassigned I/O lines and one can often be pressed into service for this purpose. Once again, a driver needs to be available.
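The power-down sequence just described can be sketched as a small shell script. The function bodies here are placeholders: on real hardware, hold_power and drop_power would poke whatever driver controls the spare I/O line and relay, and flush_storage would call sync and unmount any writable media.

```shell
#!/bin/sh
# Sketch of a controlled power-down sequence (illustrative only; the
# real relay and flush operations are board- and driver-specific).

state="running"
relay="off"

hold_power()    { relay="latched"; }    # user hit the switch: OS keeps power up
flush_storage() { state="flushed"; }    # stand-in for sync(2) / unmounting disks
drop_power()    { relay="released"; }   # only now may the supply collapse

hold_power      # latch the relay the moment power-off is requested
flush_storage   # complete all pending writes while power is held
drop_power      # finally let the hardware power down
```

The key design point is the ordering: the relay must be latched before any flushing begins, and released only after the last write completes.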

UNIX for the embedded environment

UNIX, at first blush, seems an unlikely OS to host an embedded system. Indeed, in the past UNIX was often unfavourably compared to PC based products and the words bloated and resource hungry were bandied about. Compared to DOS this is and was true. Of course since that time Windows has arrived in various flavours and now features a disk footprint of truly gargantuan proportions, needs lots of CPU to keep it running and prodigious amounts of main memory too.

In the case of Windows, when the call came to make it into something for cable TV set top boxes, palmtop PDAs and the like, the solution to paring down Windows to the bare essentials was to produce a completely new version called Windows CE (Compact Edition or Consumer Electronics, depending on who you care to listen to). To make a UNIX fit we will do many of the same things to it that were done to WinCE. We will consider a few more, too, that WinCE may not have implemented.

First and foremost we want to ruggedize the OS as much as we can. That means eliminating places where UNIX (say) panics and waits for a key press to reboot. That's bad behaviour, as it assumes the user wants to know there's a computer in there somewhere. Most of us would be just as happy never to know this. It's worth noting that the Windows blue screen falls into this category too, of course.

One of UNIX's biggest concerns is the filesystem. UNIX makes good use of buffering, both at the disk block level and at the inode level. A consequence of this is that sudden loss of disk power can mean that the filesystem is corrupted. One approach then is to stretch the window at power down to allow pending writes to complete. It is important though that the system doing this is made foolproof, as no one wants to find that a crash in the power down circuit kept power on all weekend until the car battery (say) was completely flattened.

An alternative approach is to build a kernel which uses a locally constructed RAM disk. Booting can then be done from a write protected FLASH disk, which can also be mounted read-only at run time for access to system utilities and the like. When power goes off, no state needs to be saved for the RAM disk or the FLASH disk. If your application needs to log data, some provision will need to be made, but now the window required is much shorter.
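As a sketch, that mount layout might look like this in an fstab. The device names and sizes are assumptions for illustration only; real entries depend on the board, the flash controller and the BSD release in use:

```
# Read-only FLASH holding the kernel and system utilities (device assumed):
/dev/wd0a    /flash    ufs    ro           1 1
# Memory filesystem as the writable root; its contents vanish at power off,
# so nothing needs flushing when power is removed:
md0          /         mfs    rw,-s=4096   0 0
```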

Fast power down, we can thus see, is fairly critical. Constraining the work to be done at power down is the obvious approach to making this manageable. Fast startup is an equally important goal. The ways to achieve this are all a bit more prosaic and will be discussed in the next section. All in all though, the behaviour of the system when power is applied and when power is removed is of critical importance. No one really wants hardware that takes minutes to get going and extended grace periods to shut down. In the face of power being removed a grace period may be impossible, so resiliency and good power fail detection are vital.

So there are some fairly basic things we want to do to speed up these processes. First off, remove device drivers from the kernel which aren't used. There's no point carrying around lots of drivers for devices which will never be available in your environment. Removing such drivers makes the kernel smaller, and hence faster to load, and reduces the amount of time at boot the kernel spends probing for hardware during the autoconfig process.
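In a FreeBSD-derived system this pruning happens in the kernel config file: keep lines only for the hardware the board actually has. A hypothetical trimmed config might contain little more than the following (device names illustrative, syntax abbreviated; consult the config documentation for your release):

```
machine    "i386"
cpu        "I386_CPU"        # a 386SX board needs nothing newer
ident      EMBEDDED
maxusers   5
# Only the devices soldered onto the CPU card. Every driver omitted here
# shrinks the kernel image and removes a probe from the autoconfig pass.
device     sio0              # on-board serial port
device     ed0               # on-board ethernet
device     wdc0              # IDE interface fronting the FLASH disk
```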

In fact, that theme may be worth continuing by reducing the options that real devices have available to them. Once again there's little point in allowing completely flexible allocation of serial ports when we know the runtime environment will always have 4 serial ports by virtue of these being a permanent fixture of the CPU card. Relatively modest changes to the driver sources can immeasurably speed the process of bringing the system up.

Next we consider the most basic way possible to speed up system boot. In the startup scripts in /etc, don't run services you don't need! The list of candidates here is big. Some of the more obvious choices are sendmail, snmpd, anything X11, syslogd, nfsd/biod and samba. There's little point running services you'll never use. syslog, for instance, is better configured to report to a remote system than to try to accumulate log entries on the embedded device itself. inetd is an interesting question. It's quite possible that there's nothing in inetd which you actually NEED. telnet support is likely the closest, and often the non-inetd based ssh is preferred now. If this is so, inetd itself can be jettisoned. If not, it can be pared down to remove its own extraneous services.
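On FreeBSD-style systems much of this pruning is a matter of rc.conf settings. A hypothetical minimal set might look like the following; the variable names follow the usual rc.conf convention, but check your release's defaults before relying on them:

```
sendmail_enable="NO"      # no mail service on an appliance
nfs_server_enable="NO"    # no NFS either
inetd_enable="NO"         # jettison inetd entirely if nothing needs it
syslogd_enable="NO"       # or leave syslogd on, pointed at a remote
                          # loghost via an /etc/syslog.conf line such as:
                          #     *.*    @loghost
```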

Lastly, for those services which are still being started, it's worth deciding when they need to be started. Often they only need to be available sometime after startup, as opposed to in the strict order they are brought up in the scripts. A strategy to make boot appear quicker then is to run the actual application we're building before some of the support services. It can then, in the time honoured tradition, draw up a splash screen while the rest of the system comes up. So called "out of order" startup can be exploited fairly well to make things seem like they're up and running moments before they might actually be functional. The all important impression the user has is that the device is running, not that he or she is still waiting for a slow device to spin up and deliver its first message.
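The reordering idea can be sketched as a startup script that launches the user-visible application before the support daemons. The service names and the launch function are illustrative placeholders, not anything from a real rc system:

```shell
#!/bin/sh
# Sketch of "out of order" startup: the application (and its splash
# screen) comes up first, the support services trickle in afterwards.

boot_order=""
launch() {
    # Stand-in for forking a real daemon; we just record the order.
    boot_order="$boot_order $1"
}

launch application    # splash screen is on screen immediately...
launch syslogd        # ...while the support services are still starting
launch inetd
```

In a real rc script the application would be backgrounded early and the remaining daemons started behind it; the user sees a live device long before everything is functional.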


PicoBSD

PicoBSD is one of a number of small UNIXes which now dot the free UNIX landscape; others include the Linux Router Project, Trinux and rtbt. Like its brethren, the principal aim of PicoBSD is to produce a UNIX which can boot from a single floppy. This is merely so that single floppy router boxes and the like can be constructed without needing to dedicate a lot of hardware to the task.

To get things down to a single floppy though, lots of things need to be done and a few compromises made. Firstly, many of the modifications mooted above are made, removing all the extraneous services which simply have no place in a configuration such as this. In some cases services which are desired are replaced by lighter weight alternatives. So, for instance, if you want a Web server, a small single threaded, low performance one is provided rather than a full blown Apache with SSL support. It's enough to serve Web pages but it won't form the basis of a high performance server farm.

To further complicate things, many of the utility programs have been crunched. Crunching is a throwback to the good old days. Before shared libraries came along, some vendors of commercial UNIXes were shipping systems where significant bits of the system were composed of only a few binaries. Sun and the suntools interface which preceded their X11 offering spring to mind here. To ameliorate the effect of the large suntools libraries being linked to even the smallest application, the apps were all bundled as one or two binaries which did everything. Fewer binaries made code sharing in the running system more effective and meant that 40 utilities didn't each have a complete copy of the 1Mb libraries linked to them.

PicoBSD takes the same approach by providing a utility called crunch which allows arbitrary programs to be packaged as a single binary which determines what it's doing from the name it's called with. This allows the C library to be shared on disk, as well as some bits of common programming like command line processing, signal handling and the like.
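The name-based dispatch a crunched binary performs can be illustrated with a shell sketch. Crunch itself generates C, and the applet names here are invented, but the mechanism is the same: inspect argv[0], strip the directory, and branch to the matching tool:

```shell
#!/bin/sh
# Illustration of argv[0] dispatch, the trick at the heart of a crunched
# binary (and of its Linux cousin, busybox). The "applets" are invented.

dispatch() {
    name=$(basename "$1")   # the name the binary was invoked under
    shift
    case "$name" in
        echo_upper) printf '%s\n' "$*" | tr a-z A-Z ;;
        echo_lower) printf '%s\n' "$*" | tr A-Z a-z ;;
        *)          echo "unknown applet: $name" ;;
    esac
}

# In a real crunched system, /bin/echo_upper would be a hard link to the
# one big binary; invoking it through that name selects the behaviour.
result=$(dispatch /usr/bin/echo_upper hello)
```

On disk this is why the utilities become hard links to a single file: one copy of the code, many names, each name selecting one applet.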

Once running, PicoBSD is configured to run from a RAM disk based root file system. In the environment it comes from, this allows the boot floppy to remain write protected and unused in the floppy drive. In our environment it allows us to boot from FLASH disk to RAM based root and still keep the FLASH as a read-only device as we discussed earlier.

One of the downsides of PicoBSD is that the whole process of building the environment is a little convoluted. It also tends to lag the stable releases of FreeBSD a little. Fortunately much of the awkwardness is due to the need to squeeze every single last drop of storage out of the system to get it onto a floppy. As we saw earlier, the smallest FLASH disks are already larger than a single floppy and sizes of 10Mb and 20Mb can be had for modest prices. This makes the whole process of building PicoBSD a lot easier, and configuring and maintaining it easier still.

And that's about all there is to say really. The combination of a relatively standard PC hardware environment, albeit in a much different form factor and with a much different design philosophy, linked with PicoBSD's broad support of different hardware types and open source kernel, means that few problems are encountered and those that are can be quickly dealt with.


PicoBSD slips right on to most modern embedded systems boards without any great pain. Once there it provides a feature rich environment ideal for building web/net enabled appliances: devices with access to lots of modern OS facilities, a portable, cheap POSIX programming environment and, with its open source, the ability to tweak the system in any direction that is appropriate to the problems at hand.

The biggest challenge faced when designing such a system is over-specifying the run environment and paying a price penalty. If you're building a one-off (like the car based MP3 player that sparked my interest in this) it's not important. But if you plan to build 100's or 1000's, or perhaps even just 10's, of these units, then such over-design will only serve to needlessly inflate your costs and your unit price. PicoBSD contributes to the trap by being able to drive most of the extraneous hardware you might be inclined to add. But it is a sad state of affairs when the most egregious shortcoming we can find in a product is its ability to handle more hardware than we need! Would that it were more often the case.

PicoBSD, and the ease with which it is deployed to an environment quite unlike the desktop PC it was constructed for, is often the most anticlimactic part of these whole systems.


Resources

Rather than try to cite the various people whose work I've leveraged, I've opted here merely to list some of the resources which are useful when you're pursuing this idea or similar ones. Good luck.

PC/104 standards

FreeBSD project

PicoBSD project


Technologic Systems. (makers of the cheapest 386SX board I've yet seen)

The Linux Router Project


The up-to-date version of this paper
