Pine64 DIY NAS Build

The Backstory

In late 2006, I began to notice that I was accumulating a lot of data on a lot of different hard drives. After a catastrophic disk failure on one of my systems, I decided that it was time to invest in a RAID-based storage solution. Being me, the decision came down to buy vs. build. My free time was about to evaporate, as we were preparing for the birth of our first child, so I opted to buy, hoping that the expandable RAID solution of the Infrant ReadyNAS NV+ would save me build, configuration and management time. It did! It was a great solution.

The old ReadyNAS NV+ in its original environment

Fast forward through three complete (and a fourth, nearly complete) disk set replacements, a power supply replacement in 2011 and a fan replacement in early 2020. I was running out of space, and as far as I could determine, the device didn’t officially support individual drives larger than 2TB. I had also been receiving numerous SMART errors from one of the disks for years, even after replacing that very disk a couple of times. To me, this suggested an issue with the NAS itself, as the disks always passed SMART tests on another machine.

My Use Case

In anticipation of a new NAS solution, I got to thinking about how I wanted all of this to work together. What was my workflow? Did cloud storage finally make financial sense? Were SSDs cost-effective yet? I had lots of questions. During this planning phase, I recalled that my friend, Ted, had told me about a good deal on cloud storage with OwnCube. They are a hosted NextCloud provider. They sometimes offer a one-time flat fee special for unlimited storage so long as you have at least their basic plan. This made economic sense for me, and I bit the bullet. This would solve the storage capacity problem. I also wanted to run a home media server (not from the NAS, necessarily), which would require relatively low latency access to my files. I decided that having a mirror of what was in my cloud storage would be a good solution to that problem.

The basic usage model for my data storage solution, including cloud and NAS storage

The need for a simple local mirror of my data drastically changed my requirements for a NAS. I didn’t need to worry about protecting against multiple disk failures. Really, I wasn’t worried about disk failures at all, as it would only impact the local copy of my files. This eliminated the need for hot swap bays, and any RAID levels beyond RAID-1 (disk mirroring). In fact, I ended up not using RAID at all, but we’ll get to that later.

My requirements/wishlist ended up looking something like this:

  • Relatively small form factor and quiet
  • RAID-1 or equivalent storage protection (would like ZFS)
  • Simple configuration
  • Hot swap bays not necessary
  • NextCloud sync
  • Prioritize read speed over write speed
  • Support NFS

So, my solution was fairly simple. I’d build a 2 disk mirror that would run the NextCloud client and sync all my files on a regular basis. How Hard Could it Be?™

In my research, I came across a variety of solutions that would meet my needs. So, I took the opportunity to try something new. Pine64 builds small, powerful ARM-based SBCs. I thought their RockPro64 would be a perfect fit for my new NAS. They even sell a NAS-friendly enclosure made specifically for the RockPro64. It helped sway my decision that Ted built the same NAS a few months prior and reported satisfactory results. So, I loaded up my cart with the recommended parts and waited for my kit to arrive from China.

The Build

Putting the physical pieces together is quite well-documented, so I won’t detail it here. Assembly time was probably under 30 min., though I didn’t time it. When I received the SBC, case, and other parts, I hadn’t received my hard drives yet, so I set up the system with a pair of older 2TB disks that were spares for or rejects from my old NAS. It wasn’t important that they be reliable since I really just wanted to test out a few things with them. In fact, I ignored the errors that I saw, chalking them up to bad disks (spoiler: it ended up being a problem with the SATA controller).

Since I only needed to purchase 3 disks (the third for backup/hot spare), I thought I should buy higher capacity disks. After uploading all my data to my NextCloud instance, it appeared that I needed a tad more than 4TB of storage. Thanks to my friend Brian’s blog post, I decided to try shucking some low-cost 8TB external drives. They happened to be on sale for less than buying a bare internal drive of the same capacity. In my research, I came across a technology called SMR (shingled magnetic recording). The big drawback with SMR drives is that write speeds are particularly slow, though read speeds are not affected. For my use case, this would not be an issue, so I didn’t have a problem if my shucked drives turned out to use SMR technology. Once I received my drives, I followed instructions from a few videos specific to my model and found that, as far as I could tell, the drives inside were not SMR drives.

According to the Internet, Seagate drives larger than 4TB are non-SMR

I removed the 2TB drives and swapped in the 8TB drives. This took several minutes since the NAS case doesn’t have hot-swap drive bays. The first thing I noticed on power-up was that the new drives were much quieter than the old ones. Pleased with that discovery, I set about configuring the software.


My goal was to keep the system setup as simple and “stock” as possible. In the event of a disastrous failure, I wanted to be able to get back up and running as quickly as possible.

Selecting an OS

The first OS/distribution I tried was OpenMediaVault. I downloaded the RockPro64 bootable image and wrote it to a microSD card using Etcher. I really wanted this to “Just Work”, because it would save me a lot of time and effort. The only thing really missing in my mind was ZFS support. Since this NAS would simply be a mirror of my cloud storage, I was willing to sacrifice ZFS support for ease of administration. Unfortunately, I couldn’t get OMV to work. It booted up just fine, but I was unable to get the web-based configuration tool to successfully store a static IP address. I decided to try setting static IP information from the console, but that failed as well. They have a convenient command line utility for configuration, but for some reason, it failed and exited when I tried to set the IP address. So, rather than try to debug this, I decided to move on and try the Ubuntu 18.04-based ayufan image. I was able to get this up and running with no issues.

Building the ZFS Kernel Module

The wrinkle here is that Ubuntu 18.04 doesn’t have a ZFS module by default. So, I’d need to build and install the kernel module myself. I know I said I wanted to keep things stock and simple, but this is the one area where I strayed from that principle. And, having done it now, it wasn’t so bad. To get started, Ted pointed me to this forum post, noting that step 3 contained the key elements for getting ZFS built. The steps I completed were as follows (I have not gone back and rebuilt from scratch after I got this working, so I can’t promise this is 100% accurate):

  • install python 2 (may not be necessary)
  • install dkms
  • install spl
  • sudo apt-get source zfs-linux
  • sudo apt install zfs-dkms
  • sudo dkms build zfs-linux/0.7.5 (this succeeded, and then sudo modprobe zfs succeeded)
  • lsmod shows it installed
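
Roughly, those steps as shell commands (reconstructed from my notes, so treat this as a sketch; the package names and the ZFS version came from the forum post and may differ on your image):

```shell
# Reconstructed from memory -- package names and the ZFS version (0.7.5)
# came from the forum post and may not match your image exactly.
sudo apt-get install python dkms spl   # python 2 may not be necessary
sudo apt-get source zfs-linux          # fetch the ZFS source package
sudo apt-get install zfs-dkms          # register the ZFS module with dkms
sudo dkms build zfs-linux/0.7.5        # build against the running kernel
sudo modprobe zfs                      # load the freshly built module
lsmod | grep zfs                       # confirm it's loaded
```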

Creating a ZFS Mirror Pool

Now that I had a working ZFS kernel module installed, I did a bit of research on ZFS pools. I discovered that I didn’t need to use RAIDZ2, I could simply create a ZFS mirror pool. I followed these instructions to create my mirror pool. The big advantage I saw in the simple mirror pool was that maintaining it appeared very easy. Disks can be added and removed from the pool. It’s even possible to add a disk, allow it to make a copy of the pool data (resilvering in ZFS parlance), and then remove the disk. This could be used as a makeshift backup in a pinch.
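
For reference, creating the pool and doing the attach/resilver/detach dance looks roughly like this (pool name and device paths are examples; stable /dev/disk/by-id paths are safer in practice):

```shell
# Create a two-disk mirror (pool name and devices are examples)
sudo zpool create tank mirror /dev/sda /dev/sdb

# Makeshift backup: attach a third disk, let it resilver, then detach it
sudo zpool attach tank /dev/sda /dev/sdc   # add a third side to the mirror
sudo zpool status tank                     # watch until resilvering completes
sudo zpool detach tank /dev/sdc            # the detached disk holds a full copy
```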

Fan Control

The fan that came from Pine64 connected directly to the SBC via a 2-pin connector, and is PWM-controlled. I tried setting the fan manually according to this forum post. However, my fan didn’t move. After an embarrassingly long time of trying to get this to work, I finally took the case off to check that the connector was properly attached and realized that one of the SATA cables was fouling the fan blades. I tucked the cable away from the fan with a good clearance, verified that the cable was connected properly, and tried manually setting the fan on. It worked. In my debugging of this issue, I came across a tool called ATS and installed it. The install was a tad convoluted, but it worked. There is a config file to control threshold temperatures, and it runs as a systemd service.
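
For reference, manually driving the fan is just a write to the PWM node in sysfs. The hwmon index below is an assumption (it varies between images and kernels), so check which entry belongs to the pwm-fan driver first:

```shell
# Find the pwm-fan hwmon entry (the index varies; hwmon0 is an assumption)
cat /sys/class/hwmon/hwmon0/name                  # should say "pwmfan" or similar
echo 255 | sudo tee /sys/class/hwmon/hwmon0/pwm1  # full speed (range 0-255)
echo 0 | sudo tee /sys/class/hwmon/hwmon0/pwm1    # fan off
```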

NextCloud Client

Syncing to my cloud provider is the key function for this entire build. I found that there is a command line option for starting a single sync action using the nextcloudcmd command. Unfortunately, this requires installing the entire NextCloud client, which isn’t onerous, but includes a lot of GUI libraries that seem unnecessary. I’m also not a huge fan of adding PPAs willy-nilly, but this is the major feature I wanted for my NAS, so I made the exception.

At some point in the future, I will create a simple systemd service that runs the nextcloudcmd command every 5 minutes. It will first check to see if the sync is already running. If it is not, it will execute the command, performing a 2-way sync.
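
A minimal sketch of the wrapper I have in mind, using flock(1) so a timer can fire every 5 minutes without piling up overlapping syncs (the server URL, credentials and paths below are placeholders):

```shell
#!/bin/sh
# Run a command under an exclusive, non-blocking lock. If a previous sync
# still holds the lock, skip this round instead of starting a second sync.
sync_once() {
    lock="${LOCKFILE:-/tmp/nextcloud-sync.lock}"
    if ! flock -n "$lock" "$@"; then
        echo "sync already running, skipping"
    fi
}

# Intended invocation from a systemd timer or cron entry (placeholders):
#   sync_once nextcloudcmd -u USER -p PASS /srv/mirror https://cloud.example.com
```

The systemd side would then be a oneshot service plus a timer with OnCalendar=*:0/5.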

SMART Monitoring

I also installed the smartmontools package that allows execution of SMART tests and fetches disk status. I may create a weekly report that gets mailed to me if I get around to it.
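
smartd (part of smartmontools) can already handle the scheduled test and email report via /etc/smartd.conf; something like this (the schedule and address are examples):

```
# /etc/smartd.conf -- monitor attributes, run a long self-test every
# Sunday at 2am, and mail on failures (address is a placeholder)
/dev/sda -a -o on -S on -s L/../../7/02 -m me@example.com
/dev/sdb -a -o on -S on -s L/../../7/02 -m me@example.com
```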

UPS Monitoring

My old NAS had a UPS monitoring feature, so I wanted to keep that. I installed the apcupsd package, but I still need to set the threshold times for shutdowns. This should also alert other systems to shutdown as well, but I haven’t come up with a good solution for that yet.
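
The shutdown thresholds live in /etc/apcupsd/apcupsd.conf; the values here are examples rather than recommendations:

```
# /etc/apcupsd/apcupsd.conf (example values)
BATTERYLEVEL 20   # shut down when battery charge drops below 20%
MINUTES 5         # ...or when estimated runtime drops below 5 minutes
TIMEOUT 0         # 0 = rely on the two thresholds above
```

For the other systems, apcupsd can also publish UPS status over the network (NETSERVER on), with other hosts running apcupsd as net clients pointed at the NAS; I haven’t tried that yet.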

Filesharing via NFS

On my network, all of my hosts support NFS, so that is the only remote filesystem I set up.
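
The export itself is a one-liner in /etc/exports (path and subnet are examples), followed by sudo exportfs -ra on the NAS and a plain mount -t nfs on each client:

```
# /etc/exports (path and subnet are examples)
/tank/files  192.168.1.0/24(rw,sync,no_subtree_check)
```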


My old NAS was abysmally slow, and I was hoping that the new NAS would far outpace it in performance. I was not disappointed! I used iperf3 to test basic network performance. The NIC on the RockPro64 was able to achieve between 937 and 943 Mbps in either direction.
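
For the record, the test itself (hostname is a placeholder):

```shell
iperf3 -s                 # on the NAS
iperf3 -c nas.local       # on a client: client-to-NAS throughput
iperf3 -c nas.local -R    # -R reverses the direction (NAS-to-client)
```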

During file transfers, I regularly see about 54 MB/s (432 Mbps) with large files through the Gnome Files GUI.

I realize that these are not exhaustive results, but they are indicative of what I’ve seen over the last few months of using the NAS, and give a rough idea of what to expect from this configuration. These are much better numbers than anything I ever saw on my old ReadyNAS.


The new Pine64 NAS doing its thing

So, that’s all the basic details for the build and configuration of my Pine64 NAS. I’m really pleased with the performance of the system, and the build quality of the NAS enclosure and the RockPro64 SBC. I’d definitely recommend the Pine64 NAS if your use case is like mine. The only thing that gave me problems was the PCIe SATA controller I purchased from the Pine Store. If you’re not interested in the gory details of that “adventure”, you can stop reading here. Otherwise, this is a good place to talk about …

Trouble in Paradise

Well, it sounds like everything went swimmingly, doesn’t it? If you’ve made it this far, you’re looking for the ugly truth. Remember those disk errors I ignored at the top of this post? Those should have been a big, fat warning. Unfortunately, I dismissed them as problems with the old disks.

The first symptoms that appeared were:

  • Sudden CKSUM errors on one of the disks in the ZFS mirror (/dev/sdb), resulting in a “degraded” state, and then an “offline” state, indicating that the disk was unavailable. The disk was subsequently automatically removed from the mirror by ZFS.
  • When the disk became “unavailable” in the pool, smartctl was unable to read SMART data from the disk
  • Unexpected system reboots, usually during large file transfers

Rebooting and resilvering the mirror usually resulted in a repaired mirror, but it wouldn’t be long (24-48hrs) before the mirror failed again. The errors occurred on both the original 2TB disks as well as the new 8TB disks, and usually affected the same device, /dev/sdb.

The problem(s) had to stem from one (or more) of the following components:

  • SATA cables
  • Disk drives
  • Heat
  • Power supply
  • My custom-built ZFS kernel module
  • SATA controller

So, I set about to isolate each of the components to determine where the issue cropped up. The simplest and least expensive thing to check was the SATA cables.

SATA Cables

First, I swapped the cables with each other to see if the errors followed the “bad” cable. After swapping the cables, I rebuilt the mirror pool and started the NextCloud sync. Errors still appeared on /dev/sdb.

To me, that result supported the notion that a bad cable was not at fault.

Next, using the “bad” 2TB drive (/dev/sdb) that reported errors earlier, I

  1. Connected the disk to a SATA-to-USB adapter using the “bad” SATA cable from the new NAS enclosure and connected it to another computer
  2. Ran short and long SMART tests on it
  3. Created a ZFS pool with the single disk
  4. Copied over 100 GiB of large files (DVD ISO images and .mkv files) to the pool
  5. Ran zpool scrub on the pool
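
Steps 2–5, roughly as commands (/dev/sdX is whatever the adapter enumerates as; the pool name is an example):

```shell
sudo smartctl -t short /dev/sdX       # check results with: smartctl -a /dev/sdX
sudo smartctl -t long /dev/sdX        # the long test can take hours
sudo zpool create testpool /dev/sdX   # single-disk pool, no redundancy
# ...copy ~100 GiB of large files onto /testpool...
sudo zpool scrub testpool
sudo zpool status -v testpool         # look for CKSUM errors
```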

No errors!

Then, I connected the “good” 2TB disk (/dev/sda) to the same SATA cable and external adapter and repeated the above steps.

No errors.

Next, I connected the “bad” 8TB disk (also /dev/sdb) to the setup and repeated the steps.

No errors.

In my mind, this effectively eliminated both the “bad” cable and the “bad” disk as the source of errors, since I could write a large amount of data using the “bad” cable and “bad” disk combination on a different system.

Disk Drives

I read many years ago that some USB drive adapters don’t pass SMART data through to the host system properly. So, to test the disks more completely and rule out any adapter interference, I used the power adapter from the SATA-to-USB adapter and connected a SATA-to-eSATA cable between the drive and the second computer, which has an eSATA port. I then ran steps 2–5 again on both 2TB disks and both 8TB disks. This configuration had the added benefit of isolating the disks from the potentially “bad” cable.

No errors.

Heat


Thinking through the past few days of experimentation, I recalled that when I did not limit the data rates for nextcloudcmd, the reboots occurred more frequently. So, I set the limits to 100 Mbps up and down and re-ran the sync. I noted that it took several hours longer for the reboots to occur. I also saw a few mentions in the Pine64 forums of the ASMedia asm1061-based SATA controller getting very hot. My controller was based on the asm1062 chip, but was functionally equivalent to the asm1061. I noted that the chip got painfully hot to the touch, so I removed the cover of the NAS to allow more cool air to reach the SATA controller and re-ran the sync. This appeared to buy a little more stable time before the CKSUM and SMART errors cropped up again. I decided to try attaching a small heat sink to the SATA controller chip. I ordered a few sets, rationalizing that if they didn’t solve my problem, I could just use them for my ever-expanding collection of Raspberry Pis. After attaching the heat sink, I performed a zpool scrub (~8hrs) and saw several CKSUM errors again, but no SMART errors. The mirror did not degrade this time. This looked promising! Until a few minutes later, when the pool degraded and /dev/sdb was removed from the pool again.

By this time, I was quite frustrated, having spent the majority of a week tracking down this issue. But, I decided that this was an interesting enough problem to spend time on, so I persisted.

Power Supply

The next item to isolate was the power supply. Why the power supply? I began digging through the kernel log and saw lots of messages similar to the following:

Jul 1 16:04:17 gaia kernel: [87323.406416] ata2.00: exception Emask 0x1 SAct 0x60000000 SErr 0x0 action 0x0
Jul 1 16:04:17 gaia kernel: [87323.407431] ata2.00: irq_stat 0x40000008
Jul 1 16:04:17 gaia kernel: [87323.408014] ata2.00: failed command: READ FPDMA QUEUED
Jul 1 16:04:17 gaia kernel: [87323.409076] ata2.00: cmd 60/00:e8:c8:57:ed/01:00:f3:02:00/40 tag 29 ncq 131072 in
Jul 1 16:04:17 gaia kernel: [87323.409076] res 40/00:f0:c8:58:ed/00:00:f3:02:00/40 Emask 0x1 (device error)
Jul 1 16:04:17 gaia kernel: [87323.411285] ata2.00: status: { DRDY }

A brief search revealed this Pine64 forum post, indicating that power might be the culprit. At this point, I didn’t have much to lose in trying, and it would eliminate one more potential source of the errors. The recommended power supply for the NAS case is rated at 12V and 5A, which should be plenty for 2 disks, the RockPro64 SBC and fan. I tested the output voltage of the power adapter with a multimeter and it read 12.3V, so the power supply appeared to be OK. For the next part of this test, I

  • Powered one drive (/dev/sda) with the power connector included with the NAS enclosure (12V 5A)
  • Powered the other disk with the SATA-to-USB adapter setup (the power adapter is physically separate from the USB connection)

My thinking was that this configuration would draw less current from the NAS power supply, potentially avoiding issues where the system wasn’t getting enough power, causing one disk to go offline. Even with this configuration, I still saw checksum errors and /dev/sdb became unresponsive to smartctl. So, this told me that it was less likely a power problem.

My Custom-built ZFS Kernel Module

Since the ZFS kernel module I installed wasn’t stock, I thought it would be worthwhile to try a stock kernel and prebuilt ZFS module. For this, I flashed another microSD card with the Armbian 20.04 Focal image, thinking there should be a kernel module and zfsutils-linux package. I got the image booted and tried installing the ZFS package, but it failed, complaining that the kernel module couldn’t be installed. Out of frustration, I took a simpler path. I decided to try creating the volumes using LVM and ext4. If I saw the same kernel errors, I’d know the problem wasn’t isolated to the ZFS module. So, I set about creating physical and logical volumes. I wasn’t able to finish. During volume creation, the lvcreate command failed. I tried smartctl and /dev/sdb was unresponsive. The kernel log showed more of the now-familiar errors. OK. That ruled out my ZFS module.
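
For completeness, the LVM/ext4 cross-check looked roughly like this (volume names and size are examples); lvcreate is the step that died:

```shell
sudo pvcreate /dev/sda /dev/sdb                        # physical volumes
sudo vgcreate vg_nas /dev/sda /dev/sdb                 # volume group
sudo lvcreate --type raid1 -m 1 -L 7T -n files vg_nas  # mirrored LV; this failed
sudo mkfs.ext4 /dev/vg_nas/files                       # never got this far
```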

SATA Controller

I had proven (to myself, at least) that none of the other components in the system were likely sources of the problems, so the only thing left to try was the SATA controller. The previously linked Pine64 forum post also mentioned some SATA controller chips that were known to work with the RockPro64, so I found a reasonably priced card based on the Marvell 9215 chipset and ordered it.

When the new card arrived, I swapped out the card, powered up the NAS with the microSD card containing the ayufan 18.04 image and crossed my fingers. I recreated the ZFS mirror pool and began the NextCloud sync, monitoring the kernel log and the ZFS pool status. After a complete sync (about 3 days) and a zpool scrub…

No errors!

Well, that was painful. But, it was also informative and a lot of fun! Again, if you’re looking to build a simple NAS and your use case is similar to mine, I highly recommend the Pine64 NAS! But, don’t use an ASMedia asm1061- or asm1062-based SATA controller.

I Like Big Buttons

The Premise

In early 2018, my friend Steve had an idea.  His church supports missionaries, and for fundraisers, they sell lunches to the congregation.  In the past, they had volunteers prepare and serve the food.  It was a lot of effort, and Steve thought that it could be improved.  He thought that having a restaurant make the food would allow them to charge a bit more for the food, and might draw more attention.  He was right of course, and now he wanted a way to engage the participants even more.  He wanted some kind of visual progress indicator, something that would get people excited about participating in supporting the cause.

“What if we had a big button that people could push when they buy a lunch, and then have a big display of how many lunches we’ve sold?”

That sounded like a cool idea to me.  So, I set out to build just that.

The Architecture

button box setup
If this is a smart TV with a browser, the laptop can be eliminated

I tried to make this as simple as possible. No, really I did!  However, the constraints of the environment dictated otherwise.  I thought the simplest thing would be to have a button attached to a Raspberry Pi, which would be attached to a screen via an HDMI cable.  However, the display needed to be relatively far away from where people would be paying for lunch, and running a long HDMI cable was not acceptable.  Steve mentioned that they had an extra laptop that they could connect to the TV, so this seemed like a reasonable solution.

After some noodling, I decided that I’d have a big button box with a Raspberry Pi inside. The Pi would host a web server that publishes a page with the count on it.  The laptop connected to the TV would access the web page and display it on the TV.

Is this the simplest design?  Not by a long shot.  Would it be fun to build?  Absolutely!  I also like the idea that this solution is self-contained.  If the TV has a browser, then the laptop can be removed from the setup.

The Button Box Design

large button
That is one big button!
lit button
Ooh! Shiny!

The hardest and most interesting part of this project to me was the button box.  I’d been itching for a project to use the laser cutter at our makerspace.  While I could have just used a tool to generate the dimensions of the box with appropriate notches for assembly, I decided it would be fun to figure it out on my own.  Here’s what I wound up with…

2D OpenSCAD layout
the 2D primitives in OpenSCAD were perfect for this project!

I made the first prototype with cardboard, which worked out perfectly!

Cardboard prototype
looks good so far!
cardboard prototype assembled
Everything fits with a reasonable amount of space for wiring

Happy with the results, I moved on to the 2nd prototype, which was made out of MDF scraps at the makerspace (I think I also got some hardboard in there, too).

MDF prototype bottom
All the holes lined up! The square plate on the lower left is the shutdown button.
detail on shutdown button
I mounted a momentary contact button upside down to serve as the shutdown button

Since this button box would be operated by non-technical folks, I decided I should add a shutdown feature to prevent damage to the SD card.  When the button is held for 2 seconds, a shutdown command is sent to the Pi.  Of course, nothing stops the user from just disconnecting power, but at least the capability is there.  Since the Pi doesn’t have a built-in button to shut down the OS, I added a momentary contact button to the bottom of the box to serve this purpose.

Everything was going great until I received my acrylic sheets.  I thought I was getting 3mm acrylic sheets, but I actually ordered 2mm acrylic sheets.  *sigh*.  But thankfully, I made the material thickness a variable in my OpenSCAD model, so I just needed to change that variable from a 3 to a 2, and adjust the placement of the power connector hole (that took a bunch of trial and error).

off by 1mm
PSA: 3mm does not equal 2mm

After the adjustments, I re-cut the box from the spare sheet of acrylic (always a good idea to get more than you absolutely need in case you make a mistake!).

final internal wiring
I really like the Perma-Proto hats from Adafruit!

To keep the wiring nice and clean, I used a Perma-Proto hat from Adafruit. I’ve used them in a few projects now and I’m hooked.  For the relatively small circuits I’m making, they’re perfect, and don’t add much to the footprint of the project.

finished box unlit
I like the way it turned out! Unfortunately, you can see along the top front edge where the acrylic cement leaked and marred the surface. A little acrylic cement goes a long way.
finished button box powered on and lit up
Who can resist a giant shiny button?

The RESTful Web App

I really enjoy projects that involve physical builds as well as software.  In the spirit of over-engineering things, I decided to make my counter a RESTful Web Service.  You can find the source code on bitbucket.  When the big button is pressed, python code sends an HTTP request to the web server, incrementing the stored count.  The web page showing the count updates every 1/2 second.  For the shutdown, holding the button on the bottom of the box sends another request that executes a graceful shutdown of the web server and the Raspberry Pi.
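
The flow, sketched with curl against hypothetical endpoints (the real routes are in the bitbucket repo; these URLs and paths are illustrative only):

```shell
curl -X POST http://buttonbox.local:5000/count    # big button: increment the count
curl http://buttonbox.local:5000/count            # display page polls this every 500 ms
curl -X POST http://buttonbox.local:5000/shutdown # held bottom button: graceful shutdown
```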

A Demonstration

Here is the whole thing in action!

Next Steps

  • Test it out in the target environment
  • I’d like to make the typeface fixed-width
  • There is code to set a target number.  I’d like to add some animation that is triggered when the goal is reached

The Rickrolling Toilet

The Idea

In 2015, my church decided to revive their Fun Fair event.  They encouraged folks to take a look at some of the old games and refurbish them if they were interested.  I found an old game called “Dunk It”.  It consisted of a toilet seat bolted to a green wooden frame.  The goal was to toss a roll of toilet paper through the hole of the toilet seat to win a prize.

My inner 5-year-old thought this would be my game.  But, I couldn’t just leave it as is.  I needed to make some… improvements.  The premise of the game was simple, but it lacked something.  There was no feedback for the player.  So, I set out to give it a voice.

My first inclination was to use an Arduino with an audio shield.  However, I was trying to minimize the cost, so I thought I’d use something I already had on hand, a Raspberry Pi A+.  This would handle the audio, and the programming would be relatively simple.

Alpha/Beta Version

To prove the concept, I thought I would wire up a light sensor (Light-dependent Resistor) to the Raspberry Pi.  This was not as straightforward as I’d hoped, since LDRs are analog devices and the RPi only has digital I/O pins.  So, after a bit of searching, I found a technique to read analog values from a digital I/O pin using a simple circuit.

Once I got this wired up, I stole some code to make the LDR reading work.  I combined that with some code to play back audio and the alpha version was working…

Version 1.0

I foolishly ignored the advice to always put a resistor in series with an LED.  And it bit me in the behind.  An hour before the fun fair, the LED blew out, and I wasn’t able to leave and get a replacement.  Lesson learned.

Version 2.0

For Version 2.0, I wanted to make some significant improvements.  First, I wanted to make a more realistic “experience” for the players.  I was able to secure a toilet on freecycle (I love freecycle).  After sanitizing it, I set to work wiring up the LED in the bottom of the bowl.  This had the added benefit of making the code simpler.

I also wanted to improve the sound for the game, so I got an inexpensive power amp and car speakers.  I mounted the speakers on the snazzy new platform with locking casters.  This made the whole thing much easier to move around.

Version 3.0

For version 3.0, I wanted to clean things up significantly, and make it easier to assemble/disassemble.  I mounted the amp and Raspberry Pi on a board that attaches to the inside of the tank with Velcro.  I also used a connector for the sensor instead of wiring it directly to the board.

all components mounted to a single board
I wired up the circuit on a proto hat and added a connector for the LDR so that it was easier to connect/disconnect

Debug mode with monitor and keyboard
What toilet wouldn’t benefit from a debug monitor and keyboard?

Version 3.1

I added a shutdown button for the system so the SD card doesn’t get trashed and the Pi shuts down gracefully, and covered the wires with some flexible split tubing.  I also added some code to avoid repeating the audio if the sensor detected darkness at the end of the previous audio playback.  Finally, I added some other… apropos sounds.

I bet this is the first time the words “graceful” and “toilet” appear in the same context.

Shutdown button on the Pi Hat
I added a shutdown button. This puts my mind at ease by gracefully shutting down the Pi.

Possible Improvements:

  • I’d like to use the flush handle to perform the shutdown/recalibration


the garden of nerdy delight

<This is an old draft I just finished. Oy!>
We just received a brand new Foundry FastIron Gigabit switch at work. It’s to be used in our new network-based editing configuration. I’m testing it out to see if it will handle 8 simultaneous video captures. Here’s the boot message from this bad boy:

FGS Boot Code Version 02.4.00
Enter ‘b’ to stop at boot …
BOOT INFO: load monitor from primary, size = 84371
BOOT INFO: load image from primary…….
BOOT INFO: bootparam at 00049268, mp_flash_size = 001dc352
BOOT INFO: code decompression completed
BOOT INFO: branch to 00400100
Starting Main Task …
Parsing Config Data …

FSecure SSH is included in this product

SW: Version 02.4.00cT7e1 Copyright (c) 1996-2006 Foundry Networks, Inc.
Compiled on Sep 06 2006 at 16:40:46 labeled as FGS02400c
(1950546 bytes) from Primary fgs02400c.bin
BootROM: Version 02.4.00T7e5 (FEv2)
HW: Stackable FGS624P
Serial #: CH42060189
P-ASIC 0: type D804, rev 01
400 MHz Power PC processor 8245 (version 129/1014) 66 MHz bus
512 KB boot flash memory
8192 KB code flash memory
The system uptime is 2 seconds
The system : started=cold start

FGS624P Switch>
Power supply 1 detected and up.
PowerPC, eh? Pretty slick. It must be one of those embedded jobbies from Freescale. With 128 MB of RAM, this is a pretty serious switch.


in memoriam green iPod mini

Green iPod mini

After several months of searching high and low, I’ve given up. I’m beginning to accept the fact that my green iPod mini is lost. I’ve looked everywhere, and have racked my brain trying to remember where I last saw it or used it. The odd thing is that I still have the docking cable. I always took the cable with me if I was going to be away from home for more than a normal day.

I think the thing I’m saddest about is the sentimental value of greenie. A2C gave me greenie when we were dating. I remember it was just before we got engaged. She had just returned from a trip overseas. We were hanging out in her room, and were catching up and talking about things. I told her I loved her, and she replied as she always does with, “You do?”. Only this time, she continued, “Oh good, then I have something for you!”. She jumped up and began digging through her closet. In a few moments, she produced a new iPod mini and a protective case. I was floored, as it was a pretty extravagant gift (to me, anyway).

If I were to pick an iPod myself, it would have been green. It was so nice to have something I’d use all the time to remind me of A2C (not that I needed reminding or anything). I got lots of good years out of that iPod, and I must say, I think the minis are still my favorites. I’ve not tried out the nano (older or newer), and quite frankly, I’m not all that interested. Half due to the fact that I read more than listen on my daily commute these days, and half because I’m not so enamored of all things iPod as I once was. That being said, I’m slowly becoming more and more interested in the iPhone. Must… resist… temptation… shiny…


holy small form factor, batman…

Back in 2002 or so, I became fascinated with small form factor computers. Those of you (okay, *both* of you) who have followed my blog for any time have probably noted this fascination.

There’s something about the shrinking size of all that computing power that really appeals to my geek-ness. I think it’s the possibilities that are opened up for putting powerful computers into more and more everyday things, like toasters, cars, appliances, etc. Or, it could be that the shrinking size of my living space has necessitated replacing all those huge, power-guzzling, noisy boxen of yore with svelte, silent, cool-running machines of the future.

Of course, Apple’s Mac mini is on my wishlist of small computers to add to my collection of small powerhouses, but today I read an article about the new Pico-ITX form factor designed by VIA, the leader in small form factor design. Prior to the availability of the Mac mini, VIA were the ones to watch in the SFF arena. The mini brought more features, slightly smaller size, and impeccable style to the table. To be honest, SFF offerings are sorely lacking in visual impact. I have found that Casetronic has the most attractive cases on the market, aside from Apple.

comparison image shamelessly linked from mini-itx.com

So the new Pico-ITX form factor reference design is targeted to consume about 1 watt of power under normal usage! Pretty amazing. This, combined with RoHS compliance, is, I believe, helping to push the industry toward lower and lower power consumption and better environmental impact.
Anyway, enough of my rambling.

saying goodbye to an old friend…

right side view

Back in my college days (seems like ages ago now), I assembled my first PC. Ah, the memories. I hand-picked my components, trying to achieve the perfect balance of cheap, good, and fast. I had selected a motherboard and CPU combo with a 486 DX2-66MHz processor, 2 VLB slots and 6 ISA slots. I put in the 4 MB of RAM from my first computer and lived with that for a while until I bit the bullet and added 4 MB, then another 4 MB, eventually having 12 MB of RAM. I got a SoundBlaster AWE-32 which, at the time, was *the* top-of-the-line sound card. I also got a 2X Sony CD-ROM drive so I could play Myst. For the graphics card, I selected a 1 MB VRAM STB Powergraph 24. 640×480@24-bit color, baby (my 14″ IBM monitor couldn’t handle resolutions above 640×480). For communications (had to get on the ’net!), I chose a Zoltrix 14400 modem. This was a *huge* step up from my measly 2400 baud modem that came with my PS/1.

I also purchased a 1GB Seagate ST31220A Enhanced IDE hard drive. A mere 3 months before, it debuted at $1000. I picked up that bad-boy for a mere $500 and change. What a deal! With it, I had to get a VLB disk controller, since the mobo didn’t have a built-in HDD controller. The last piece I needed was a case to stuff all these totally ‘leet components into.

On the way home from a somewhat long night of revelling, I stumbled past the local computer shop and beheld the most interesting case I’d seen to date. The design was interesting, but not overdone, and it seemed large enough to house the many upgrades I envisioned for my perfect machine.

3/4 front view

This case lasted me several years and many fun upgrades. And many sleepless nights fearing that I’d completely screwed something up. I remember playing Doom with an audio CD playing in the background (usually Smashing Pumpkins). This required a TSR (terminate and stay resident) driver for the CD-ROM drive. Oh yes, did I mention the not-so-leet Labtec speakers? “Tiny and Tinny” was their claim to fame, but for the college dorm, they were enough.

Its first rejuvenation was the move to a Pentium 100 on an Asus P/I-P55TP4XE with upgradeable pipeline burst cache (which I bumped from 256KB to 512KB). This beast had 24MB of RAM and an MPEG-1 decoder card (the docs were all in Korean, but hey, it was free). I also splurged on a 17″ Sony CPD-17SF-II Trinitron monitor, which was driven by the amazing, all-powerful Number9 Imagine 128 4MB VRAM PCI graphics engine. 24-bit color at 1152×864. I also decided to get into digital artwork and purchased a Wacom 4″x5″ tablet. Sweet. I decided to upgrade the mouse to a Logitech Mouseman Serial 3-button mouse. Count ’em and weep! I also bumped up my keyboard to a Microsoft Natural (the big honkin’ original, not the Elite). It was on this machine that I first installed Solaris x86 and Linux (Red Hat, then Slackware).

left side view

Eventually, I upgraded, as the case had the worst arrangement for the drive cage. Any maintenance required disassembling the entire drive subsystem, which was a lot of work back in those days. I kept the old case lying around, and eventually revived it with old parts salvaged from discarded PCs (I believe it was a Pentium in the 150MHz range). Finally, I purchased overstock components, and it lived out its last days as a 1GHz Celeron with 256MB RAM on an Epox baby AT motherboard (they actually made baby AT motherboards after the turn of the century!). The mobo/CPU/RAM live on in my current Linux box, but that’s another story. As you can see, it also acquired a mass of stickers. The kana on the left side is the hiragana for my English name. I lovingly glued it on, X-acto knife’d out the excess, and taped over it with clear packing tape to preserve it for all time.

So, it is with misty eyes and a heart swelling with emotion that I bid a fond farewell to a trusty but finicky old friend. You will be missed.

the last place you’d think to look…

Way back in the halcyon days of the late 90’s when USB was young, and the women were glad of it, I decided to go all-USB for my peripherals. At the time, it was hard to find a BIOS that could boot with a USB keyboard. Over the years, I’ve chewed through a few USB hubs, and several mice, keyboards, scanners and printers.

Recently, I decided I wanted a full-time Linux box around, and I wanted to share my monitor, keyboard and mouse between it and my Windows box. Now, you may be thinking, “why, oh why would you not just buy one of those cheap-as-dirt KVM switches that are on the market today, that even include cables?”.

Of course, the answer to that is that I’m cheap. I already have the video switching capability in my monitor, so I just need a solution for the K and M portions. I hooked up the keyboard and mouse to the USB hub, and then have 2 wires that rest near the hub. One connects the hub to the Windows box, the other to the Linux box. This way, I need only reach a few inches, pull a plug, and push in another. Pretty decent solution, right? Right?

Well, for some reason, having the mouse connected via the hub causes it to lose communication with the computer to which it is attached. This was not news to me; I had observed this behavior many times in the past. At the time, the solution was to connect the mouse directly to the PC. This is no good in this scenario for obvious reasons. The mouse is a basic Logitech First Mouse with a wheel. The keyboard is the Microsoft Natural Elite (though I have nothing nice to say about their software, their hardware is pretty decent. Not that they actually make the stuff.) Anyway, I noticed that after a few moments of inactivity, the mouse would no longer move. If I reinserted the mouse’s USB connector, all was good until another period of inactivity. This is absolutely unlivable. No way this is flying. What’s the deal? Maybe the hub sucks? It’s an unpowered hub, but it can be plugged in if the power requirements so dictate.

Of course, I googled the problem. I noticed a lot of posts regarding bad mouse drivers for various mice, and a lot of similar inquiries regarding the hub. Then it dawned on me. Could it be the mouse itself? To test the theory, I turned to my trusty Drawer of Many Things™. From its depths, I pulled a Microsoft USB mouse. I plugged it in, and there were no issues with loss of communication via the hub. Odd. All these years, I never knew, and just suffered with the crappy mouse. I don’t prefer the MS mouse, but in this case, I’ll just have to deal. I’d rather have a system that works reliably than have a nice mouse that doesn’t work so well. *sigh*


cable guy

So a few weeks or so after the wedding, I walk into the computer room at home, and notice that my Windows machine (which had been running for a few months with no issues) is at the BIOS screen. “Okay,” I think, “something small has happened and I’ll just reboot.” Uh-uh. The SCSI controller is complaining that it can’t make its wide negotiation with one of my drives (the 80GB 10,000RPM guy). Well, it says to check the cables, but I hope that’s really the problem. So, I never get around to fixing it because I’m so busy getting the rest of the house in order, living the married life, and working.

So finally, last night, I decide to take a look and see what the problem really is. I disconnect the cables from both hard drives and the adapter doesn’t complain. Good. At least that somewhat eliminates the adapter as the problem. Now, I hope it isn’t the drives. So, I reconnect the drive that wasn’t having issues, and the controller complains that there’s a termination problem and that I should check the cable. Great, this is pointing more and more to actually being a cable issue. So, I replace the cable with another SCSI cable (I happen to have 3-4 68-pin SCSI cables lying around) and voila! It works! No complaints from the controller, so I think that was the issue. A huge sigh of relief 🙂


iPod mini annoyance: fixed

I love my iPod mini. The one annoyance I had with it was that when it wakes from a deep sleep (not used for 36 hrs), it loses some settings. Primarily, the one that annoyed me was the clicker setting. I generally set it to off, to save power, and so it doesn’t, well…, click. There were also issues with the main menu items and all that, but it didn’t bother me so much.

As of the newest firmware (1.4), they have fixed this issue. Thank goodness! Unfortunately, they broke the Smart Playlist feature. You see, the Smart Playlists on the iPod would update if you changed some attribute (typically the rating) of a song on the iPod. Now the feature only works while you’re in iTunes. This is supposed to be fixed.

John Gruber has some interesting views on the podcasting phenomenon, and why Apple had to release new firmware and iTunes.