Merge the real site metadata into this repo

This commit is contained in:
Nick Pegg 2017-10-14 10:41:31 -07:00
parent 02a17e1465
commit 55610cb0ba
62 changed files with 1989 additions and 1 deletions

1
_media Symbolic link
View file

@ -0,0 +1 @@
public/media

10
_pages/about.yaml Executable file
View file

@ -0,0 +1,10 @@
---
title: About me
url: /about/
slug: about
---
This about page was incredibly out-of-date, so I removed it. If you want to know who I am or what I'm up to, check out these links:
* [Twitter](https://twitter.com/nickpegg)
* [GitHub](https://github.com/nickpegg)

73
_pages/projects.yaml Normal file
View file

@ -0,0 +1,73 @@
---
title: Projects
url: /projects/
---
Here's a decent number of the projects I've worked on in the past.
**This is mostly ancient and you should probably check out my [GitHub](https://github.com/nickpegg) profile instead**
### Posty (2010)
Links: [GitHub](https://github.com/nickpegg/posty)
Just a little static page generator I wrote when I got sick of using
Wordpress. It's quick, it's dirty, but it does what I need it to.
I guess I'm just never satisfied with using pre-made software packages
to run my personal website.
### Beertraq (2009)
Links: [Website](https://beertraq.com/)
A website to keep track of which beers you've tried, compare with others,
read their reviews, and discover new beer. Started in the summer of 2009,
inspired by The Flying Saucer's UFO Club.
### Intelligent Drink Dispenser (2009)
(No code available, sorry)
This is my Computer Engineering Senior Design project at the University of
Missouri-Rolla. It's basically a robotic bartender which keeps track of
customers (via RFID) and their purchases. This was an idea that Richard Allen
and I have been kicking around for a few years, but it's finally come to life.
### Nick Tracker (Python, Java) (2008)
(No code available, sorry)
Keeps track of where my phone's at, which is usually where I am. Server side
script written in Python, client written in Java for the Android phone platform.
Since I've written this, two better applications have hit the Android Market,
including Google Latitude. I've stopped work on this because I don't feel like
re-inventing the wheel.
### CPU Usage Meter (2006)
Links: [Project page](/cpu-usage-meter/), [Linux source](/media/projects/cpu_meter.tar.gz)
LEDs on the front of my computer case displaying the CPU load.
### Serial IR Receiver (2006)
Links: [Project page](/ir-receiver/)
A simple serial-based LIRC-compatible IR receiver.
### ServCheck (PHP)
Links: [servCheck.tar.gz](/media/projects/servCheck.tar.gz)
A simple service checker written in PHP. Attempts to open a socket with the
configured hosts and ports, and outputs an HTML file showing which services
are up and down. I originally wrote this for the TerminalUnix site to show what's
working and what isn't.
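The original PHP source is linked above; as a rough illustration of the idea, here's a minimal Python sketch (the hosts, ports, and output filename are made up):
```
import socket

# Hypothetical list of services to watch -- the real config lived in the PHP site
SERVICES = [("terminalunix.com", 80), ("terminalunix.com", 22)]

def is_up(host, port, timeout=5):
    """Return True if a TCP connection to host:port can be opened."""
    try:
        socket.create_connection((host, port), timeout).close()
        return True
    except OSError:
        return False

rows = "".join("<tr><td>%s:%d</td><td>%s</td></tr>\n"
               % (host, port, "up" if is_up(host, port) else "down")
               for host, port in SERVICES)
with open("status.html", "w") as f:
    f.write("<table>\n" + rows + "</table>\n")
```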
### TerminalUnix (PHP)
A PHP and MySQL driven site, functioning as a web front-end and community site for
the TerminalUnix server. I started it because I was sick of all of these Content
Management Systems having features that I didn't want. I sat down during Spring Break
of 2006 and coded a PHP login system, not knowing about the wonders of some of the PHP
features. Eventually I coded nice things in such as MySQL access (instead of a flat
text file), a user administration system, and even a news system.
Unfortunately I don't plan on releasing the source code since it's a big hard-coded mess.
### N-Queens solver (C++)
Links: [nqueens.tar.gz](/media/projects/nqueens.tar.gz)
Another Data Structures homework assignment. This solves (brute-forces) the
[N-Queens Problem](https://www.google.com/search?q=n-queens+problem) using recursion and backtracking.
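The original C++ assignment isn't shown here, but the recursion-and-backtracking approach it describes looks roughly like this Python sketch:
```
def solve(n, queens=()):
    """Place n queens one row at a time; return a tuple of column indices
    (one per row) or None if no placement works from here."""
    row = len(queens)
    if row == n:
        return queens
    for col in range(n):
        # Safe if no earlier queen shares this column or either diagonal
        if all(col != c and abs(col - c) != row - r for r, c in enumerate(queens)):
            solution = solve(n, queens + (col,))
            if solution is not None:
                return solution
    return None    # dead end -- backtrack

print(solve(8))    # (0, 4, 7, 5, 2, 6, 1, 3)
```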

View file

@ -0,0 +1,174 @@
---
title: Kegerator
url: /projects/kegerator/
parent: Projects
---
I've been homebrewing for a couple of years now, and my least favorite part of
the whole process is definitely the bottling. Each 5 gallon batch has
approximately 55 bottles that you have to clean, sanitize, fill, cap, clean
again, and put in boxes. I've gotten sick and tired of doing that for every
batch of beer, so I decided to make the jump and build myself a kegerator.
![Kegerator mostly finished][0]
Building a kegerator is fairly simple, only requiring some plumbing and
woodworking. The only hard part is the cost. Below is the cost for a
three-keg setup similar to my current two-keg setup.
### Updates
#### Feb 20, 2011
Got the kegerator built yesterday minus a temperature controller. I got a
little drill-happy and accidentally made three faucet holes instead of two.
Oops, I guess I'll have to put in that third faucet.
### Links
* [Flickr set](https://www.flickr.com/photos/nickpegg/sets/72157625971333921/)
### Bill of Materials
<table border="1">
<tr>
<th>Qty</th>
<th>Cost Each</th>
<th>Item</th>
</tr>
<tr>
<td>1</td>
<td>$198</td>
<td>GE 7.0 cubic ft freezer</td>
</tr>
<tr>
<td>1</td>
<td>$90</td>
<td>5 pound CO2 tank</td>
</tr>
<tr>
<td>1</td>
<td>$75</td>
<td>Dual gauge CO2 regulator</td>
</tr>
<tr>
<td>1</td>
<td>$47</td>
<td>3-way CO2 distributor</td>
</tr>
<tr>
<td>3</td>
<td>$40</td>
<td>Used 5 gallon soda keg</td>
</tr>
<tr>
<td>3</td>
<td>$6.50</td>
<td>Ball lock gas disconnect - MFL</td>
</tr>
<tr>
<td>3</td>
<td>$6.50</td>
<td>Ball lock liquid disconnect - MFL</td>
</tr>
<tr>
<td>7</td>
<td>$1.30</td>
<td>1/4" barb-to-MFL connector</td>
</tr>
<tr>
<td>7</td>
<td>$0.25</td>
<td>Flared nylon washers for MFL connections</td>
</tr>
<tr>
<td>3</td>
<td>$20</td>
<td>Stainless steel faucet shank</td>
</tr>
<tr>
<td>3</td>
<td>$2.25</td>
<td>1/4" barbed shank tail piece and hex nut</td>
</tr>
<tr>
<td>3</td>
<td>$0.10</td>
<td>Rubber shank washer</td>
</tr>
<tr>
<td>3</td>
<td>$31.50</td>
<td>Perlick beer faucet</td>
</tr>
<tr>
<td>3</td>
<td>$2</td>
<td>Economy tap handle</td>
</tr>
<tr>
<td>1</td>
<td>$6.42</td>
<td>12' 2x8</td>
</tr>
<tr>
<td>1</td>
<td>$3.37</td>
<td>Roll of weather stripping</td>
</tr>
<tr>
<td>2</td>
<td>$5.65</td>
<td>25' roll of poly ice maker tubing</td>
</tr>
<tr>
<td>14</td>
<td>$0.65</td>
<td>1/4" to 1/2" hose clamp</td>
</tr>
<tr>
<th align="left">Total</th>
<th align="left">$758.09</th>
<td></td>
</tr>
</table>
### Construction
By far the most popular way to build a kegerator outside of complete
fabrication is to take an existing chest freezer and add a collar between the
freezer and the original lid. This is what I did as seen below.
![Hinges][1]
An added bonus to the collar method is that you now have a wooden platform to
add your faucets and other items without harming the original freezer, in case
you decide to sell it later or actually use it for storing food. When building
the collar, you'll want to use something like a 2x8 to have enough clearance
for the old hinges to attach to the wood.
To help keep the cold air in the kegerator, it's a good idea to seal the
collar. I just put some weather stripping down where the collar rests on the
freezer and filled the collar joints with some extra oil pan sealant I had
laying around. The weather stripping is nice because if you're not quite a
master woodworker and can't be bothered to get the collar exactly square, it
helps fill in your gaps.
Once the collar's on and the faucets are installed, it's just a matter of
connecting everything. Don't forget to use your washers to get a good seal! You
can hand-tighten the MFL connections, but it might be a better idea to tighten
them with a pair of pliers.
### Pressure Testing
Once everything's together, you'll want to pressure test the entire dispensing
system. What I did was fill the kegs with water and pressurize the whole system,
and do some test pours. If you have any leaks, you'll either hear air hissing
or see water leaking out.
![Pressure testing][2]
In the case that it is an air leak and you're not
sure where it's coming from, disconnect things one by one until the
hissing stops. The part that you last disconnected is the faulty one, so make
sure that everything's tight on there, especially if it's a hose clamp on a
barbed connection.
[0]: https://farm6.static.flickr.com/5135/5462665976_d11faea2aa.jpg "Mostly finished"
[1]: https://farm6.static.flickr.com/5132/5462060669_436dbe852f.jpg
[2]: https://farm6.static.flickr.com/5053/5462065415_efefbb9675.jpg "Pressure testing"

View file

@ -0,0 +1,84 @@
---
title: CPU Usage Meter
url: /projects/cpu-usage-meter/
parent: Projects
---
Back in the day, there was a little obscure operating system called BeOS. The
company which made the OS was brave enough to put it on their own hardware, too.
This was dubbed the [BeBox](https://en.wikipedia.org/wiki/Bebox). Among all the
neat doohickeys on the computer were two CPU load meters (one for each processor).
Now, how cool would it be to have a computer with those?
__Very.__
### Download
Note: Linux code makes use of libserial and libstatgrab. Both must be installed for
the program to work/compile. The UM245R device uses the ftdi_sio driver. It's in the
2.6 kernel tree, so it should (hopefully) be detected when you plug the device in.
USB controller program:
* [Linux code and executable](/media/projects/cpu_meter.tar.gz)
* Windows code removed due to bugginess
### Updates
#### Status Update - Nov 27 2007
Wow, it's been almost a year since I've put work into this. I decided to finally write
code to make it work under Linux. I ended up throwing out the notion of trying to
use libusb. After 4 hours of research and code hacking, it worked!
#### Status Update - Dec 08 2006
The hardware works and the software works (kinda). Once I get around to cleaning up
some of the code and adding some documentation, I'll start uploading stuff. Stay tuned!
#### Status Update - Dec 02 2006
Got the hardware working on a breadboard. Using DLP's test program, I was able to
send various patterns. Video coming soon!
### Statistics:
* Cost: $27.69
* Lines of code (Linux): 83
* Lines of code (Windows): 1503
* Sleep lost: Unknown
### Hardware Design
The hardware is pretty simple. Using a UM245R, most of the work is done for you. The
[UM245R](https://www.ftdichip.com/Products/EvaluationKits/UM245R.htm) takes in USB data
and outputs it on the 8 data pins, and those 8 pins directly drive the LEDs. It's not
quite as simple as that, since there's all sorts of protocol with Ready-to-Read and
Ready-to-Write and Read and Write pins that go high and low. I just cheated and used
a 555 timer to generate a clock signal on the RD pin to give me the data. I just
lucked out in that the UM245R keeps outputting the last data byte if there's no new data available.
[Circuit Design](/media/img/cpu_meter/circuit.png)
[Testing the circuit](/media/img/cpu_meter/testing.jpg)
### Software
As with any hardware, there needs to be software which controls it. For the Windows
code, I decided to use [LibUSB](https://libusb.sf.net/) to help me with this project. Programming with LibUSB
is fairly straightforward, which helps since the documentation is rather spotty. Along
with LibUSB, I also took the Queue class from [nicklib](/projects/) and wrote a UsbDevice class to
help handle failures better. This stuff can be found in the source package above.
After wrangling with libusb on the Windows side of things, I decided to throw out
that idea on the Linux client. It turns out that FTDI makes a driver for the UM245R
called ftdi_sio which creates a virtual serial interface. I used this along with
libserial and libstatgrab to get it working.
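The downloadable Linux client above is C++ (libserial + libstatgrab), but the idea fits in a few lines of Python. This is only a hedged sketch, assuming the UM245R shows up as `/dev/ttyUSB0` via ftdi_sio and that pyserial is installed:
```
import time
import serial   # pyserial

def cpu_busy_fraction(interval=0.5):
    """Sample /proc/stat twice and return the busy fraction over the interval."""
    def snapshot():
        fields = [int(x) for x in open("/proc/stat").readline().split()[1:]]
        return sum(fields), fields[3] + fields[4]   # total, idle + iowait
    total1, idle1 = snapshot()
    time.sleep(interval)
    total2, idle2 = snapshot()
    busy = (total2 - total1) - (idle2 - idle1)
    return busy / float(total2 - total1)

# /dev/ttyUSB0 is an assumption -- whatever name ftdi_sio gives the UM245R
port = serial.Serial("/dev/ttyUSB0", 9600)
while True:
    lit = int(round(cpu_busy_fraction() * 8))   # how many of the 8 LEDs to light
    port.write(bytes([(1 << lit) - 1]))         # bar-graph pattern on the data pins
```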
The Linux code is rather simple and only does CPU usage. I'm planning on extending
it to do things such as music visualization. This of course means writing some sort
of user interface and probably using threads.
### Pictures
Some pictures of the final product:
![Final Product](/media/img/cpu_meter/1.jpg) ![Final Product](/media/img/cpu_meter/2.jpg)
![Final Product](/media/img/cpu_meter/3.jpg) ![Final Product](/media/img/cpu_meter/4.jpg)
![Final Product](/media/img/cpu_meter/5.jpg)

View file

@ -0,0 +1,50 @@
---
title: LIRC IR Receiver
slug: ir-receiver
url: /projects/ir-receiver/
parent: Projects
---
Since my computer was used as a media hub for my two roommates and me during my sophomore
year at UMR, I figured being able to use my A/V receiver's remote control to
control the computer would be nice. After doing some quick research, (Win)LIRC
seemed to be the best solution. Many people either buy a pre-made receiver, or
build their own receiver that fits on the end of a serial cable.
### Parts List
* [A suitable IR receiver module](http://lirc.org/receivers.html) (Vishay 1738 is popular)
* 1N4148 diode
* 4.7 uF capacitor
* 4.7 kOhm resistor
* 7805 5V voltage regulator
### Construction
Construction is fairly easy. I personally used a Vishay TSOP2238 IR receiver
module and [instructions](http://lnx.manoweb.com/lirc/) found freely on the web. Below is the circuit diagram
that I used. In the schematic, the - pin (GND) on the IR receiver goes to the serial
GND, the +/Vs pin receives the +5V from the voltage regulator, and the Data pin is
connected to the DCD on the serial port. Beware that many IR receivers have different
pinouts! More details on what does what can be found at the [LIRC guide](http://lirc.org/receivers.html). Once the
receiver is complete, the easiest way to position it is to connect it to a serial
extension cable and mount it somewhere.
Schematic:
### Usage
Since I primarily used Windows for day-to-day tasks back then, I used [WinLIRC](http://winlirc.sf.net/) to
handle the receiver. Its configuration files are identical to LIRC's. Pre-made
configuration files are available for a [wide range of remotes](http://lirc.org/receivers.html), but your remote can be
programmed manually if it needs to be.
My two main media players were [Winamp](http://winamp.com/) and [Media Player Classic](http://sourceforge.net/project/showfiles.php?group_id=82303&package_id=84358), which both have some
sort of support for LIRC. Media Player Classic has it built-in (configuration in the
Keys options), but Winamp requires a plugin. Luckily, there is one available on the
[WinLIRC website](http://winlirc.sf.net/).
### Links
* [LIRC](http://lirc.org/)
* [WinLIRC - A Windows port of LIRC](http://winlirc.sf.net/)
* [List of known working IR modules](http://lirc.org/receivers.html)
* [Nice construction instructions](http://lnx.manoweb.com/lirc/)

View file

@ -0,0 +1,21 @@
date: 2009-04-15
tags: []
title: New Website
---
So, I've decided to redo my website again. My old one was pretty bland and
updating stuff was a relative pain. Plus, I seem to be doing more interesting
things and having more interesting thoughts (hopefully) than I have in the past
couple of years. Of course, I am a Computer Engineer (aka nerd), so don't expect
to see anything like me pondering the wonders of the cosmos here. I'll probably
mostly post updates on projects I'm fiddling with, my struggles with technology,
and what I've been up to.
---
As you can see, I decided to go with WordPress. I used it a long time ago, but
after seeing my roommate [Ben Murrell](http://benmurrell.com/) use it on the [Missouri S&T ACM site](http://acm.mst.edu/)
and seeing what it was capable of, I went for it. Probably the most important
thing for me was the ability to have static pages and link to them in a reasonable
fashion (as you can see along the top and right sidebar). It just does everything
I want to do without too much fuss, and it looks pretty while it does it.
I'm still moving stuff from my old website and updating information, but feel
free to poke around in the meantime.

View file

@ -0,0 +1,26 @@
date: 2009-04-17
tags:
- servers
title: New Server
---
Thanks to a generous donation from [Richard Allen](http://rsaxvc.net/), I now have a new server.
---
![Compaq Proliant DL360](/media/img/compaq.jpg)
It's a Compaq Proliant DL360. From what I can tell of its past, a place where another
friend of mine was working was getting rid of their old hardware and he snagged a bunch.
Richard got some of the servers, had no use for this one, and then gave it to me a couple
of days ago.
It's probably a first generation DL360, but that doesn't mean it sucks. It's got dual
Pentium IIIs running at 1.266 GHz each, 512 MB of RAM, and two 18.3 GB SCSI drives running
in RAID1. It's quite an upgrade from my current server, which is a dual P3 450 MHz box with a
little less than 512 MB of RAM. The best part of the whole thing, though, is the Remote
Insight Lights-Out Edition II card that came with it. It's a PCI-X card that redirects
video, keyboard, and mouse from the system and supplies it to a web interface. This means
that no matter where I am, I can get a physical terminal to my server. Plus, if I plug an
external power supply to it, I can even turn on my server remotely!
I'll probably be moving everything from my current server over to the new one. Since I'm
mirroring and then cutting over, there shouldn't be any problems with any of the services
that my box supplies.

View file

@ -0,0 +1,71 @@
date: 2009-04-18
tags:
- servers
- linux
title: PXE boot with DD-WRT and Ubuntu
---
After spending all afternoon fighting with my new server and my DD-WRT router,
I finally figured out how to get my server to PXE boot and fire up an Ubuntu
install. All it really involved was setting up TFTP on another box (my desktop,
to be specific), adding a line to DD-WRT's DNSMasq options, and configuring the
damn server to boot from PXE, which was the hardest part. Luckily, for those of
you who are struggling with it, here's how I did it.
---
### Setting up the PXE client
I had to get my server to boot PXE in the first place. For most people, this just
means poking around in the BIOS. Not for me though.
After poking around the HP site, I've found out that my server is a first
generation Proliant DL360. Since it's an older machine, this means that it doesn't
have a built-in BIOS config, but I had to actually download the old Compaq SmartStart
5.5 CD. I had to hunt around the HP website, but to save you the trouble, you can snag it here:
[http://ftp.hp.com/pub/products/servers/supportsoftware/ZIP/smartstart-5.50-0.zip](http://ftp.hp.com/pub/products/servers/supportsoftware/ZIP/smartstart-5.50-0.zip)
Once you boot from the CD, you'll want to go into the System Configuration
Utility when prompted. From there, it's just like a giant BIOS. Just turn PXE
on for whatever ethernet port you're using and it's rarin' to go.
### Setting up the TFTP server
Once my server was setup for PXE booting, I had to set up a tftp server for it
to grab the boot image from. Since I was using my desktop, which runs Ubuntu, as
a host, setup was pretty easy. I just used tftpd-hpa per the Ubuntu wiki's recommendation.
```
sudo aptitude install tftpd-hpa
```
I had to also edit the configuration file at /etc/default/tftpd-hpa. Mine looks like this:
```
#Defaults for tftpd-hpa
RUN_DAEMON="yes"
OPTIONS="-l -s /var/lib/tftpboot"
```
Since I was wanting to PXE boot into an Ubuntu install, I had to extract the
install files into /var/lib/tftpboot as I put in the config file. For example, the
netboot image files for Ubuntu 9.04 can be found here:
[http://archive.ubuntu.com/ubuntu/dists/jaunty/main/installer-i386/current/images/netboot/netboot.tar.gz](http://archive.ubuntu.com/ubuntu/dists/jaunty/main/installer-i386/current/images/netboot/netboot.tar.gz)
### Setting up the DHCP server
DD-WRT uses dnsmasq for DHCP, so if you have a system which uses it too it shouldn't
be too much different to setup. Watch out, though! I initially screwed up my configuration
which really messed with my router.
All you have to do is add a line to the Additional DNSMasq Options found under the Services
tab. If you're running plain dnsmasq, just add the line to your dnsmasq.conf file. The line
goes a little something like this:
```
dhcp-boot=pxelinux.0,mybox,10.0.0.100
```
where pxelinux.0 is the file to boot, mybox is the hostname of the tftp server, and 10.0.0.100
is the IP address of the tftp server. You could probably get away with only specifying the
hostname or just leaving it blank and supplying the IP address. You can also get more fancy
and send certain boot images to certain machines, etc. This way works just fine on a home
network like mine.
Once you get this all set up, any machines that try to PXE boot will receive the image and
boot to it. If you used the Ubuntu install image like I did, you'll be able to install Ubuntu
on any PXE-capable machine or even boot into a rescue shell! Just remember that if you can't
set a boot order (like on my Proliant), make sure to disable PXE boot in dnsmasq before rebooting.

View file

@ -0,0 +1,26 @@
date: 2009-05-11
tags:
- idd
title: Intelligent Drink Dispenser in the news!
---
So, I've been pretty quiet about my Intelligent Drink Dispenser project so far,
mostly because there was going to be a competition between myself and
[Clint Rutkas](http://betterthaneveryone.com/archive/2009/04/11/856.aspx) and
I didn't want to give the enemy any details.
---
Well, the cat's out of the bag: [http://news.mst.edu/2009/05/students_create_smart_way_to_m.html](http://news.mst.edu/2009/05/students_create_smart_way_to_m.html)
The communications department of my university, Missouri S&T (formerly UMR), gets
alerts from Google News whenever someone mentions the university's name and since
Clint did just that, his post showed up in their email. After talking with the
director of communications, he decided to run the story. I gotta say, it's a
pretty great way for my Senior Design class to wind down.
For those interested in the details of the Intelligent Drink Dispenser, stay
tuned! I'll be posting more information about it soon.
Side note: since the story got posted, my server's been chugging along to serve
up my website, almost maxing out the upload on my poor cable internet connection.
Here's the traffic graph from my router:
![Traffic graph](/media/img/idd_hardware/traffic_graph.png)

View file

@ -0,0 +1,59 @@
date: 2009-06-21
title: Intelligent Drink Dispenser Details
---
I said that I'd post details on my Intelligent Drink Dispenser project "soon". That
was over a month ago. Whoops. I blame my [new internship](http://nucoryamato.com/) for that.
For those of you not in the know, the Intelligent Drink Dispenser was my senior design
project at Missouri University of Science and Technology (which will forever in my heart
be University of Missouri-Rolla). It's basically a smart drink dispenser that's capable
of mixing, charging customers, telling the bar/restaurant owner when they need to refill
the machine, etc.
---
If you don't feel like reading the details and just want to look at the pretty pictures,
you can check out my [Picasa album](http://picasaweb.google.com/nick.pegg/IntelligentDrinkDispenser)
or watch the [Youtube video](http://www.youtube.com/watch?v=79H5oAS_Y6k).
The theoretical process is that the customer would go to order a drink, and since they're
a new customer, they'd have to be entered into the system by the person running the machine
(bartender, or waiting staff). They would have their name and credit card information taken,
and would then be assigned a drinking vessel based on the first drink they were wanting to
order. Multiple vessels could also be assigned to the same person. Once the customer is
setup and is ready to purchase their drink, they set the fluid vessel on the marked reader
area on the dispenser. The system then recognizes the vessel, who it belongs to, asks the
customer for the last four digits of their credit card, and then asks the customer which
drink they'd like to order. The customer then chooses what drink they'd like to have, the
system double-checks that the vessel is the right size, and pours it.
Security and privacy were among the major goals of the project. The only information stored
about the user is their name, a secure hash of their credit card number, the last four digits
of their card, and their drink order history.
The project itself can pretty much be split into two major components: hardware and software.
I would probably say the hardware is more interesting and posed more challenges for us. The
first thing is how the heck do you pour the fluid? If you take into consideration that we only
had a $300 budget for the whole shebang, it's not an easy task. The way that the professionals
do it is with Carbon Dioxide-powered pumps, which are controlled by electronic valves and supplied
by a tank and pressure regulator. Three pumps, valves, and the feed system would cost us well
over $300. Our original idea was to use 24 VDC sprinkler valves, but that idea failed because the
sprinkler valves by their nature require back-pressure to operate. We came up with the idea of using
windshield washer pumps made for cars. Since this was supposed to be a prototype, we didn't have to
worry about our components being food-grade. That, coupled with the fact that the pumps operate on
12 V DC and are relatively inexpensive ($15-25 a pop), made them the clear choice for our design.
The rest of the hardware design was fairly straightforward. We used an 8051 microcontroller to control
everything, an FTDI UM232 to handle the PC communications and a Parallax RFID reader to read the
tags that are on the bottom of the drinking vessels. The serial communication is pretty interesting
since the USB-to-serial device has only one serial port but two devices to talk to (the RFID reader,
and the PC). Our solution was to have the receive line go to the RFID reader (to the PC), and have
the transmit line go to the 8051 (from the PC). This meant that our 8051 couldn't talk back, so we
had to hope that things were working right. Additionally, Richard developed a simple serial language
for the 8051. If the 8051 received an ASCII 0 through 7, it would turn on that pin on the port we
were using. This could easily be modified to operate with all the ports on the 8051 to control 24
pumps, or even with some addressing logic to control a huge number of pumps.
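As a tiny illustration of that serial protocol (the device path here is an assumption; the real control code is the Django app described in the next post):
```
import time
import serial   # pyserial

port = serial.Serial("/dev/ttyUSB0", 2400)   # 2400 baud, matching the next post's code
port.write(b"3")     # ASCII '3' -> turn pump 3 on
time.sleep(5)        # let it pour for five seconds
port.write(b"\n")    # any non-valid character turns everything off
port.close()
```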
Below is our hardware schematic, which should give you some idea of how it's all connected.
![Hardware Design](/media/img/idd_hardware/FinalDesign.png)
In my next post, I'll be talking about the software design. Stay tuned!

View file

@ -0,0 +1,93 @@
date: 2009-06-21
title: Intelligent Drink Dispenser - Software Design
---
The other major half of the Intelligent Drink Dispenser project was the software side
of things. As with the hardware, there were some things we had to consider when
beginning the design of things.
---
What language should we use? Should this be a GUI
application or web application? If we go with a GUI application, what operating system
should we target?
We decided that since we only had a semester to get everything done, we needed a
framework to take care of the more nitty-gritty stuff. We eventually decided to go
with the Django web framework, which uses the Python programming language. Python is
a language we all knew and Jon and I had used Django in the past. Also, since our
'final product' would be highly based on a client-server model, it makes sense to
go with a web app since the server can be in a central location with many clients
connecting to it.
The most important thing to us was the fact that Django abstracts the database into models
for us. No having to mess around with raw SQL, we just run some queries and get our data
from the models as if they're just plain ol' objects. After some thought, we came up with
our models and their relationships:
![Database layout](/media/img/idd_software/db.png)
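The actual models were never published, but based on the `pourDrink()` code below they would have looked roughly like this (field types, and any names beyond those used in that code, are guesses):
```
from django.db import models

class Ingredient(models.Model):
    name = models.CharField(max_length=100)

class Drink(models.Model):
    name = models.CharField(max_length=100)

class DrinkComponent(models.Model):
    drink = models.ForeignKey(Drink)            # which drink this is part of
    ingredient = models.ForeignKey(Ingredient)
    amount = models.IntegerField()              # mL of the ingredient in the drink

class IngredientStock(models.Model):
    ingredient = models.ForeignKey(Ingredient)
    slot = models.IntegerField()                # which pump the container is hooked to
    amount = models.IntegerField()              # mL left in the container
```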
The development of the application was fairly straight-forward. It was just a standard
Django app after all. The interesting part was interfacing with the hardware.
If you remember from my post talking about the hardware side, we used an FTDI UM232
USB-to-serial converter. Lucky for us, there's a Linux kernel module which represents this
device as a virtual serial port, which made our jobs a whole lot easier since there's
existing serial libraries for Python (like pyserial). We ended up just writing a couple of
functions, one to read the RFID tag from the serial port, and one that takes in a Drink id
and then communicates to the microcontroller to pour it. The microcontroller's fairly dumb,
just taking a number and turning that pin on, or turning all of the pins off if a non-valid
character is sent, as you can see in the pourDrink function:
```
def pourDrink(id, serialPort=DEVICE):
    """Pours the drink given by the specified ID"""
    # Seconds per mL of the motors, found experimentally
    # These should be in the database or a config file instead of hardcoded...
    secondsPerML = []
    secondsPerML.append(9.858/350.0)    # Pump 0
    secondsPerML.append(9.858/350.0)    # Pump 1
    secondsPerML.append(6.385/210.0)    # Pump 2

    # Make sure there's enough of every ingredient before pouring anything
    components = DrinkComponent.objects.filter(drink=id)
    for c in components:
        stock = IngredientStock.objects.filter(ingredient=c.ingredient)
        total = 0
        for s in stock:
            total += s.amount
        if total < c.amount:
            raise UnableToPour("Not enough " + c.ingredient.name + " to pour drink!")

    port = serial.Serial(serialPort, 2400)
    if not port.isOpen():
        raise UnableToPour("Unable to open serial port!")

    for c in components:
        stock = IngredientStock.objects.filter(ingredient=c.ingredient)
        leftToPour = c.amount
        startTime = time.time()

        for s in stock:
            # Sending the slot number turns that pump on
            port.write(str(s.slot))
            if leftToPour < s.amount:
                # This container covers the rest of the pour
                time.sleep(secondsPerML[s.slot]*leftToPour)
                s.amount = s.amount - leftToPour
                s.save()
                break
            else:
                # Drain this container and carry on with the next one
                time.sleep(secondsPerML[s.slot]*s.amount)
                leftToPour = leftToPour - s.amount
                s.amount = 0
                s.save()

        print "Time to pour:" + str(time.time() - startTime)
        # Any non-valid character turns all of the pumps off
        port.write("\n")

    port.close()
```
Not exactly the most precise in terms of pouring accuracy, but it gets the job
done as a prototype. As for the rest of the code, there's not too much else that's
very interesting. I may release the code at some point in the future.

View file

@ -0,0 +1,17 @@
date: 2009-08-18
tags:
- linux
title: 'Stupid Linux Trick #5245'
---
Want to share what you're doing with another person logged into the same system? All you need is a FIFO, cat, and script.
On your session:
```
mkfifo foo
script -f foo
```
On the viewer's session:
```
cat foo
```
The viewer can then see everything that you're doing as if they're looking over your shoulder!

View file

@ -0,0 +1,35 @@
date: 2009-11-26
tags:
- servers
- linux
title: Moving Servers and Doing It Right
---
Well, I finally bit the bullet and got a Linode account. So far I'm pretty
happy with it. I figured that with the costs of power and bandwidth, I was
almost spending $20/month to run my old server on my own hardware.
Incidentally, the lowest-grade Linode VM costs that much and is enough to
suit my needs.
---
So now that I've been setting up a webserver from scratch again, I'm doing
it right this time. I'm setting up some monitoring software to notify me when
things go down, I'm no longer relying on myself for DNS (no more dynamic IPs!),
and I'm also branching out and trying an alternative webserver.
The webserver in question is [Cherokee](http://cherokee-project.com/) which claims
to use less memory and
perform better than Apache. It sure does use less memory, but as a down side
it doesn't have a native PHP module, so I'm required to use FastCGI for that
purpose. Right now, there's five php-cgi processes running each using about
25-30 MB. This wouldn't be a problem except that I've only got 360 MB of memory
to play with. On the plus side it's got a pretty sweet admin interface with wizards
to help you set up things like WordPress, Drupal, Ruby on Rails, Django, etc. and
you can set up some pretty complex rules for which files get hosted and how.
On the monitoring side of things, I'm using [Munin](http://munin.projects.linpro.no/) to monitor the various
[stats on the server](http://terminalunix.com/munin/), [Piwik](http://piwik.org/) for website visit statistics, and I plan on getting [Monit](http://mmonit.com/monit/) going
for service monitoring. It's a bit more important that I keep an eye on memory
and data transfer now that I'm limited on those. Also, if some process goes wild and
starts using crazy amounts of CPU power and memory, I'll be able to catch it.
Unfortunately when you move servers, you have to move everything that was running
on them. I'm still in that process, but it's been going pretty smoothly.

View file

@ -0,0 +1,41 @@
date: 2009-11-30
tags:
- programming
- networking
title: Fun With Graphs
---
I've always been utterly fascinated with graph theory, mostly with its
applications to networks. As an added bonus, they can be represented with
pretty pictures!
---
[![NYS Network](/media/img/fun_with_graphs/nys-network-thumb.png)](/media/img/fun_with_graphs/nys-network.png)
(click on image for full-sized version)
That graph represents the network behind Nucor-Yamato Steel and Nucor Castrip
Arkansas, sanitized of sensitive information of course. All of the nodes are
Cisco switches, the yellow boxes representing backbone switches (6500 series
to be exact). This graph is part of the network information system that I've
been working on during the majority of my internship at NYS and gets auto-generated
every day, along with more centralized graphs on a per-switch basis.
The way the system works is that a periodic Python script goes out to a list of
known switches and gathers CDP neighbor information as well as the MAC address
tables. Then Nmap scans are run every 6 hours to scan for hosts, gathering IP
addresses, hostnames, and MAC addresses. These MAC addresses are correlated with
the MAC tables from the switches to determine which hosts are connected to which
ports on what switches. The CDP neighbor information also gives which switches
are connected to each other, giving a full scope of how the network's connected.
The script which generates the graphs grabs all of that information out of the
database, uses NetworkX and pydot to create the graph, and then graphviz to render
it into a PNG image. The graph is pretty plain, though. The real version shows
switch names and IP addresses. Since the time between graph generation is so long,
any more useful information that I could throw onto the graph would quickly become
outdated. My grand scheme is to make a quickly-updated graph showing live stats like
switch load, link load, link types (fiber, twisted pair, wireless), downed switches,
etc. That way, I (or the network supervisor, I guess...) could have a big-screen TV
displaying the live health of the network.
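The real generation script isn't public, but the NetworkX/pydot step looks roughly like this sketch (the node and edge data here are made up; the real script reads them from the database):
```
import networkx as nx

# Made-up sample data -- the real script pulls these links out of the database
switch_links = [("core-1", "sw-101"), ("core-1", "sw-102"), ("sw-101", "sw-103")]
host_links = [("sw-101", "host-a"), ("sw-103", "host-b")]

graph = nx.Graph()
for a, b in switch_links:
    graph.add_edge(a, b, color="red")                    # switch-to-switch links
for sw, host in host_links:
    graph.add_node(host, shape="box", style="filled")    # hosts drawn as boxes
    graph.add_edge(sw, host, color="blue")               # switch-to-host links

# pydot writes the DOT file; graphviz then does the layout and rendering
nx.drawing.nx_pydot.write_dot(graph, "clean.dot")
```
The DOT file then gets rendered with the graphviz command shown below.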
I've been asked what parameters I set to get that graph to look that way. I didn't set anything special in code, it's all in the command line:
```
twopi -q -Ksfdp -Tpng -Goverlap="prism" -Eoverlap="prism" -Gsplines="true" -Gratio="compress" -oclean.png clean.dot
```
Really, I'm just a data visualization nerd looking to get a fix.

View file

@ -0,0 +1,24 @@
date: 2009-12-18
tags:
- programming
- networking
title: More Fun With Graphs
---
I figured I'd post the latest graph magic from my work at Nucor-Yamato Steel.
This is a graph of the entire network, switches AND hosts!
---
[![NYS Network Graph](/media/img/more_fun_with_graphs/nys-super-sanitized-thumb.png)](/media/img/more_fun_with_graphs/nys-super-sanitized.png)
<p class="centered">(click for full-sized version, warning: LARGE image)</p>
Here, the yellow boxes are center switches/routers, the green boxes are switches,
and the peach-colored nodes are hosts. Also, red lines are switch-to-switch
connections and blue lines are switch-to-host connections.
Another somewhat off-topic thing about these graphs is that the manager of IT at
Nucor-Yamato is interested in open-sourcing the code that manages all of the data
and generates these graphs, AND letting me use company time to work on and manage the
project. If anyone knows of any open-source project (or software that doesn't cost
an arm and a leg) that already does network discovery, data collection, and automated
mapping then please let me know! If I'm not going to be re-inventing the wheel, then
I'll probably be kicking the project off shortly after I start working full-time in
June 2010.

View file

@ -0,0 +1,21 @@
date: 2009-12-25
tags:
- cooking
title: Holiday Ham Recipe
---
I don't cook/bake all that often, but when I do I like to have some fun
with it. I came up with a ham recipe that turned out well, so I figured
I'd share it with everyone.
---
First, put your ham in a baking pan and cut the diamond pattern into it. Trust
Alton Brown when he says that utility knives work well for this. Then, mix the
following in a bowl
* 1 cup brown sugar
* 1/2 teaspoon ground mustard
* 4 ounces of fine bourbon. My preference is Maker's Mark.
Once you have that all mixed up, pack it gently onto the ham. Uniformity is nice,
but not required. Then cook the ham at 325 degrees Fahrenheit until the inside
meat temperature reaches 150 degrees (about 3 hours with a 20 pound ham). I
guarantee that it'll be tasty when it's done.

View file

@ -0,0 +1,32 @@
date: 2009-12-05
tags:
- programming
- beertraq
title: Beertraq Beta!
---
I'm glad to announce that my recent pet project, Beertraq, is now in a
(somewhat closed) beta stage! The basic idea is there and functioning,
but the extra functionality isn't done and it's far from polished.
Nonetheless, it's time to take her for a test drive!
---
> Note from future-Nick
>
> I got lazy and never finished Beertraq, and Untappd ended up releasing not long after. Though it took them _years_ to implement a barcode scanner in the app, which was going to be a core thing in Beertraq.
So what is Beertraq, you ask? It's a way for you to keep track of which
beers you've tasted, compare those with others, read beer reviews, and
most importantly discover new beers to try. I originally got the idea
from The Flying Saucer's UFO Club, where members work toward a goal of
drinking 200 different beers. Once they complete the task, they get their
name on a plate which gets put on the wall of the bar. The cool part about
the UFO Club is that it's all computerized, using a magstripe card to login
at a kiosk in the bar. You can also log in to their website to check your
progress and read reviews on there. I figured that if The Flying Saucer can
have that system for their bar, I could do the same for the world.
If you're interested in becoming a BeerTraq beta user, send an email to
beertraq (at) beertraq (dot) com with the email address you want to use for
your account. All I ask is that you give feedback by filling out issue requests
with bugs you find or suggestions you might have.

View file

@ -0,0 +1,20 @@
date: 2010-02-09
tags: []
title: Whiteboard
---
If you know me personally, you know that I can be pretty scatterbrained from
time to time. I've desperately needed a whiteboard to keep my thoughts
organized, and I've finally gotten one. I know, it's not the
most exciting thing in the world to talk about, but it should be a change
in the right direction for me.
---
[![whiteboard](/media/img/whiteboard1-thumb.jpg)](/media/img/whiteboard1.jpg)
But, wait. What's this? "Beertraq Road to Stable"?
[![whiteboard2-thumb](/media/img/whiteboard2-thumb.jpg)](/media/img/whiteboard2.jpg)
Part of why I want to get my thoughts organized is because I want a good
view of what all is left before I fully release Beertraq to the public.
With this list looming over my head (literally) it will hopefully get my
butt in gear to reach a stable release.

View file

@ -0,0 +1,60 @@
date: 2010-04-07
tags:
- car
title: New Car
---
It was the Friday before my Spring Break and quite possibly the worst thing in the world happened to me: I got rear-ended while at a stop.
Well, okay, it's not the worst thing in the world but to a college student who loves his car, it was pretty bad. The lady who hit me did quite a
number on my back end, and then pushed me into the car in front of me. Despite my airbags not going off, my car was totaled. The damage to the bumper,
right rear quarter panel, and unibody frame was just too much.
---
[![Damage to the Jetta][4]][3]
With the Jetta totaled I had three options: Buy it back and spend the $1000 left over on repairs/hookers, give it away to the insurance company and
spend <$2400 on a beater from Craigslist, or do the financially irresponsible thing and buy the new car I've been dreaming of, going into debt even
further than I already was.
Guess which option I chose:
[![2010 VW GTI][2]][1]
Yes, that's right. I bought myself a new MkVI GTI. Since I had some money saved up from my recent internship, will be starting work in a couple of months,
and was planning on buying one within a year I decided to go for it. I ended up paying $22,500 for it, which is a few hundred below dealer invoice for the
options that I wanted on it. I won't go into any car buying tips here, I'll save that for another post.
Overall, I'm very pleased with my purchase. Here's why:
**Performance:** It's got the VW 2.0L TSI turbocharged engine in it which puts out 200 hp. That's only a little more than my old VR6 Jetta, but the lighter
engine makes a difference. I'm still breaking the engine in, but from what I can tell from accidentally giving too much gas, it can put down some power.
**Handling:** With the sport-tuned suspension, this thing handles like a beast. Even taking corners quickly with smooth inputs, I have yet to break
the tires loose from the road. There's also a roundabout here in Rolla which I took at speeds that I'm not at liberty to discuss publicly since I'm sure that
the city police would frown upon that.
The car also has a pseudo-locking differential called XDS. This is part of the electronic stability control system. What XDS does is when it senses that one
wheel is getting too much power compared to the other, it applies the brakes to that first wheel to slow it down, giving more power to the second wheel. I
haven't noticed this kick in yet, probably because I'm not giving it full throttle due to the break-in period.
**Electronics:** Included standard on the car is a touchscreen radio with Volkswagen's MDI interface. The MDI interface provides a port in the arm rest where
different devices can be plugged in, such as an iPod or USB drive. With the addition of an SD card slot right in the radio, with support up to 32 GB, I no
longer need a car computer. The software can be a little flaky at times and could use certain features (like creating an on-the-go playlist), but it suits my needs fine.
Also included is the MFI, which is essentially a trip computer. It tells you trip time, distance, fuel consumption, range left on the tank of gas, etc. The cool
part about this is that in the settings menu, you can adjust some convenience settings which normally could only be done using a VAG-COM, such as which doors
unlock when you use the keyfob, rolling windows down with the key, etc.
**Practicality:** The folks on Top Gear always talk about the practicality of a car whenever they review something the average person could buy. Since I'm not
going through a mid-life crisis, something practical is what I need. The GTI fits this bill nicely.
There's plenty of room for passengers in the back (a change VW made starting with the MkV) along with all of the creature comforts you would expect from sitting
in the front. There's also a good amount of space in the hatch area, and the back seats fold down in case I need to haul anything big. I was able to fit a whole
recliner in the back with room to spare, for example.
All in all, I really love this car. It's quick, fun to drive, can get good gas mileage, and it's still useful for when I need it.
[1]:/media/img/GTI.jpg
[2]:/media/img/GTI-thumb.jpg
[3]:/media/img/jetta.jpg
[4]:/media/img/jetta-thumb.jpg

View file

@ -0,0 +1,99 @@
date: 2010-06-19
tags:
- linux
- htpc
title: LIRC and XBMC
---
Those of you who know me fairly well know that I'm a total HTPC
geek. It's to the point where I outright refuse to subscribe to cable television
or even hook up an antenna to my TV. This geekery combined with my affinity
for Linux leads me to running XBMC on Linux on my little home theater machine.
It's been a pretty smooth experience with the exception of getting my remote to
work with it. If you're struggling with it too, hopefully my tales will help you
get it going.
---
So, here's my setup. I've got an Antec Fusion 430 (a silver one
with the VFD), a Logitech Harmony remote, and Ubuntu 10.04. The Antec case is
pretty cool since it looks like it belongs
in my home theater setup, and it
even includes an IR receiver built right into the case! Cool! It should accept
signals from any IR remote, right?
*Wrong*
In the hardware developer's infinite
wisdom, they made it only work with Windows Media Center remotes instead of just
making it a dumb device that passes data along. They actually put
**more** effort
into designing the thing just to make my life harder. Augh! Luckily, when I got
this case my [then-roommate](http://benmurrell.com/) had an Xbox 360 remote which
magically
worked! So I eventually got a Logitech Harmony remote and told it
that my HTPC was actually an Xbox 360. Step one complete.
The next step was
to get LIRC to accept the remote. This is a bit tricky, but luckily I had backed
up my configs. If you're starting out from scratch, here's how to do it in an
Ubuntu system.
First of all, you need to install LIRC:
```
sudo aptitude install lirc
```
During the configuration phase of the install, it'll ask you for
what kind of device you have. I selected *Soundgraph iMON PAD IR/VFD*, which uses
the lirc_imon driver. Unfortunately, since I have the silver Antec Fusion
430 I have the VFD and not the LCD display, which has a slightly different IR
receiver. You have to specify the display_type=1 when the module is loaded.
You can do this by adding a file called lirc-imon.conf to /etc/modprobe.d/ with
[these contents](http://nickpegg.com/stuff/lirc/modprobe.d-lirc-imon.conf).
If you don't want to restart, you'll have to throw commands at the system to reload
the module with the correct options.
```
sudo service lirc stop
sudo rmmod lirc_imon
sudo modprobe lirc_imon display_type=1
```
While you have LIRC stopped, you might as well double-check that the IR receiver is
actually receiving data with the following command (hit Ctrl-C to stop):
```
sudo cat /dev/lirc0
```
You should see a bunch of garbage get printed to
the terminal when you press buttons on your remote. If you don't, then either
you have the wrong type of remote or you don't need the
display_type argument to modprobe.
Next, you need to setup the button config for your remote. Since
I'm using a Logitech Harmony remote to emulate an Xbox 360 remote, I used the
irrecord command to generate [my config](http://nickpegg.com/stuff/lirc/xbox360.conf).
Luckily there's [plenty of people out there](http://www.google.com/search?&q=lirc++microsoft+remote+config)
who have already done this for you for a large amount of remotes
([here's a good list](http://lirc.sourceforge.net/remotes/), for example).
Once you have the remote config file downloaded or created,
add an include to your [lircd.conf](http://nickpegg.com/stuff/lirc/lircd.conf)
for it, fire up LIRC, and test it out with the irw command.
```
sudo service lirc start
irw
```
When you press buttons, you should see the button commands
scroll by in the terminal.
*\*whew\** Almost there. Still with me? Good, because
we only have one thing left, the XBMC Lircmap.xml file! I'll spare you the nitty-gritty
of it and just give you
[mine](http://nickpegg.com/stuff/lirc/Lircmap.xml) (right-click
and save it). If you feel like making your own or need to tweak mine a bit, the
XBMC wiki has some
[good information](http://wiki.xbmc.org/index.php?title=Lirc_and_Lircmap.xml)
on how to do it.
### For the impatient, here's all of my files associated with getting this to work:
[lircd.conf](http://nickpegg.com/stuff/lirc/lircd.conf) (LIRC)
[hardware.conf](http://nickpegg.com/stuff/lirc/hardware.conf) (LIRC)
[xbox360.conf](http://nickpegg.com/stuff/lirc/xbox360.conf) (remote)
[Lircmap.xml](http://nickpegg.com/stuff/lirc/Lircmap.xml) (XBMC)

View file

@ -0,0 +1,11 @@
date: 2010-06-23
tags: []
title: Verifying Google Voice
---
I've been struggling getting my work phone number verified with Google Voice. The way that number verification works is that Google Voice calls you and asks
you to enter the two-digit code that's displayed on the website. But for some reason when I get a call on my outside line and I press a number button, it
doesn't send that DTMF tone. Instead it tries to place another call.
My solution? I found a DTMF tone generator online and I played the code DTMF tones through my computer speakers.
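If you'd rather roll your own than hunt for a generator online, here's a rough Python sketch that writes the two digits out as a WAV file using only the standard library (the code `42` is just an example):
```
import math
import struct
import wave

DTMF = {"1": (697, 1209), "2": (697, 1336), "3": (697, 1477),
        "4": (770, 1209), "5": (770, 1336), "6": (770, 1477),
        "7": (852, 1209), "8": (852, 1336), "9": (852, 1477),
        "*": (941, 1209), "0": (941, 1336), "#": (941, 1477)}

RATE = 8000

def tone(digit, seconds=0.4):
    """Mix the low and high DTMF frequencies for one digit."""
    low, high = DTMF[digit]
    return [0.5 * (math.sin(2 * math.pi * low * t / RATE) +
                   math.sin(2 * math.pi * high * t / RATE))
            for t in range(int(seconds * RATE))]

samples = []
for digit in "42":                        # the two-digit verification code
    samples += tone(digit) + [0.0] * 800  # 100 ms of silence between digits

with wave.open("code.wav", "w") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(RATE)
    w.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))
```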
It's nice to see that these old phone tricks still work.

View file

@ -0,0 +1,15 @@
date: 2010-10-22
tags:
- programming
title: New Website (Again!)
---
Yes, it's that time of year. Time to change my website design
_yet again_!
I got tired of dealing with Wordpress and wrote my own static page generator
in Python, dubbed [Posty](http://github.com/nickpegg/posty). I know, I'm
kind of [re-inventing the wheel](http://ringce.com/hyde) here and there
are some [really nice solutions](http://github.com/mojombo/jekyll/wiki) to
this problem, but why not just make something for making's sake?
Plus, I just feel _cool_ using Markdown and YAML to update my website.

View file

@ -0,0 +1,12 @@
date: 2011-02-20
tags:
- beer
title: Kegerator
---
After getting my tax refund and being sick and tired of bottling my homebrew
beer, I've built myself a kegerator!
[![Kegerator](http://farm6.static.flickr.com/5135/5462665976_d11faea2aa.jpg)](http://www.flickr.com/photos/nickpegg/sets/72157625971333921/)
Details are over at my [project page](/projects/kegerator/), or if you'd
rather just gawk at some pictures, check out the [Flickr set](http://www.flickr.com/photos/nickpegg/sets/72157625971333921/)!

View file

@ -0,0 +1,78 @@
date: 2011-11-25
tags:
- git
- games
title: Syncing Minecraft Saves with Git
---
I've been playing Minecraft for a while and after doing some travelling, I've
run into the issue where I'd like to synchronize my Minecraft saves across
computers.
---
I already use git for software version control, so why not shoehorn Minecraft
into it? Not only would I get easy synchronization, I would also get version
control so if I seriously mung something up, I can revert back to a previous
save! Here's how I did it.
First, I had to make sure git was installed on all of my machines. Luckily
on Linux git is usually provided in the package repository (git-core), but
since my desktop also runs Windows (for gaming), I use
[msysgit](http://code.google.com/p/msysgit/). For example,
on Debian/Ubuntu all you need to do is:
```
sudo apt-get install git-core
```
Once git was installed, I decided to go with a centralized approach since I
want one 'official' spot where I can push and pull my Minecraft saves to.
I already have a server
from the wonderful folks at [Linode](http://linode.com), so I just
initialized a bare (centralized) repository on there:
```
cd /path/to/repos/minecraft
git init --bare
```
Then, since I already have Minecraft installed on my desktop with quite a few
saves, I had to clone the central repository, add my saves, commit, and then
push back to the central repository.
```
cd /home/nick/.minecraft/
git clone nick@nickpegg.com:/path/to/repos/minecraft temp
mv temp/.git ./
```
Since you can't clone a repository into a non-empty folder, I had to clone it
to a temporary folder and then copy the .git folder from there into my
.minecraft folder. Now that my local repository was set up, I added the files
I wanted to synchronize.
```
git add saves screenshots stats texturepacks options.txt servers.dat
git commit -m 'Initial commit'
```
Once I had the files committed, all I needed to do was push them up to the
central repository on my server.
```
git push
```
And now I have my Minecraft files in a central spot! Now every time I'm done
playing a bit, all I have to do to sync my files up is:
```
git commit -a -m 'Played a bit'
git push
```
Now, on other machines, all I need to do is clone once, ``git pull``
before playing,
and then commit and push when I'm done playing!
Easy peasy. Of course, you can do more fancy things with git since it's a
full-blown version control system. If you feel inclined to play with those
features, go read some [documentation](http://gitref.org/).

View file

@ -0,0 +1,132 @@
date: 2014-08-19
tags:
- linux
- networking
title: Building My Own Home Router, Part 1
---
This is the first of a series of blog posts on building my own home router from scratch using Debian. My hopes are that by sharing my experiences, it can help others in this endeavor.
---
I've been kicking around the idea of building my own router for a while now, mostly due to the fact that my trusty [WRT54GL][1] is greatly limited by what it can do with its measly 4 MB of flash and weak CPU. After months of casually searching and trying (unsuccessfully) to re-purpose some old hardware, I finally found what I've been looking for: a cheap-ish, low-power, rackmount server with more than one NIC.
# The Hardware
* [Intel D2500CCE Mini-ITX motherboard in a rackmount case][2]
* [Intel 7260-ac Mini-PCIe card with antenna][3] - there are some issues with this, which I'll cover in a later post
I can't believe that I didn't think to check the various Mini-ITX resellers for something like this, because this is almost exactly what I've always been looking for. I got a 2-NIC board since I'm cheap and already have a gigabit switch, but you can easily find boards with more ports if you don't mind shelling out the extra cash.
Once the equipment got to my apartment, I slapped in some old laptop RAM and a spare 2.5" drive and got Debian installed.
# 802.3 and IPv4
The first order of business was to replicate the core functionality of my old router: IPv4 routing and Ethernet connectivity. The plan was to use eth0 as my public interface (plugged into my cable modem) and eth1 as my internal interface. Before I even plugged in anything, I wrote a basic `/etc/network/interfaces` file.
```
auto lo
iface lo inet loopback
# outside
allow-hotplug eth0
iface eth0 inet dhcp
hwaddress ether AA:BB:CC:DD:EE:FF
dns-search home.nickpegg.com nickpegg.com
dns-nameservers 8.8.8.8 8.8.4.4
# inside
auto eth1
iface eth1 inet static
address 10.0.0.1
netmask 255.255.255.0
```
Note the `hwaddress ether` line there. Since my ISP (whose name shall not be spoken (not Voldemort, but [just as evil][4])) locks me to a single MAC address, my new router had to spoof my old router's MAC address, which was spoofed from my laptop that I originally set the connection up with. If you seemingly can't get a DHCP lease on your public interface, this is likely the problem.
Now that I had my interfaces configured and rarin' to go, I had to make sure that my ip{,6}tables rules were in order before plugging in.
#### rules.v4
```
*nat
:PREROUTING ACCEPT [2:125]
:INPUT ACCEPT [1:65]
:OUTPUT ACCEPT [4:260]
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -o eth0 -j MASQUERADE
COMMIT
*filter
:INPUT DROP [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -i eth1 -j ACCEPT
-A INPUT -i eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -i eth0 -p icmp -m icmp --icmp-type 8 -j ACCEPT
-A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i eth1 -o eth0 -j ACCEPT
COMMIT
```
#### rules.v6
```
*filter
:INPUT DROP [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -i eth1 -j ACCEPT
COMMIT
```
The above rules are my output from `iptables-save` and `ip6tables-save`, and are fully compatible with the respective restore programs. Debian even has a nice package called `iptables-persistent` which will load these rules on boot if you stash them as `rules.v4` and `rules.v6` in `/etc/iptables`!
To better understand these rules, it helps to have the [Netfilter packet flow diagram][5] in front of you. There are some simple goals with these:
## IPv4 Filters
* Allow traffic coming from loopback or the inside interface
* Only allow traffic outside->inside if it pertains to an existing in->out connection
* **Secret sauce** - that MASQUERADE line enables NAT, so my private addresses can hide behind the one public address that my ISP gives me
## IPv6 Filters
* Just drop everything for now unless it comes from the inside and is destined for the router itself
Of course, before I could have a fully-functional internet connection, I had to get DNS set up. And I guess a DHCP server would be nice to have before my leases all expire and everything drops its IP.
Luckily, there's a software package which is geared towards these very tasks: `dnsmasq`! Getting it running was as easy as running `apt-get install dnsmasq` and `service dnsmasq start`, which was enough to get DNS working. To get DHCP working, I created two config files in `/etc/dnsmasq.d/`:
#### dhcp.conf
```
# My DHCP configs
dhcp-range=10.0.0.110,10.0.0.250,12h
# options for DNS
dhcp-option=option:domain-search,home.nickpegg.com,nickpegg.com
# Static DHCP entries
dhcp-host=00:24:1D:7D:5F:C3,10.0.0.10,host1
dhcp-host=88:30:8A:22:2E:74,10.0.0.11,host2
dhcp-host=00:23:54:1A:16:2D,10.0.0.12,host3
```
#### interface.conf
```
# Only allow DNS/DHCP requests from the inside interface
interface=eth1
```
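With those files in place, a restart and a quick query against the router are enough to confirm that dnsmasq is answering. A sketch (`dig` comes from the dnsutils package, so substitute `host` or `nslookup` if you don't have it):
```
sudo service dnsmasq restart
# Ask the router directly; any public name will do
dig @10.0.0.1 nickpegg.com +short
```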
With all of that, I had a functioning IPv4 router and could do important things again, like idle on IRC and browse Reddit.
This is just the beginning though! You should go check out [part 2][6] of this series where I get 802.11 working.
[1]: http://en.wikipedia.org/wiki/Wrt54gl
[2]: http://mitxpc.com/proddetail.asp?prod=ER1UD2500DLM02
[3]: http://mitxpc.com/proddetail.asp?prod=INTWIFI7260AC
[4]: http://www.comcast.com/
[5]: http://upload.wikimedia.org/wikipedia/commons/3/37/Netfilter-packet-flow.svg
[6]: http://nickpegg.com/2014/8/building_my_own_home_router,_part_2_-_802.11.html

date: 2014-08-22
tags:
- linux
- networking
title: Building My Own Home Router, Part 2 - 802.11
---
In my [last post](http://nickpegg.com/2014/8/building_my_own_home_router,_part_1.html) I talked about getting my home router up and forwarding packets from nothing and getting my computers connected via Ethernet. The next step is to get 802.11 (WiFi) working.
---
# Hardware Caveats
In my last post, I mentioned that I got the Intel 7260-ac card, which I've had some problems with. Intel decided to encode in the EEPROM that the card may only use channels that keep it compliant with *every* country's laws, and the firmware and Linux driver dutifully read this information and comply. This means the card can only work in AP mode on channels 1-11 and will NOT work in the 5 GHz band at all, so you're stuck on the noisy 2.4 GHz band and can't even use 802.11ac (since it requires 5 GHz).
I've seen various blog and forum posts where the OpenWrt folks have gotten around this on cards with Atheros chipsets, since there the restriction is just a check in the driver. In the small amount of kernel driver hacking I've done, however, I've been unsuccessful.
**Long story short**: watch which card you pick up and make sure other people have had luck making it do what you want it to, preferably without having to patch kernel drivers.
# Network Changes
Since you're turning your router into a wireless access point, you have two options to connect clients to your network: split them off into their own network segment in a different subnet, or bridge the wireless interface in with your inside network and let wireless users mingle with your wired users. I chose the latter, since it was simpler.
The basic idea is that you create a bridge device (`br0`) and bridge in your `eth1` and `wlan0` interfaces. My updated config shows the changes you need to make to `/etc/network/interfaces`:
```
auto lo
iface lo inet loopback
# outside
allow-hotplug eth0
iface eth0 inet dhcp
hwaddress ether AA:BB:CC:DD:EE:FF
dns-search home.nickpegg.com nickpegg.com
dns-nameservers 8.8.8.8 8.8.4.4
# inside
iface eth1 inet manual
iface wlan0 inet manual
auto br0
iface br0 inet static
address 10.0.0.1
netmask 255.255.255.0
bridge_ports eth1
```
Note that `br0` has pretty much taken the place of `eth1` in the config. Also, we don't bridge in `wlan0` since our access point daemon will take care of that.
Along with this change in `/etc/network/interfaces`, don't forget to also change your dnsmasq settings so that it listens on `br0` instead of `eth1`.
Install the `bridge-utils` package if you haven't already and restart networking. Congrats, your router is now a one-port network switch!
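For what it's worth, the dnsmasq tweak and the bridge sanity check boil down to something like this (a sketch, assuming the `interface.conf` file from part 1 is still in place):
```
# Point dnsmasq at the bridge instead of the bare interface
sudo sed -i 's/^interface=eth1/interface=br0/' /etc/dnsmasq.d/interface.conf
sudo service dnsmasq restart
# Restart networking, then make sure eth1 actually joined the bridge
sudo service networking restart
brctl show
```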
# Firewall Changes
Since our inside interface is now `br0`, we have to tweak our firewall rules a bit.
#### rules.v4
```
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -o eth0 -j MASQUERADE
COMMIT
*filter
:INPUT DROP [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -i eth1 -j ACCEPT
-A INPUT -i br0 -j ACCEPT
-A INPUT -i eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -i eth0 -p icmp -m icmp --icmp-type 8 -j ACCEPT
-A FORWARD -i eth0 -o br0 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i br0 -o eth0 -j ACCEPT
COMMIT
```
#### rules.v6
```
*filter
:INPUT DROP [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -i eth1 -j ACCEPT
-A INPUT -i br0 -j ACCEPT
COMMIT
```
# HostAPd Config
Thanks to [`hostapd`][2], getting your wireless card running in AP mode is a cinch! It's just a package install away in most cases, and the configuration isn't *too* terrible. Below is my config, annotated to make it easier to understand.
```
# Set up some logging. VERY useful to see why things aren't working.
logger_syslog=-1
logger_syslog_level=2
logger_stdout=-1
logger_stdout_level=2
# Which interface to use and which bridge to join
interface=wlan0
bridge=br0
# Use this driver for AP stuff. This corresponds to the mac80211 driver
# which most newer cards support.
driver=nl80211
# 802.11 mode and channel, pretty self-explanatory
hw_mode=g
country_code=US
channel=11
# Set and broadcast the SSID. Stupid double-negatives...
ssid=test_net
ignore_broadcast_ssid=0
# 802.11N stuff - Try 40 MHz channels, fall back to 20 MHz
ieee80211n=1
ht_capab=[HT40-][SHORT-GI-20][SHORT-GI-40]
# WPA Authentication
# Open System authentication only, no WEP
auth_algs=1
# WPA2 only, set to 3 for WPA+WPA2
wpa=2
# Hah! Like I'd put this in a gist.
wpa_passphrase=xxxxxxxxxxx
wpa_key_mgmt=WPA-PSK
# Cipher for WPA2 (AES-CCMP in this case)
rsn_pairwise=CCMP
# Don't use a MAC ACL
macaddr_acl=0
```
The things to watch out for are the settings that are ORs of bits, like `auth_algs` and `wpa`. When setting up your own AP, it's a good idea to check out the [example config][3] to see what each setting does and what the defaults are.
My config doesn't include any 5 GHz settings, so you'll have to figure those out on your own if you're lucky enough to have a card that supports it. If I get mine working, I'll make another post with those settings.
Once you're done with configuration, fire up `hostapd` with `service hostapd start`. If everything was successful, you should see `wlan0` bridged in (use the `brctl show` command to check) and the network should be joinable by one of your wireless devices. If you don't see that, you'll want to check `/var/log/syslog` to see what hostapd is complaining about.
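On Debian there are a couple of extra knobs worth knowing about here: the init script won't launch hostapd until `/etc/default/hostapd` points at your config, and running the daemon in the foreground with debugging turned up is usually the fastest way to see why it's unhappy. A sketch, assuming your config lives at the conventional `/etc/hostapd/hostapd.conf`:
```
sudo apt-get install hostapd
# Tell Debian's init script which config file to use
echo 'DAEMON_CONF="/etc/hostapd/hostapd.conf"' | sudo tee -a /etc/default/hostapd
# Or run it in the foreground with verbose debugging while you iterate
sudo hostapd -dd /etc/hostapd/hostapd.conf
```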
And there you have it, a router with wireless! Next up is IPv6 support, so stay tuned for part 3.
[1]: http://nickpegg.com/2014/8/building_my_own_home_router,_part_1.html
[2]: http://wireless.kernel.org/en/users/Documentation/hostapd
[3]: http://w1.fi/gitweb/gitweb.cgi?p=hostap.git;a=blob_plain;f=hostapd/hostapd.conf

date: 2015-08-16
tags:
- linux
- networking
- ipv6
title: Building My Own Home Router - IPv6 Tunnel
---
Continuing on my adventure of running my own self-built router at home, I decided to get IPv6 running on my home network. As of writing this blog post, my ISP doesn't do native IPv6 yet so I decided to go with Hurricane Electric's [IPv6 Tunnel Broker](https://tunnelbroker.net) service, which provides you with an IPv6-in-IPv4 tunnel.
---
# Creating the tunnel
The first step is going to [HE's Tunnel Broker website][1] and creating a regular tunnel. Set your IPv4 endpoint to your router's public IP address and be sure to pick a tunnel server close to you.
Once the tunnel's been created, you'll want to grab the following information:
* Server IPv4 address
* Server IPv6 Address (this will be your route to the outside world)
* Client IPv6 Address (this will be your router's address)
* Routed /64 (this is the block of IPv6 addresses for your network)
If you run multiple subnets, you can create a /48 block, but for my uses I just need a single subnet (/64 block), so that's what I'll be covering.
# Updating the firewall
Before we even fire up the tunnel, we want to make sure it'll be secure when it comes up. This is a little different from my [first post][2] which covered IPv4 since we won't be using a NAT, but instead directly routing packets.
The goals of these firewall rules will be to:
* Allow traffic related to already-established outbound connections
* Allow ICMPv6 Destination Unreachable
* Allow ICMPv6 Echo Request
* Allow ICMPv6 Neighbor Solicitation and Advertisement on the local network (interface br0)
* Allow all traffic coming from the local network (interface br0) out to the world (interface he-ipv6)
* Drop everything else
Since we are doing regular routing, all rules on the INPUT chain will manage traffic directed to the router itself and all rules on the FORWARD chain will manage routed traffic (between the local network and the internet).
Here's what my `/etc/iptables/rules.v6` file looks like with all these rules applied. Note that the default policy on `INPUT` and `FORWARD` are `DROP`.
```
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p ipv6-icmp -m icmp6 --icmpv6-type echo-request -j ACCEPT
-A INPUT -p ipv6-icmp -m icmp6 --icmpv6-type destination-unreachable -j ACCEPT
-A INPUT -i br0 -p ipv6-icmp -m icmp6 --icmpv6-type neighbour-solicitation -j ACCEPT
-A INPUT -i br0 -p ipv6-icmp -m icmp6 --icmpv6-type neighbour-advertisement -j ACCEPT
-A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -p ipv6-icmp -m icmp6 --icmpv6-type echo-request -j ACCEPT
-A FORWARD -p ipv6-icmp -m icmp6 --icmpv6-type destination-unreachable -j ACCEPT
-A FORWARD -i br0 -o he-ipv6 -j ACCEPT
COMMIT
```
# Updating the `interfaces` file
Once you have the firewall rules in place, it's time to update the `/etc/network/interfaces` file for the tunnel. There are two additions that we need to make: An IPv6 address for your internal network's interface and a virtual interface for the tunnel.
This is where you'll use the details you got from the Tunnel Broker website. Most of the values are used verbatim; the one decision you have to make is which address from the Routed /64 block to give your router's internal interface. The first usable one is convenient, so if your block is `2001:470:6661:7274::/64` then your router's address will be `2001:470:6661:7274::1` (`2001:470:6661:7274::` is technically the very first address, but it's reserved as the subnet-router anycast address, and using `::1` also feels familiar coming from IPv4).
Here's what my `/etc/network/interfaces` file looks like after those changes, with the IPv6 additions at the end. Be sure to replace the variables in the file with the values you got from the Tunnel Broker website.
```
# The loopback network interface
auto lo
iface lo inet loopback
# outside
allow-hotplug eth0
iface eth0 inet dhcp
hwaddress ether AA:BB:CC:DD:EE:FF
dns-search home.nickpegg.com nickpegg.com
dns-nameservers 8.8.8.8 8.8.4.4
iface eth0 inet6 auto
iface eth1 inet manual
iface wlan0 inet manual
auto br0
iface br0 inet static
address 10.0.0.1
netmask 255.255.255.0
bridge_ports eth1
iface br0 inet6 static
address 2001:470:6661:7274::1
netmask 64
auto he-ipv6
iface he-ipv6 inet6 v4tunnel
address $CLIENT_IPV6_ADDRESS
netmask 64
endpoint $SERVER_IPV4_ADDRESS
gateway $SERVER_IPV6_ADDRESS
```
Once you make these changes you'll be able to run these commands to start/restart your interfaces to fire up the tunnel:
* `sudo ifdown br0; sudo ifup br0`
* `sudo ifup he-ipv6`
Note that if you're SSH'd into your server, you should run the first command in screen because you're going to lose connectivity for a few seconds. Once your tunnel's up, you should be able to `ping6 ipv6.google.com` and get a response.
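As on the IPv4 side, routed traffic won't actually flow (and radvd will complain later on) unless the kernel is forwarding IPv6 packets, which Debian doesn't do out of the box. A minimal sketch (the sysctl.d file name is arbitrary):
```
sudo sysctl -w net.ipv6.conf.all.forwarding=1
echo 'net.ipv6.conf.all.forwarding=1' | sudo tee /etc/sysctl.d/99-ipv6-router.conf
```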
# Setting up Router Advertisements
Getting the router talking IPv6 is only the first half. Now we need to have the devices on our local network pick up IPv6 addresses using a mechanism called [Router Advertisement][3]. Fortunately there's a Linux package called `radvd` which is incredibly easy to set up.
Here's what a basic `/etc/radvd.conf` will look like. Again, be sure to replace `$ROUTED_64` with the block you were assigned via the Tunnel Broker website.
```
interface br0
{
AdvSendAdvert on;
prefix $ROUTED_64
{
};
};
```
Yeah, that's it. Start the `radvd` service and everything should get an IPv6 address. From a machine that's not your router, you can `ping ipv6.google.com` to verify that connectivity's working.
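If a client doesn't seem to pick up an address right away, checking for a global-scope address on its interface is a quick sanity check. A sketch, run on the client (the interface name will vary):
```
ip -6 addr show dev eth0 | grep 'scope global'
ping6 -c 3 ipv6.google.com
```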
Now that that's all done, you have a router that talks IPv6 with the world, and you can feel a little bit better about the whole [IPv4 exhaustion][4] issue.
[1]: https://tunnelbroker.net/
[2]: https://nickpegg.com/2014/8/building_my_own_home_router,_part_1.html
[3]: https://tools.ietf.org/html/rfc4861#section-6
[4]: https://en.wikipedia.org/wiki/IPv4_address_exhaustion

date: 2017-06-25
tags:
- bike
- touring
title: North Bay Area Bike Tour Log
---
In April of this year, I attempted to bike the [Katy Trail](https://mostateparks.com/park/katy-trail-state-park), but I forgot how unpredictable the weather could be in Missouri that time of year and got rained out. While visiting with my parents and waiting for my flight back to California, I decided to do a short bike tour in the San Francisco Bay Area, where at the very least I wouldn't have to deal with rain.
---
I set out on this trip on July 21 and returned to my home in Berkeley on July 24. This is the log I kept while on the road, transcribed with minimal editing from the notebook I kept with me.
In addition to my notes, I also took a bunch of pictures which you can find on [Flickr][9].
# The Plan
* Day 1 ([Google maps][2])
* Start in Berkeley, ride to Oakland
* Take the ferry from Oakland to SF
* Ride up to Samuel P Taylor State Park and camp
* Day 2 ([Google maps][3])
* Ride over to CA Highway 1 (the Pacific Coast Highway)
* Follow Highway 1 to Bodega Dunes State Park and camp
* Day 3 ([Google maps][4])
* Ride east to Santa Rosa, get lunch
* Make the climb over the mountains to Calistoga
* Camp at Bothe-Napa State Park
* Day 4 ([Google maps][5])
* Ride south to Napa, get lunch there
* Ride all the way back home
# Day 1
(This is abridged since I'm writing it after the fact.)
Rolled out @ 08:00, hit up Suzette Crepe Cafe and then Safeway for food. Got some tortillas, bananas, and Nutella. That combo makes for a killer snack!
Caught the 10:15 ferry from Oakland to SF. More crowded than expected, probably because it's summer and kids are out of school. Was on the Gemini, which was a fast ride (faster than on weekends).
SF was uneventful, normal tourists on their way to the bridge. The west path was closed, so it was more of a shitshow than usual getting across (with not only biking tourists, but pedestrians too). Plus it was foggy and freezing. It felt about 20 degrees warmer after the bridge. The east path meant I came down Alexander Drive, which saved the climb up the hill before Sausalito.
Got lunch at Sausalito Gourmet Deli. Had a _really_ nice backyard seating area. Ended up [taking El Camino Alto][6] up as a challenge; it wasn't too bad and dumped me right into Corte Madera.
Took (bicycle) Route 20 along the creek to Fairfax, was really nice.
Stopped on a park bench at Fairfax to rest in the mid-afternoon.
Fucking hot.
And that fucking climb out of Fairfax! Holy crap!! It was tough in the heat. While I was stopped just before the crest, a mountain biker stopped to make sure I was good. Really lifted my spirits. He mentioned a shady way parallel to Sir Francis Drake Blvd. I should have followed his advice!
Got to Sam P. Taylor alright after that.
At camp, I met:
* __Quinn__ - He hitched a ride via Craigslist from North Carolina to Portland, bought a bike, and was riding down the Pacific Coast Highway to SF
* __Don__ - Last lived in St. Louis (Richmond Heights) but used to live in the Bay Area. 70 years old! Chill dude, was drinking Old English when I showed up. Used to work in the fish business.
* __French guy__ - was with his two sons, from Mill Valley, works as a software engineer for Salesforce.
# Day 2
Departed Sam P Taylor, got the anxious feeling of venturing somewhere new (since I had been to Sam P Taylor before and knew my way there). Took the bike path to the end, then a serious climb up Sir Francis Drake with _zero_ shoulder and a 55 mph speed limit. Yikes! The descent into Olema was gorgeous though!
Highway 1 was smooth going, shady, and cool. Nice change. Scenery of Tomales Bay was nice, got breakfast at Bovine Bakery in Pt. Reyes Station. Giant scone!
Biking right along the coast was really nice. Found a place on the side of the road that was shady. Took a quick rest there, almost fell asleep!
Got to Tomales, got some early lunch (lamb sandwich with feta, onions, pepperoncini, and horseradish sauce, yum!). As usual, had a giant climb after the town. Bunch of ups and downs that I'll probably have to re-visit tomorrow.
Headwind from Tomales. :( I'd rather have a headwind than that oppressive heat! No shoulder on Highway 1 in many places, even on tough climbs.
Gentle coast down toward the coast, which gave me a second wind! Grass got greener, could smell the ocean breeze, it was lovely.
Decent climbs into Bodega Bay, again no shoulder, but at least the speed limit was 25 mph.
The guys I met at Sam P Taylor warned me about the sand at Bodega Dunes State Park, they weren't kidding. Nice big hiker/biker site. Lots of people wrote stuff on the food locker there, mostly names, dates, and their route (and some words of wisdom, like "count the smiles, not the miles"). Got dinner at Spud Point Crab Co. Crab roll and clam chowder. Good stuff.
Beach was a short ride away. Nice but kinda hot. Water was freezing as expected. Chilled there for about an hour before coming back to camp.
No other cyclists. At least I still have Caliban's War to finish reading.
Late, like 8 pm or so a couple showed up who were from Quebec City. They were going from Portland to SF, doing about 70 miles a day! They're getting married in the fall. That's about the extent of the conversation I was able to get out of them, they just hung out in their tent as soon as they got that set up.
# Day 3
Writing this at Bothe-Napa State Park. Hot but fun day. About 2500 feet of climbing, but nothing demoralizing. Maybe it gets easier past day 2!
Woke up to it raining on my tent! It was misting and droplets were falling from the trees. Cold morning, but after about a mile had to take my jacket off.
Took Highway 1 (back the way I came) to Bodega Highway. While on Bodega Highway a truck passed me kicking up all kinds of crud and I got something stuck in my eye. Had to stop and flush it out with water, I'm sure I looked like a goofball. Tip: wear sunglasses, even when it's foggy.
Got second breakfast at a bakery (Wild Flour Bread) in Freestone. Popular place. Met some people on a supported bike tour coming from Tahoe.
As usual, had a big climb out of the town I stopped in. Nothing stressful though, despite the 6-8% grade according to the map. Took windy back roads after the climb. Much less traffic and amazing views. Lots of vineyards and giant homes up there.
Rode down into Sebastopol and jumped on the Joe Rodota Trail into Santa Rosa. That was super-chill, but turned into a hobo highway in Santa Rosa city limits.
Ate at Franchettis' which Jason Wilson from Dropbox recommended. Was great, but I was out of place as the smelly cyclist, hah!
Went north out of town and took the Mark West Springs, Porter Creek, Petrified Forest route. I didn't pay to see the petrified forest but made a water stop.
Honestly, the climbing wasn't all that bad. The descent was _crazy_ though. Gorgeous and fast. I should have turned Strava on, I must have exceeded my record speed and hit 40 mph. On a touring bike no less! I had a stupid grin on my face the entire way down.
Once in Calistoga, I went to go see Old Faithful. Not _that_ Old Faithful, just the California knock-off. Still neat, but I wouldn't go again.
Got a carnitas super burrito and a six pack of 21st Amendment's Hell or High Watermelon from a taqueria/bodega combo. Dug into that right before writing this. Life is good.
# Day 4
I slept like a baby. Either I'm getting used to camping, or it was the beer.
According to Google Maps:
* 65 miles
* 1100 feet climbing
* 1500 feet descending
__OH YEAH__
The elevation profile shows a nice gentle descent through the Napa Valley. Gonna be a fun day, albeit maybe a long one.
Camp breakfast was oatmeal with peanut butter mixed in. How did I not think of this earlier?
Jumped north to catch Silverado Trail Road. It's a cyclist's paradise: little traffic, rolling hills, shade, decent "bike lane" (shoulder).
Got a pastry and cup of coffee at Napa Valley Coffee Roasting Company in St. Helena. Damn good coffee.
Decent headwind, but still downhill. Awesome view of vineyards. So many cyclists that I'm getting tired of waving at all of them.
Got into Napa at 11:00 on the dot. Nice and cool compared to Marin County! Hung out for a bit and got a couple of slices at Velo Pizzeria, which has the most legit NY style I've had in California.
Riding south of Napa is all along California Highway 29, with some side roads. The shoulder is nice and wide, but it's still stressful having cars fly by at 55 mph.
American Canyon feels like American Dream-land. Reminds me of pictures of neighborhoods from the 1950s.
Vallejo has some decent riding on trails or bike lanes near Napa River. But then it's back on CA-29. Took some side roads before the bridge, but with some annoying hills.
The Carquinez Bridge Trail is a nice, chill ride. San Pablo Ave up from Crockett is the opposite, but no traffic.
This part of the Bay Area is so weird, vastly different towns right next to each other.
* Crockett - feels like a small fishing village
* Rodeo - feels like a dead midwestern town. Seemed like 50% of downtown was empty storefronts.
* Hercules - affluent, new, shiny. Good biking
* Pinole - suburban, standard
Starts getting sketchier the closer you get to Richmond.
Starting to get angry at hills.
Google Maps is an asshole. Why does it think that [Sarah Drive in Pinole][7] is an acceptable bike route? It's like a 10+% grade. Should have followed the Krebs map through Richmond.
San Pablo Dam Road sucks. Horrible, narrow shoulder, I think they consider it a bike lane. I almost flipped catching a wheel in a grate.
Stupid hills.
I could hear the BART trains as I approached the Ohlone Greenway. Oddly soothing after travelling 200 miles. Once I hit the greenway, I played [some punk][8] from my phone and pounded out the last few miles home.
[1]: https://mostateparks.com/park/katy-trail-state-park
[2]: https://goo.gl/maps/yNz9rwYTapm
[3]: https://goo.gl/maps/CvzrM19eSns
[4]: https://goo.gl/maps/AxF8DUd7yMQ2
[5]: https://goo.gl/maps/EP2WcASHwos
[6]: https://www.strava.com/activities/1047975978
[7]: https://goo.gl/maps/dweuGANk6D72
[8]: https://www.youtube.com/watch?v=sxLEuoK0xFs
[9]: https://www.flickr.com/photos/nickpegg/sets/72157682416796352/

date: 2017-09-05
tags:
- linux
- git
title: Visualizing Web Design Evolution Using Git
---
My website here, as of the time of writing this, is still based on a design I made back in 2010, and is rendered using my [static site generator](http://github.com/nickpegg/posty) that I haven't touched in nearly as long. The site's served its purpose pretty well, but it's kind of a mess: it's unreadable on mobile devices, the CSS causes some weird inconsistencies, and the static site generator is nowhere near my current standards. Since this is a personal project, I have the liberty of throwing it all in the trash and starting over (and learning new things along the way!).
---
Since my weakest area is front-end (design, Javascript, CSS that doesn't look like it was written by a crazy person, etc.), I decided to jump in there, doing a couple of experiments. I ended up spending the better part of a weekend fiddling with HTML, playing with a couple of CSS frameworks to see what I liked, and incessantly bugging my friend [Brian](https://bokstuff.com) for help. Eventually I got something that I thought looked pretty good and got the 'final' version checked into git.
So you want to know the cool part about git? If you use it right, you have a bunch of commits containing the full history of what you're building! And with a bit of magic you can come up with something like this:
[![Progress so far](/media/img/design_vis/progress.small.gif)](/media/img/design_vis/progress.gif)
(click on image to see the full size version)
Neat, huh? So how the heck did I manage to pull this off? With some shell scripting wizardry!
Since all of my design is in a single HTML file, `index.html`, it's easy to comb through the history with the `git log` command. And to get the commit hashes to iterate over them, just add in some `grep`, `awk`, and `tac` to reverse-sort them (from oldest to newest).
```
git log -- index.html | grep commit | grep -v initial | awk '{print $2}' | tac
```
Okay, cool, so now we can flip through the history of our `index.html`, now how do we make an animated GIF of it? Well, an animation is just a set of images, so we need to figure out how to turn our HTML into an image a bunch of times. This is where [wkhtmltopdf](https://wkhtmltopdf.org/) comes in handy! The name's kind of a mouthful, but it's a tool that uses WebKit to render HTML and output that to a PDF (or an image). It's super simple to use! Just give it a URL or file name, and then a file to output to, and it does the rest.
```
wkhtmltoimage --width 1920 --height 1080 index.html index${NUM}.png
```
Alright, now we've got a bunch of images, how do we string those together into a GIF? For things like this, I always turn to ImageMagick's `convert` tool, which is the swiss-army-knife of image manipulation. It turns out that if you pass it a bunch of still images and a filename that ends in `.gif`, it just knows to make a GIF! Incredible! Since we want it to slowly go through the changes so you can play spot-the-difference, we add `-delay 100` to the command to tell it to wait 100 ticks (hundredths of a second, so one second) between frames.
```
convert -delay 100 index*.png progress.gif
```
Add in some hackery to remove duplicates (because the rendered page may not change if you change the HTML) and to add a pause of the last frame, and this is what I came up with:
```
#!/bin/bash
# requires that imagemagick and wkhtmltopdf are installed
mkdir -p progress
git checkout master
wkhtmltoimage --crop-w 1920 --crop-h 1080 https://nickpegg.com progress/0000.png
count=0
commits=$(git log | grep commit | grep -v initial | awk '{print $2}' | tac)
for commit in $(echo $commits | xargs); do
git checkout "$commit"
count=$((count + 1))
wkhtmltoimage --crop-w 1920 --crop-h 1080 index.html "progress/$(printf "%04d" "$count").png"
done
# Magical one-liner to remove duplicates
md5sum progress/* | \
sort | \
awk 'BEGIN{lasthash = ""} $1 == lasthash {print $2} {lasthash = $1}' | \
xargs rm
# Add an artificial pause by copying the last file a few times
for i in $(seq $((count+1)) $((count+5))); do
cp progress/$(printf "%04d" "$count").png progress/$(printf "%04d" "$i").png
done
convert -delay 100 progress/*png progress.gif
git checkout master
```

184
site.py Executable file
#!/usr/bin/env python
"""
CLI command to manage my site
Imports posts from Posty, builds YAML files into JSON blobs, etc.
This may be the start of Posty 2.0, who knows.
"""
import click
from dateutil import parser as date_parser
import json
import markdown
import os
import shutil
import sys
import yaml
@click.group()
def cli():
pass
@cli.command()
def init():
"""
Initialize a site in the current directory
"""
for directory in ('_media', '_pages', '_posts'):
if not os.path.exists(directory):
os.mkdir(directory)
@cli.command()
@click.option(
'--path',
help='Path to output JSON file',
default='site.json',
show_default=True
)
def build(path):
"""
Build posts and pages JSON files
Takes all of the YAML in _pages and _posts, combines them into JSON blobs
and writes them out to disk.
"""
if not all([os.path.exists('_pages'), os.path.exists('_posts')]):
raise click.UsageError('You must run `init` first!')
tags = set()
blob = {
'pages': [],
'posts': [],
'tags': [],
}
pages = []
for filename in os.listdir('_pages'):
contents = open(os.path.join('_pages', filename)).read()
_, meta_yaml, body = contents.split("---\n")
page = yaml.load(meta_yaml)
# page['body'] = render(body.strip())
page['body'] = body.strip()
page.setdefault('parent')
pages.append(page)
blob['pages'] = sorted(pages, key=lambda x: x['title'].lower())
posts = []
for filename in os.listdir('_posts'):
contents = open(os.path.join('_posts', filename)).read()
parts = contents.split("---\n")
post = yaml.load(parts[0])
post['date'] = post['date'].isoformat()
post.setdefault('tags', [])
if len(parts[1:]) == 1:
post['blurb'] = parts[1]
post['body'] = parts[1]
elif len(parts[1:]) == 2:
post['blurb'] = parts[1]
post['body'] = "\n".join(parts[1:])
else:
raise click.UsageError("Got too many YAML documents in {}".format(filename))
# post['blurb'] = render(post['blurb'].strip())
# post['body'] = render(post['body'].strip())
post['blurb'] = post['blurb'].strip()
post['body'] = post['body'].strip()
for tag in post['tags']:
tags.add(tag)
posts.append(post)
blob['posts'] = sorted(posts, key=lambda x: x['date'], reverse=True)
blob['tags'] = list(tags)
with open(path, 'w') as f:
f.write(json.dumps(blob))
@cli.command()
@click.option(
'--path',
help='path to the Posty site',
required=True
)
def posty_import(path):
"""
Import posts and pages from an existing Posty 1.x site
All YAML files are read in and in the case of posts, a blurb is generated
if one doesn't already exist by singling out the first paragraph.
"""
if not all([os.path.exists('_pages'), os.path.exists('_posts')]):
raise click.UsageError('You must run `init` first!')
click.echo('Importing site at {} ...'.format(path))
# Simply copy pages over, nothing special to do
for page in os.listdir(os.path.join(path, '_pages')):
orig_path = os.path.join(path, '_pages', page)
new_path = os.path.join('_pages', page)
shutil.copy(orig_path, new_path)
old_posts_path = os.path.join(path, '_posts')
for post in os.listdir(old_posts_path):
old_post = open(os.path.join(old_posts_path, post)).read()
click.echo(post)
new_post = convert_from_posty(old_post)
with open(os.path.join('_posts', post), 'w') as f:
f.write(new_post)
click.echo('Done!')
# Utility functions
def convert_from_posty(old_post):
"""
Converts an old Posty post (a string) into a new-style post with a blurb
and everything. Returns a string containing the three YAML documents.
"""
old_post = old_post.replace("\r\n", "\n")
docs = old_post.split("---\n")
new_post = ''
# Convert the metadata
meta = yaml.load(docs[1])
meta.setdefault('tags', [])
new_post += yaml.dump(meta)
# Create a blurb out of the first paragraph
body = docs[2].strip().split("\n\n")
blurb = body[0]
rest_of_post = "\n\n".join(body[1:])
new_post += "---\n"
new_post += blurb
# Drop in the rest of the post
new_post += "\n---\n"
new_post += rest_of_post
return new_post
def render(thing):
"""
Renders a specific thing using Markdown
"""
return markdown.markdown(thing, extensions=[
'markdown.extensions.fenced_code',
])
if __name__ == '__main__':
cli()