Category Archives: Technology

Installing Node.js on Debian

Node.js is a platform built on Chrome’s JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.

An old version of Node.js is available in the official repository for Debian Sid (unstable).

To build a package and install it on Debian (as root):

apt-get install python g++ make checkinstall
mkdir ~/src && cd $_
wget -N http://nodejs.org/dist/node-latest.tar.gz
tar xzvf node-latest.tar.gz && cd node-v*
./configure
checkinstall #(remove the "v" in front of the version number in the dialog)
dpkg -i node_*

To uninstall:

dpkg -r node

MySQL Database Backup and Restore

To backup or restore a MySQL database, use the following commands:

Backup:

# mysqldump -u db_user -p[password] [database_name] > dump_file.sql 

Restore:

# mysql -u db_user -p[password] [database_name] < dump_file.sql

The dump_file.sql file will include all the information necessary to drop and re-create any table contained therein. However, if you are using this dump/restore mechanism to keep a development database in an “as needed” sync with production, it would probably be best to add a step of entering mysql and dropping/recreating the database. This is because the dump file will not remove any tables in your schema that are not contained in the dump file.

# mysql -u db_user -p
> drop database database_name;
> create database database_name;

Git: Push to Remote Branch

The push command has the form of

git push remote_name source_ref:destination_ref

Example:

git push origin +branch42:branch42

The plus is optional and allows non-fast-forward updates.

Alternate syntax is

git push -f origin branch42

If you omit the destination, it’s implied that it’s the same name. If tracking is set up to a particular branch on the remote, it will go to that one. The -f is short for --force.
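
For example (using the branch42 branch from above), these two commands are equivalent, since the destination defaults to the same name as the source:

git push origin branch42
git push origin branch42:branch42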

Deleting branches has 2 syntaxes, the old:

git push -f origin :branch42

and

git push --delete origin branch42

The first is read as “push nothing into branch42” which deletes it.

One trick is that if you specify . as the remote name, it implies the current repo as the remote. This can be used for updating a local branch without having to check it out:

git push . origin/master:master

will update master without having to checkout master.

git: revert (reset) a single file

I’ve made the leap from subversion to git.  I really like git, but there are a few things that confused me.  One is how to (in svn terms) revert an uncommitted file back to the latest version of the file under source control.

git checkout filename

This will checkout the file from HEAD of the current branch, overwriting your changed file.  Since this is the same command used to checkout branches, if you have a file with the same name as a branch you have to make a slight change.

git checkout -- filename

You can also pull files from any location in your repository like this. See man git-checkout for all the details.

git checkout [<tree-ish>] -- [<paths>...]
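
As a concrete sketch (the path here is just a placeholder), this restores a single file to the version on origin/master without touching anything else:

git checkout origin/master -- path/to/file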

If you need to revert (reset) all your uncommitted work, the command is

git reset --hard

Connecting to a Crashplan instance in the Cloud

CrashPlan is a great “free” utility to automate backing up all of your systems. I have been using it for almost a year to back up all my computers, including my cloud instances.

I had to perform some maintenance on one of my cloud computers and forgot how to connect the admin tool to it. Below I’m outlining the steps so I won’t forget again.

* Create a ssh tunnel to the machine

ssh -L 4200:localhost:4243 host

* Update the local CrashPlanDesktop to use port 4200

vi /usr/local/crashplan/conf/ui.properties (change port to 4200)

* Run CrashPlanDesktop

/usr/local/bin/CrashPlanDesktop

* Don’t forget to revert the changes to the configuration file

vi /usr/local/crashplan/conf/ui.properties (change port back to 4243)

Empty Postfix Mail Queue

Had an issue today where a bug sent over 20,000 messages into my Postfix mail queue. Google then started rate-limiting me, as this issue was basically a DoS attack on my mailbox. After some research, I found a couple of ways to empty the Postfix mail queue.

If you only want to purge the queue of email from one user, as root try:

mailq | tail -n +2 | grep -v '^ *(' | awk 'BEGIN { RS = "" } { if ($8 == "user@example.com" && $9 == "") print $1 }' | tr -d '*!' | postsuper -d -

To purge the entire queue:

postsuper -d ALL

Bulletproof Web Design

Bulletproof Web Design: Improving flexibility and protecting against worst-case scenarios with XHTML and CSS, by Dan Cederholm

Let me start off by saying I’m a programmer, not a designer. Before reading Bulletproof Web Design, I had a basic understanding of just enough CSS structure and concepts to get by. This approach led to very inefficient markup that was hard to read and maintain. What I was missing was a deeper understanding of when to use the different constructs and why. I found this and much more in this book.

Dan Cederholm used a brilliant format in Bulletproof Web Design. Each chapter takes a single concept, illustrated by an example site that employs a traditional “unbulletproof” approach, and explains the pitfalls of the traditional methods. He then deconstructs the page and rebuilds it step by step using semantic XHTML and CSS. The book’s step-by-step approach of modifying only a couple of lines of CSS at a time and explaining the results makes it a quick, yet informative read.

The book starts by explaining why and how to design your site for flexible text sizes. He uses this as the driving point for the rest of the book: how to make your navigation, tables, tabs, lists, widget boxes, rounded corners, and layouts flexible, and how to design your site to be valuable to users who either cannot or choose not to use images and/or style sheets. The book ends with a step-by-step walkthrough of creating a page that ties every concept together.

Reading the book has made me feel much more confident in my CSS usage. I have already seen the payoff as I have used the methodologies in the book to both design new widgets and to refactor existing code. I feel lucky to have stumbled upon it and am looking forward to reading his sequel book, Handcrafted CSS — More Bulletproof Web Design.

ActiveRecord object caching in Ruby on Rails

Yesterday I started playing with basic caching for PriceChirp and tried what I thought would be easy. Boy, was I wrong. It turns out what I was attempting to do is not supported by :memory_store in the development environment. Before moving to :mem_cache_store, I was able to find a workaround. The workaround is outlined below for those who do not have the option of using memcached. However, if you can use memcached, it is by far the better route to take.

My goal was to cut down on the number of database hits by caching the resultant ActiveRecord object in :memory_store.


# models/foo.rb
def self.cool_widget
  Rails.cache.fetch('cache_object_name') { find_by_name('cool_widget') }
end

# config/environments/development.rb
config.action_controller.perform_caching = true
config.cache_store = :memory_store

# controllers/foos_controller.rb
awesome_widget = Foo.cool_widget
logger.info awesome_widget.id

In script/console and in production, this works as you would expect. However, in the Rails development environment, funny things happen. On the first page view, all looks good. On the second page view, as Rails attempts to pull the object out of the cache, it crashes with the following error:

 .../activerecord/lib/active_record/attribute_methods.rb:142:in `create_time_zone_conversion_attribute?'
 

A Google search led me to:

https://rails.lighthouseapp.com/projects/8994/tickets/1290-activerecord-raises-randomly-apparently-a-timezone-issue

Here they discuss known problems with caching ActiveRecord objects on development servers: something about the way the objects are torn down between page loads so that every hit gets a new instance of the controller.

At the very end of the discussion (which took place over the course of a year), I found the following workaround:

http://gist.github.com/251886


# put this in lib/nil_store.rb:
class NilStore < ActiveSupport::Cache::Store
  def initialize(location='//myloc'); end
  def read(name, options = nil); super; end
  def write(name, value, options = nil); super; end
  def delete(name, options = nil); super; end
  def delete_matched(matcher, options = nil); super; end
end

# and then in environments/development.rb:

require 'nil_store'
config.cache_store = NilStore.new

# or you could just turn on development class caching:

config.cache_classes = true

This “nil_store” routine basically prevents caching in the development environment, which makes everything work. In production, I did not notice any problems.

Other workarounds to this bug include not caching the ActiveRecord object or using a different cache store, like memcached.
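
If memcached is an option, the switch is just a config change. A minimal sketch for a Rails 2.x-era app, assuming a memcached server running on the default localhost:11211 (adjust the address for your setup):

# config/environments/production.rb
config.action_controller.perform_caching = true
config.cache_store = :mem_cache_store, 'localhost:11211'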

PriceChirp has improved wishlist support


This week I improved the Amazon wishlist support in PriceChirp. One of the cool features of PriceChirp from the beginning has been how easy it is to import an Amazon wishlist into PriceChirp. The only problem with this feature was that it was an all-or-nothing proposition. Now, we have the ability to view our wishlists in PriceChirp and select which items we wish to import. The old feature of importing everything is still there, but now we have options.

To see this feature in action, log into your PriceChirp account and do a wishlist search. This is done by searching for the email address associated with your Amazon wishlist. Once your wishlists are displayed, you can select “view wishlist” to get a listing of your items. From this page you can easily select which items to import into your personalized tracker.

Have fun!

PriceChirp tracks prices on International Amazon sites


PriceChirp is growing. This week I added support to allow people to track prices and be alerted of changes for all the international Amazon sites. This includes Amazon US, Canada, France, Germany, Japan, and the UK. Just select the Amazon site you are interested in searching, and use PriceChirp like normal. PriceChirp was designed to make it easy to manage products from multiple sites at once, and I’m hoping this design decision will pay off if I add more vendors in the future.